2019-05-01 23:03:17

by Brendan Higgins

Subject: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

## TLDR

I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
5.2.

Shuah, I think you, Greg KH, and I talked off thread, and we agreed
we would merge through your tree when the time came? Am I remembering
correctly?

## Background

This patch set proposes KUnit, a lightweight unit testing and mocking
framework for the Linux kernel.

Unlike Autotest and kselftest, KUnit is a true unit testing framework;
it does not require installing the kernel on a test machine or in a VM
and does not require tests to be written in userspace running on a host
kernel. Additionally, KUnit is fast: from invocation to completion, KUnit
can run several dozen tests in under a second. Currently, the entire
KUnit test suite for KUnit itself runs in under a second from the initial
invocation (build time excluded).

KUnit is heavily inspired by JUnit, Python's unittest.mock, and
Googletest/Googlemock for C++. KUnit provides facilities for defining
unit test cases, grouping related test cases into test suites, providing
common infrastructure for running tests, mocking, spying, and much more.
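
As a rough sketch (the expectation macros and registration helpers are
introduced over the course of this series; add() here just stands in for
whatever code is under test), a minimal KUnit test looks something like
this:

  #include <kunit/test.h>

  /* Code under test; normally this lives elsewhere in the kernel. */
  static int add(int a, int b) { return a + b; }

  static void add_test_basic(struct kunit *test)
  {
          /* Each expectation checks actual behavior against an expected value. */
          KUNIT_EXPECT_EQ(test, 2, add(1, 1));
          KUNIT_EXPECT_EQ(test, 0, add(-1, 1));
  }

  static struct kunit_case add_test_cases[] = {
          KUNIT_CASE(add_test_basic),
          {},
  };

  static struct kunit_module add_test_module = {
          .name = "add-test",
          .test_cases = add_test_cases,
  };
  module_test(add_test_module);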

## What's so special about unit testing?

A unit test is supposed to test a single unit of code in isolation,
hence the name. There should be no dependencies outside the control of
the test; this means no external dependencies, which makes tests orders
of magnitude faster. Likewise, since there are no external dependencies,
there are no hoops to jump through to run the tests. Additionally, this
makes unit tests deterministic: a failing unit test always indicates a
problem. Finally, because unit tests necessarily have finer granularity,
they are able to test all code paths easily, solving the classic problem
of error handling code being difficult to exercise.

## Is KUnit trying to replace other testing frameworks for the kernel?

No. Most existing tests for the Linux kernel are end-to-end tests, which
have their place. A well tested system has lots of unit tests, a
reasonable number of integration tests, and some end-to-end tests. KUnit
is just trying to address the unit test space which is currently not
being addressed.

## More information on KUnit

There is a bunch of documentation near the end of this patch set that
describes how to use KUnit and best practices for writing unit tests.
For convenience I am hosting the compiled docs here:
https://google.github.io/kunit-docs/third_party/kernel/docs/
Additionally for convenience, I have applied these patches to a branch:
https://kunit.googlesource.com/linux/+/kunit/rfc/v5.1-rc7/v1
The repo may be cloned with:
git clone https://kunit.googlesource.com/linux
This patchset is on the kunit/rfc/v5.1-rc7/v1 branch.

## Changes Since Last Version

None. I just rebased the last patchset on v5.1-rc7.

--
2.21.0.593.g511ec345e18-goog


2019-05-01 23:04:17

by Brendan Higgins

Subject: [PATCH v2 02/17] kunit: test: add test resource management API

Create a common API for test managed resources like memory and test
objects. A test will often want to set up infrastructure to be used in
test cases; this could be anything from simply allocating some memory to
setting up an entire driver stack. So define facilities for creating
"test resources", which are managed by the test infrastructure and are
automatically cleaned up at the conclusion of the test.
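
For example (just a sketch; struct foo stands in for whatever state a
test needs), a test case can grab test managed memory and never worry
about freeing it:

  static void foo_test_basic(struct kunit *test)
  {
          struct foo *ctx;

          /* Freed automatically when the test case concludes. */
          ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
          KUNIT_EXPECT_TRUE(test, ctx != NULL);

          /* ... exercise code under test using ctx ... */
  }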

Signed-off-by: Brendan Higgins <[email protected]>
---
include/kunit/test.h | 109 +++++++++++++++++++++++++++++++++++++++++++
kunit/test.c | 95 +++++++++++++++++++++++++++++++++++++
2 files changed, 204 insertions(+)

diff --git a/include/kunit/test.h b/include/kunit/test.h
index 23c2ebedd6dd9..819edd8db4e81 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -12,6 +12,69 @@
#include <linux/types.h>
#include <linux/slab.h>

+struct kunit_resource;
+
+typedef int (*kunit_resource_init_t)(struct kunit_resource *, void *);
+typedef void (*kunit_resource_free_t)(struct kunit_resource *);
+
+/**
+ * struct kunit_resource - represents a *test managed resource*
+ * @allocation: for the user to store arbitrary data.
+ * @free: a user supplied function to free the resource. Populated by
+ * kunit_alloc_resource().
+ *
+ * Represents a *test managed resource*, a resource which will automatically be
+ * cleaned up at the end of a test case.
+ *
+ * Example:
+ *
+ * .. code-block:: c
+ *
+ * struct kunit_kmalloc_params {
+ * size_t size;
+ * gfp_t gfp;
+ * };
+ *
+ * static int kunit_kmalloc_init(struct kunit_resource *res, void *context)
+ * {
+ * struct kunit_kmalloc_params *params = context;
+ * res->allocation = kmalloc(params->size, params->gfp);
+ *
+ * if (!res->allocation)
+ * return -ENOMEM;
+ *
+ * return 0;
+ * }
+ *
+ * static void kunit_kmalloc_free(struct kunit_resource *res)
+ * {
+ * kfree(res->allocation);
+ * }
+ *
+ * void *kunit_kmalloc(struct kunit *test, size_t size, gfp_t gfp)
+ * {
+ * struct kunit_kmalloc_params params;
+ * struct kunit_resource *res;
+ *
+ * params.size = size;
+ * params.gfp = gfp;
+ *
+ * res = kunit_alloc_resource(test, kunit_kmalloc_init,
+ * kunit_kmalloc_free, &params);
+ * if (res)
+ * return res->allocation;
+ * else
+ * return NULL;
+ * }
+ */
+struct kunit_resource {
+ void *allocation;
+ kunit_resource_free_t free;
+
+ /* private: internal use only. */
+ struct list_head node;
+};
+
struct kunit;

/**
@@ -104,6 +167,7 @@ struct kunit {
const char *name; /* Read only after initialization! */
spinlock_t lock; /* Guards all mutable test state. */
bool success; /* Protected by lock. */
+ struct list_head resources; /* Protected by lock. */
void (*vprintk)(const struct kunit *test,
const char *level,
struct va_format *vaf);
@@ -127,6 +191,51 @@ int kunit_run_tests(struct kunit_module *module);
} \
late_initcall(module_kunit_init##module)

+/**
+ * kunit_alloc_resource() - Allocates a *test managed resource*.
+ * @test: The test context object.
+ * @init: a user supplied function to initialize the resource.
+ * @free: a user supplied function to free the resource.
+ * @context: for the user to pass in arbitrary data to the init function.
+ *
+ * Allocates a *test managed resource*, a resource which will automatically be
+ * cleaned up at the end of a test case. See &struct kunit_resource for an
+ * example.
+ */
+struct kunit_resource *kunit_alloc_resource(struct kunit *test,
+ kunit_resource_init_t init,
+ kunit_resource_free_t free,
+ void *context);
+
+void kunit_free_resource(struct kunit *test, struct kunit_resource *res);
+
+/**
+ * kunit_kmalloc() - Like kmalloc() except the allocation is *test managed*.
+ * @test: The test context object.
+ * @size: The size in bytes of the desired memory.
+ * @gfp: flags passed to underlying kmalloc().
+ *
+ * Just like `kmalloc(...)`, except the allocation is managed by the test case
+ * and is automatically cleaned up after the test case concludes. See &struct
+ * kunit_resource for more information.
+ */
+void *kunit_kmalloc(struct kunit *test, size_t size, gfp_t gfp);
+
+/**
+ * kunit_kzalloc() - Just like kunit_kmalloc(), but zeroes the allocation.
+ * @test: The test context object.
+ * @size: The size in bytes of the desired memory.
+ * @gfp: flags passed to underlying kmalloc().
+ *
+ * See kzalloc() and kunit_kmalloc() for more information.
+ */
+static inline void *kunit_kzalloc(struct kunit *test, size_t size, gfp_t gfp)
+{
+ return kunit_kmalloc(test, size, gfp | __GFP_ZERO);
+}
+
+void kunit_cleanup(struct kunit *test);
+
void __printf(3, 4) kunit_printk(const char *level,
const struct kunit *test,
const char *fmt, ...);
diff --git a/kunit/test.c b/kunit/test.c
index 5bf97e2935579..541f9adb1608c 100644
--- a/kunit/test.c
+++ b/kunit/test.c
@@ -66,6 +66,7 @@ static void kunit_vprintk(const struct kunit *test,
int kunit_init_test(struct kunit *test, const char *name)
{
spin_lock_init(&test->lock);
+ INIT_LIST_HEAD(&test->resources);
test->name = name;
test->vprintk = kunit_vprintk;

@@ -93,6 +94,11 @@ static void kunit_run_case_internal(struct kunit *test,
test_case->run_case(test);
}

+static void kunit_case_internal_cleanup(struct kunit *test)
+{
+ kunit_cleanup(test);
+}
+
/*
* Performs post validations and cleanup after a test case was run.
* XXX: Should ONLY BE CALLED AFTER kunit_run_case_internal!
@@ -103,6 +109,8 @@ static void kunit_run_case_cleanup(struct kunit *test,
{
if (module->exit)
module->exit(test);
+
+ kunit_case_internal_cleanup(test);
}

/*
@@ -150,6 +158,93 @@ int kunit_run_tests(struct kunit_module *module)
return 0;
}

+struct kunit_resource *kunit_alloc_resource(struct kunit *test,
+ kunit_resource_init_t init,
+ kunit_resource_free_t free,
+ void *context)
+{
+ struct kunit_resource *res;
+ unsigned long flags;
+ int ret;
+
+ res = kzalloc(sizeof(*res), GFP_KERNEL);
+ if (!res)
+ return NULL;
+
+ ret = init(res, context);
+ if (ret)
+ return NULL;
+
+ res->free = free;
+ spin_lock_irqsave(&test->lock, flags);
+ list_add_tail(&res->node, &test->resources);
+ spin_unlock_irqrestore(&test->lock, flags);
+
+ return res;
+}
+
+void kunit_free_resource(struct kunit *test, struct kunit_resource *res)
+{
+ res->free(res);
+ list_del(&res->node);
+ kfree(res);
+}
+
+struct kunit_kmalloc_params {
+ size_t size;
+ gfp_t gfp;
+};
+
+static int kunit_kmalloc_init(struct kunit_resource *res, void *context)
+{
+ struct kunit_kmalloc_params *params = context;
+
+ res->allocation = kmalloc(params->size, params->gfp);
+ if (!res->allocation)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static void kunit_kmalloc_free(struct kunit_resource *res)
+{
+ kfree(res->allocation);
+}
+
+void *kunit_kmalloc(struct kunit *test, size_t size, gfp_t gfp)
+{
+ struct kunit_kmalloc_params params;
+ struct kunit_resource *res;
+
+ params.size = size;
+ params.gfp = gfp;
+
+ res = kunit_alloc_resource(test,
+ kunit_kmalloc_init,
+ kunit_kmalloc_free,
+ &params);
+
+ if (res)
+ return res->allocation;
+ else
+ return NULL;
+}
+
+void kunit_cleanup(struct kunit *test)
+{
+ struct kunit_resource *resource, *resource_safe;
+ unsigned long flags;
+
+ spin_lock_irqsave(&test->lock, flags);
+ list_for_each_entry_safe(resource,
+ resource_safe,
+ &test->resources,
+ node) {
+ kunit_free_resource(test, resource);
+ }
+ spin_unlock_irqrestore(&test->lock, flags);
+}
+
void kunit_printk(const char *level,
const struct kunit *test,
const char *fmt, ...)
--
2.21.0.593.g511ec345e18-goog

2019-05-01 23:04:34

by Brendan Higgins

Subject: [PATCH v2 03/17] kunit: test: add string_stream a std::stream like string builder

A number of test features need to do pretty complicated string printing
where it may not be possible to rely on a single preallocated string
with parameters.

So provide a library for constructing the string as you go, similar to
C++'s std::string.
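
Usage looks roughly like this (sketch; see the string-stream test added
later in this series):

  struct string_stream *stream = new_string_stream();
  char *str;

  string_stream_add(stream, "Foo");
  string_stream_add(stream, " %s", "bar");

  str = string_stream_get_string(stream); /* "Foo bar" */
  /* ... */
  kfree(str);
  destroy_string_stream(stream);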

Signed-off-by: Brendan Higgins <[email protected]>
---
include/kunit/string-stream.h | 51 ++++++++++++
kunit/Makefile | 3 +-
kunit/string-stream.c | 144 ++++++++++++++++++++++++++++++++++
3 files changed, 197 insertions(+), 1 deletion(-)
create mode 100644 include/kunit/string-stream.h
create mode 100644 kunit/string-stream.c

diff --git a/include/kunit/string-stream.h b/include/kunit/string-stream.h
new file mode 100644
index 0000000000000..567a4629406da
--- /dev/null
+++ b/include/kunit/string-stream.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * C++ stream style string builder used in KUnit for building messages.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <[email protected]>
+ */
+
+#ifndef _KUNIT_STRING_STREAM_H
+#define _KUNIT_STRING_STREAM_H
+
+#include <linux/types.h>
+#include <linux/spinlock.h>
+#include <linux/kref.h>
+#include <stdarg.h>
+
+struct string_stream_fragment {
+ struct list_head node;
+ char *fragment;
+};
+
+struct string_stream {
+ size_t length;
+ struct list_head fragments;
+
+ /* length and fragments are protected by this lock */
+ spinlock_t lock;
+ struct kref refcount;
+};
+
+struct string_stream *new_string_stream(void);
+
+void destroy_string_stream(struct string_stream *stream);
+
+void string_stream_get(struct string_stream *stream);
+
+int string_stream_put(struct string_stream *stream);
+
+int string_stream_add(struct string_stream *this, const char *fmt, ...);
+
+int string_stream_vadd(struct string_stream *this,
+ const char *fmt,
+ va_list args);
+
+char *string_stream_get_string(struct string_stream *this);
+
+void string_stream_clear(struct string_stream *this);
+
+bool string_stream_is_empty(struct string_stream *this);
+
+#endif /* _KUNIT_STRING_STREAM_H */
diff --git a/kunit/Makefile b/kunit/Makefile
index 5efdc4dea2c08..275b565a0e81f 100644
--- a/kunit/Makefile
+++ b/kunit/Makefile
@@ -1 +1,2 @@
-obj-$(CONFIG_KUNIT) += test.o
+obj-$(CONFIG_KUNIT) += test.o \
+ string-stream.o
diff --git a/kunit/string-stream.c b/kunit/string-stream.c
new file mode 100644
index 0000000000000..7018194ecf2fa
--- /dev/null
+++ b/kunit/string-stream.c
@@ -0,0 +1,144 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * C++ stream style string builder used in KUnit for building messages.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <[email protected]>
+ */
+
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <kunit/string-stream.h>
+
+int string_stream_vadd(struct string_stream *this,
+ const char *fmt,
+ va_list args)
+{
+ struct string_stream_fragment *fragment;
+ int len;
+ va_list args_for_counting;
+ unsigned long flags;
+
+ /* Make a copy because `vsnprintf` could change it */
+ va_copy(args_for_counting, args);
+
+ /* Need space for null byte. */
+ len = vsnprintf(NULL, 0, fmt, args_for_counting) + 1;
+
+ va_end(args_for_counting);
+
+ fragment = kmalloc(sizeof(*fragment), GFP_KERNEL);
+ if (!fragment)
+ return -ENOMEM;
+
+ fragment->fragment = kmalloc(len, GFP_KERNEL);
+ if (!fragment->fragment) {
+ kfree(fragment);
+ return -ENOMEM;
+ }
+
+ len = vsnprintf(fragment->fragment, len, fmt, args);
+ spin_lock_irqsave(&this->lock, flags);
+ this->length += len;
+ list_add_tail(&fragment->node, &this->fragments);
+ spin_unlock_irqrestore(&this->lock, flags);
+ return 0;
+}
+
+int string_stream_add(struct string_stream *this, const char *fmt, ...)
+{
+ va_list args;
+ int result;
+
+ va_start(args, fmt);
+ result = string_stream_vadd(this, fmt, args);
+ va_end(args);
+ return result;
+}
+
+void string_stream_clear(struct string_stream *this)
+{
+ struct string_stream_fragment *fragment, *fragment_safe;
+ unsigned long flags;
+
+ spin_lock_irqsave(&this->lock, flags);
+ list_for_each_entry_safe(fragment,
+ fragment_safe,
+ &this->fragments,
+ node) {
+ list_del(&fragment->node);
+ kfree(fragment->fragment);
+ kfree(fragment);
+ }
+ this->length = 0;
+ spin_unlock_irqrestore(&this->lock, flags);
+}
+
+char *string_stream_get_string(struct string_stream *this)
+{
+ struct string_stream_fragment *fragment;
+ size_t buf_len = this->length + 1; /* +1 for null byte. */
+ char *buf;
+ unsigned long flags;
+
+ buf = kzalloc(buf_len, GFP_KERNEL);
+ if (!buf)
+ return NULL;
+
+ spin_lock_irqsave(&this->lock, flags);
+ list_for_each_entry(fragment, &this->fragments, node)
+ strlcat(buf, fragment->fragment, buf_len);
+ spin_unlock_irqrestore(&this->lock, flags);
+
+ return buf;
+}
+
+bool string_stream_is_empty(struct string_stream *this)
+{
+ bool is_empty;
+ unsigned long flags;
+
+ spin_lock_irqsave(&this->lock, flags);
+ is_empty = list_empty(&this->fragments);
+ spin_unlock_irqrestore(&this->lock, flags);
+
+ return is_empty;
+}
+
+void destroy_string_stream(struct string_stream *stream)
+{
+ string_stream_clear(stream);
+ kfree(stream);
+}
+
+static void string_stream_destroy(struct kref *kref)
+{
+ struct string_stream *stream = container_of(kref,
+ struct string_stream,
+ refcount);
+ destroy_string_stream(stream);
+}
+
+struct string_stream *new_string_stream(void)
+{
+ struct string_stream *stream = kzalloc(sizeof(*stream), GFP_KERNEL);
+
+ if (!stream)
+ return NULL;
+
+ INIT_LIST_HEAD(&stream->fragments);
+ spin_lock_init(&stream->lock);
+ kref_init(&stream->refcount);
+ return stream;
+}
+
+void string_stream_get(struct string_stream *stream)
+{
+ kref_get(&stream->refcount);
+}
+
+int string_stream_put(struct string_stream *stream)
+{
+ return kref_put(&stream->refcount, &string_stream_destroy);
+}
+
--
2.21.0.593.g511ec345e18-goog

2019-05-01 23:04:49

by Brendan Higgins

Subject: [PATCH v2 04/17] kunit: test: add kunit_stream a std::stream like logger

A lot of the expectation and assertion infrastructure prints out fairly
complicated test failure messages, so add a C++ style log library for
logging test results.
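
The idea is that a failure message can be built up piece by piece and
then printed all at once at a given log level, roughly like this
(sketch; `value` is just a placeholder):

  struct kunit_stream *stream = kunit_new_stream(test);

  kunit_stream_add(stream, "Expected value to be 42, but\n");
  kunit_stream_add(stream, "value == %d\n", value);

  /* Prints the whole message in one go at the chosen level. */
  kunit_stream_set_level(stream, KERN_ERR);
  kunit_stream_commit(stream);

The stream returned by kunit_new_stream() is a test managed resource, so
it is cleaned up automatically when the test case concludes.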

Signed-off-by: Brendan Higgins <[email protected]>
---
include/kunit/kunit-stream.h | 85 ++++++++++++++++++++
include/kunit/test.h | 2 +
kunit/Makefile | 3 +-
kunit/kunit-stream.c | 149 +++++++++++++++++++++++++++++++++++
kunit/test.c | 8 ++
5 files changed, 246 insertions(+), 1 deletion(-)
create mode 100644 include/kunit/kunit-stream.h
create mode 100644 kunit/kunit-stream.c

diff --git a/include/kunit/kunit-stream.h b/include/kunit/kunit-stream.h
new file mode 100644
index 0000000000000..d457a54fe0100
--- /dev/null
+++ b/include/kunit/kunit-stream.h
@@ -0,0 +1,85 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * C++ stream style string formatter and printer used in KUnit for outputting
+ * KUnit messages.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <[email protected]>
+ */
+
+#ifndef _KUNIT_KUNIT_STREAM_H
+#define _KUNIT_KUNIT_STREAM_H
+
+#include <linux/types.h>
+#include <kunit/string-stream.h>
+
+struct kunit;
+
+/**
+ * struct kunit_stream - a std::stream style string builder.
+ *
+ * A std::stream style string builder. Allows messages to be built up and
+ * printed all at once.
+ */
+struct kunit_stream {
+ /* private: internal use only. */
+ struct kunit *test;
+ spinlock_t lock; /* Guards level. */
+ const char *level;
+ struct string_stream *internal_stream;
+};
+
+/**
+ * kunit_new_stream() - constructs a new &struct kunit_stream.
+ * @test: The test context object.
+ *
+ * Constructs a new test managed &struct kunit_stream.
+ */
+struct kunit_stream *kunit_new_stream(struct kunit *test);
+
+/**
+ * kunit_stream_set_level(): sets the level that string should be printed at.
+ * @this: the stream being operated on.
+ * @level: the print level the stream is set to output to.
+ *
+ * Sets the print level at which the stream outputs.
+ */
+void kunit_stream_set_level(struct kunit_stream *this, const char *level);
+
+/**
+ * kunit_stream_add(): adds the formatted input to the internal buffer.
+ * @this: the stream being operated on.
+ * @fmt: printf style format string to append to stream.
+ *
+ * Appends the formatted string, @fmt, to the internal buffer.
+ */
+void __printf(2, 3) kunit_stream_add(struct kunit_stream *this,
+ const char *fmt, ...);
+
+/**
+ * kunit_stream_append(): appends the contents of @other to @this.
+ * @this: the stream to which @other is appended.
+ * @other: the stream whose contents are appended to @this.
+ *
+ * Appends the contents of @other to @this.
+ */
+void kunit_stream_append(struct kunit_stream *this, struct kunit_stream *other);
+
+/**
+ * kunit_stream_commit(): prints out the internal buffer to the user.
+ * @this: the stream being operated on.
+ *
+ * Outputs the contents of the internal buffer as a kunit_printk formatted
+ * output.
+ */
+void kunit_stream_commit(struct kunit_stream *this);
+
+/**
+ * kunit_stream_clear(): clears the internal buffer.
+ * @this: the stream being operated on.
+ *
+ * Clears the contents of the internal buffer.
+ */
+void kunit_stream_clear(struct kunit_stream *this);
+
+#endif /* _KUNIT_KUNIT_STREAM_H */
diff --git a/include/kunit/test.h b/include/kunit/test.h
index 819edd8db4e81..4668e8a635954 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -11,6 +11,7 @@

#include <linux/types.h>
#include <linux/slab.h>
+#include <kunit/kunit-stream.h>

struct kunit_resource;

@@ -171,6 +172,7 @@ struct kunit {
void (*vprintk)(const struct kunit *test,
const char *level,
struct va_format *vaf);
+ void (*fail)(struct kunit *test, struct kunit_stream *stream);
};

int kunit_init_test(struct kunit *test, const char *name);
diff --git a/kunit/Makefile b/kunit/Makefile
index 275b565a0e81f..6ddc622ee6b1c 100644
--- a/kunit/Makefile
+++ b/kunit/Makefile
@@ -1,2 +1,3 @@
obj-$(CONFIG_KUNIT) += test.o \
- string-stream.o
+ string-stream.o \
+ kunit-stream.o
diff --git a/kunit/kunit-stream.c b/kunit/kunit-stream.c
new file mode 100644
index 0000000000000..93c14eec03844
--- /dev/null
+++ b/kunit/kunit-stream.c
@@ -0,0 +1,149 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * C++ stream style string formatter and printer used in KUnit for outputting
+ * KUnit messages.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <[email protected]>
+ */
+
+#include <kunit/test.h>
+#include <kunit/kunit-stream.h>
+#include <kunit/string-stream.h>
+
+const char *kunit_stream_get_level(struct kunit_stream *this)
+{
+ unsigned long flags;
+ const char *level;
+
+ spin_lock_irqsave(&this->lock, flags);
+ level = this->level;
+ spin_unlock_irqrestore(&this->lock, flags);
+
+ return level;
+}
+
+void kunit_stream_set_level(struct kunit_stream *this, const char *level)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&this->lock, flags);
+ this->level = level;
+ spin_unlock_irqrestore(&this->lock, flags);
+}
+
+void kunit_stream_add(struct kunit_stream *this, const char *fmt, ...)
+{
+ va_list args;
+ struct string_stream *stream = this->internal_stream;
+
+ va_start(args, fmt);
+
+ if (string_stream_vadd(stream, fmt, args) < 0)
+ kunit_err(this->test, "Failed to allocate fragment: %s\n", fmt);
+
+ va_end(args);
+}
+
+void kunit_stream_append(struct kunit_stream *this,
+ struct kunit_stream *other)
+{
+ struct string_stream *other_stream = other->internal_stream;
+ const char *other_content;
+
+ other_content = string_stream_get_string(other_stream);
+
+ if (!other_content) {
+ kunit_err(this->test,
+ "Failed to get string from second argument for appending.\n");
+ return;
+ }
+
+ kunit_stream_add(this, "%s", other_content);
+}
+
+void kunit_stream_clear(struct kunit_stream *this)
+{
+ string_stream_clear(this->internal_stream);
+}
+
+void kunit_stream_commit(struct kunit_stream *this)
+{
+ struct string_stream *stream = this->internal_stream;
+ struct string_stream_fragment *fragment;
+ const char *level;
+ char *buf;
+
+ level = kunit_stream_get_level(this);
+ if (!level) {
+ kunit_err(this->test,
+ "Stream was committed without a specified log level.\n");
+ level = KERN_ERR;
+ kunit_stream_set_level(this, level);
+ }
+
+ buf = string_stream_get_string(stream);
+ if (!buf) {
+ kunit_err(this->test,
+ "Could not allocate buffer, dumping stream:\n");
+ list_for_each_entry(fragment, &stream->fragments, node) {
+ kunit_err(this->test, "%s", fragment->fragment);
+ }
+ kunit_err(this->test, "\n");
+ goto cleanup;
+ }
+
+ kunit_printk(level, this->test, "%s", buf);
+ kfree(buf);
+
+cleanup:
+ kunit_stream_clear(this);
+}
+
+static int kunit_stream_init(struct kunit_resource *res, void *context)
+{
+ struct kunit *test = context;
+ struct kunit_stream *stream;
+
+ stream = kzalloc(sizeof(*stream), GFP_KERNEL);
+ if (!stream)
+ return -ENOMEM;
+ res->allocation = stream;
+ stream->test = test;
+ spin_lock_init(&stream->lock);
+ stream->internal_stream = new_string_stream();
+
+ if (!stream->internal_stream)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static void kunit_stream_free(struct kunit_resource *res)
+{
+ struct kunit_stream *stream = res->allocation;
+
+ if (!string_stream_is_empty(stream->internal_stream)) {
+ kunit_err(stream->test,
+ "End of test case reached with uncommitted stream entries.\n");
+ kunit_stream_commit(stream);
+ }
+
+ destroy_string_stream(stream->internal_stream);
+ kfree(stream);
+}
+
+struct kunit_stream *kunit_new_stream(struct kunit *test)
+{
+ struct kunit_resource *res;
+
+ res = kunit_alloc_resource(test,
+ kunit_stream_init,
+ kunit_stream_free,
+ test);
+
+ if (res)
+ return res->allocation;
+ else
+ return NULL;
+}
diff --git a/kunit/test.c b/kunit/test.c
index 541f9adb1608c..f7575b127e2df 100644
--- a/kunit/test.c
+++ b/kunit/test.c
@@ -63,12 +63,20 @@ static void kunit_vprintk(const struct kunit *test,
"kunit %s: %pV", test->name, vaf);
}

+static void kunit_fail(struct kunit *test, struct kunit_stream *stream)
+{
+ kunit_set_success(test, false);
+ kunit_stream_set_level(stream, KERN_ERR);
+ kunit_stream_commit(stream);
+}
+
int kunit_init_test(struct kunit *test, const char *name)
{
spin_lock_init(&test->lock);
INIT_LIST_HEAD(&test->resources);
test->name = name;
test->vprintk = kunit_vprintk;
+ test->fail = kunit_fail;

return 0;
}
--
2.21.0.593.g511ec345e18-goog

2019-05-01 23:05:12

by Brendan Higgins

Subject: [PATCH v2 01/17] kunit: test: add KUnit test runner core

Add core facilities for defining unit tests; this provides a common way
to define test cases, that is, functions that execute the code under test
and determine whether it behaves as expected. It also provides a way to
group related test cases together into test suites (here we call them
test_modules).

Just define test cases and how to execute them for now; setting
expectations on code will be defined later.
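
Concretely, test cases are grouped and registered roughly as follows
(sketch; the example_* functions are placeholders):

  static struct kunit_case example_test_cases[] = {
          KUNIT_CASE(example_simple_test),
          {},
  };

  static struct kunit_module example_test_module = {
          .name = "example",
          .init = example_test_init, /* optional, runs before each test case */
          .exit = example_test_exit, /* optional, runs after each test case */
          .test_cases = example_test_cases,
  };
  module_test(example_test_module);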

Signed-off-by: Brendan Higgins <[email protected]>
---
include/kunit/test.h | 165 ++++++++++++++++++++++++++++++++++++++++++
kunit/Kconfig | 16 +++++
kunit/Makefile | 1 +
kunit/test.c | 168 +++++++++++++++++++++++++++++++++++++++++++
4 files changed, 350 insertions(+)
create mode 100644 include/kunit/test.h
create mode 100644 kunit/Kconfig
create mode 100644 kunit/Makefile
create mode 100644 kunit/test.c

diff --git a/include/kunit/test.h b/include/kunit/test.h
new file mode 100644
index 0000000000000..23c2ebedd6dd9
--- /dev/null
+++ b/include/kunit/test.h
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Base unit test (KUnit) API.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <[email protected]>
+ */
+
+#ifndef _KUNIT_TEST_H
+#define _KUNIT_TEST_H
+
+#include <linux/types.h>
+#include <linux/slab.h>
+
+struct kunit;
+
+/**
+ * struct kunit_case - represents an individual test case.
+ * @run_case: the function representing the actual test case.
+ * @name: the name of the test case.
+ *
+ * A test case is a function with the signature, ``void (*)(struct kunit *)``
+ * that makes expectations (see KUNIT_EXPECT_TRUE()) about code under test. Each
+ * test case is associated with a &struct kunit_module and will be run after the
+ * module's init function and followed by the module's exit function.
+ *
+ * A test case should be static and should only be created with the KUNIT_CASE()
+ * macro; additionally, every array of test cases should be terminated with an
+ * empty test case.
+ *
+ * Example:
+ *
+ * .. code-block:: c
+ *
+ * void add_test_basic(struct kunit *test)
+ * {
+ * KUNIT_EXPECT_EQ(test, 1, add(1, 0));
+ * KUNIT_EXPECT_EQ(test, 2, add(1, 1));
+ * KUNIT_EXPECT_EQ(test, 0, add(-1, 1));
+ * KUNIT_EXPECT_EQ(test, INT_MAX, add(0, INT_MAX));
+ * KUNIT_EXPECT_EQ(test, -1, add(INT_MAX, INT_MIN));
+ * }
+ *
+ * static struct kunit_case example_test_cases[] = {
+ * KUNIT_CASE(add_test_basic),
+ * {},
+ * };
+ *
+ */
+struct kunit_case {
+ void (*run_case)(struct kunit *test);
+ const char name[256];
+
+ /* private: internal use only. */
+ bool success;
+};
+
+/**
+ * KUNIT_CASE - A helper for creating a &struct kunit_case
+ * @test_name: a reference to a test case function.
+ *
+ * Takes a symbol for a function representing a test case and creates a
+ * &struct kunit_case object from it. See the documentation for
+ * &struct kunit_case for an example on how to use it.
+ */
+#define KUNIT_CASE(test_name) { .run_case = test_name, .name = #test_name }
+
+/**
+ * struct kunit_module - describes a related collection of &struct kunit_case s.
+ * @name: the name of the test. Purely informational.
+ * @init: called before every test case.
+ * @exit: called after every test case.
+ * @test_cases: a null terminated array of test cases.
+ *
+ * A kunit_module is a collection of related &struct kunit_case s, such that
+ * @init is called before every test case and @exit is called after every test
+ * case, similar to the notion of a *test fixture* or a *test class* in other
+ * unit testing frameworks like JUnit or Googletest.
+ *
+ * Every &struct kunit_case must be associated with a kunit_module for KUnit to
+ * run it.
+ */
+struct kunit_module {
+ const char name[256];
+ int (*init)(struct kunit *test);
+ void (*exit)(struct kunit *test);
+ struct kunit_case *test_cases;
+};
+
+/**
+ * struct kunit - represents a running instance of a test.
+ * @priv: for the user to store arbitrary data. Commonly used to pass data created
+ * in the init function (see &struct kunit_module).
+ *
+ * Used to store information about the current context under which the test is
+ * running. Most of this data is private and should only be accessed indirectly
+ * via public functions; the one exception is @priv which can be used by the
+ * test writer to store arbitrary data.
+ */
+struct kunit {
+ void *priv;
+
+ /* private: internal use only. */
+ const char *name; /* Read only after initialization! */
+ spinlock_t lock; /* Guards all mutable test state. */
+ bool success; /* Protected by lock. */
+ void (*vprintk)(const struct kunit *test,
+ const char *level,
+ struct va_format *vaf);
+};
+
+int kunit_init_test(struct kunit *test, const char *name);
+
+int kunit_run_tests(struct kunit_module *module);
+
+/**
+ * module_test() - used to register a &struct kunit_module with KUnit.
+ * @module: a statically allocated &struct kunit_module.
+ *
+ * Registers @module with the test framework. See &struct kunit_module for more
+ * information.
+ */
+#define module_test(module) \
+ static int module_kunit_init##module(void) \
+ { \
+ return kunit_run_tests(&module); \
+ } \
+ late_initcall(module_kunit_init##module)
+
+void __printf(3, 4) kunit_printk(const char *level,
+ const struct kunit *test,
+ const char *fmt, ...);
+
+/**
+ * kunit_info() - Prints an INFO level message associated with the current test.
+ * @test: The test context object.
+ * @fmt: A printk() style format string.
+ *
+ * Prints an info level message associated with the test module being run. Takes
+ * a variable number of format parameters just like printk().
+ */
+#define kunit_info(test, fmt, ...) \
+ kunit_printk(KERN_INFO, test, fmt, ##__VA_ARGS__)
+
+/**
+ * kunit_warn() - Prints a WARN level message associated with the current test.
+ * @test: The test context object.
+ * @fmt: A printk() style format string.
+ *
+ * See kunit_info().
+ */
+#define kunit_warn(test, fmt, ...) \
+ kunit_printk(KERN_WARNING, test, fmt, ##__VA_ARGS__)
+
+/**
+ * kunit_err() - Prints an ERROR level message associated with the current test.
+ * @test: The test context object.
+ * @fmt: A printk() style format string.
+ *
+ * See kunit_info().
+ */
+#define kunit_err(test, fmt, ...) \
+ kunit_printk(KERN_ERR, test, fmt, ##__VA_ARGS__)
+
+#endif /* _KUNIT_TEST_H */
diff --git a/kunit/Kconfig b/kunit/Kconfig
new file mode 100644
index 0000000000000..64480092b2c24
--- /dev/null
+++ b/kunit/Kconfig
@@ -0,0 +1,16 @@
+#
+# KUnit base configuration
+#
+
+menu "KUnit support"
+
+config KUNIT
+ bool "Enable support for unit tests (KUnit)"
+ help
+ Enables support for kernel unit tests (KUnit), a lightweight unit
+ testing and mocking framework for the Linux kernel. These tests are
+ able to be run locally on a developer's workstation without a VM or
+ special hardware. For more information, please see
+ Documentation/kunit/
+
+endmenu
diff --git a/kunit/Makefile b/kunit/Makefile
new file mode 100644
index 0000000000000..5efdc4dea2c08
--- /dev/null
+++ b/kunit/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_KUNIT) += test.o
diff --git a/kunit/test.c b/kunit/test.c
new file mode 100644
index 0000000000000..5bf97e2935579
--- /dev/null
+++ b/kunit/test.c
@@ -0,0 +1,168 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Base unit test (KUnit) API.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <[email protected]>
+ */
+
+#include <linux/sched.h>
+#include <linux/sched/debug.h>
+#include <os.h>
+#include <kunit/test.h>
+
+static bool kunit_get_success(struct kunit *test)
+{
+ unsigned long flags;
+ bool success;
+
+ spin_lock_irqsave(&test->lock, flags);
+ success = test->success;
+ spin_unlock_irqrestore(&test->lock, flags);
+
+ return success;
+}
+
+static void kunit_set_success(struct kunit *test, bool success)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&test->lock, flags);
+ test->success = success;
+ spin_unlock_irqrestore(&test->lock, flags);
+}
+
+static int kunit_vprintk_emit(const struct kunit *test,
+ int level,
+ const char *fmt,
+ va_list args)
+{
+ return vprintk_emit(0, level, NULL, 0, fmt, args);
+}
+
+static int kunit_printk_emit(const struct kunit *test,
+ int level,
+ const char *fmt, ...)
+{
+ va_list args;
+ int ret;
+
+ va_start(args, fmt);
+ ret = kunit_vprintk_emit(test, level, fmt, args);
+ va_end(args);
+
+ return ret;
+}
+
+static void kunit_vprintk(const struct kunit *test,
+ const char *level,
+ struct va_format *vaf)
+{
+ kunit_printk_emit(test,
+ level[1] - '0',
+ "kunit %s: %pV", test->name, vaf);
+}
+
+int kunit_init_test(struct kunit *test, const char *name)
+{
+ spin_lock_init(&test->lock);
+ test->name = name;
+ test->vprintk = kunit_vprintk;
+
+ return 0;
+}
+
+/*
+ * Initializes and runs test case. Does not clean up or do post validations.
+ */
+static void kunit_run_case_internal(struct kunit *test,
+ struct kunit_module *module,
+ struct kunit_case *test_case)
+{
+ int ret;
+
+ if (module->init) {
+ ret = module->init(test);
+ if (ret) {
+ kunit_err(test, "failed to initialize: %d\n", ret);
+ kunit_set_success(test, false);
+ return;
+ }
+ }
+
+ test_case->run_case(test);
+}
+
+/*
+ * Performs post validations and cleanup after a test case was run.
+ * XXX: Should ONLY BE CALLED AFTER kunit_run_case_internal!
+ */
+static void kunit_run_case_cleanup(struct kunit *test,
+ struct kunit_module *module,
+ struct kunit_case *test_case)
+{
+ if (module->exit)
+ module->exit(test);
+}
+
+/*
+ * Performs all logic to run a test case.
+ */
+static bool kunit_run_case(struct kunit *test,
+ struct kunit_module *module,
+ struct kunit_case *test_case)
+{
+ kunit_set_success(test, true);
+
+ kunit_run_case_internal(test, module, test_case);
+ kunit_run_case_cleanup(test, module, test_case);
+
+ return kunit_get_success(test);
+}
+
+int kunit_run_tests(struct kunit_module *module)
+{
+ bool all_passed = true, success;
+ struct kunit_case *test_case;
+ struct kunit test;
+ int ret;
+
+ ret = kunit_init_test(&test, module->name);
+ if (ret)
+ return ret;
+
+ for (test_case = module->test_cases; test_case->run_case; test_case++) {
+ success = kunit_run_case(&test, module, test_case);
+ if (!success)
+ all_passed = false;
+
+ kunit_info(&test,
+ "%s %s\n",
+ test_case->name,
+ success ? "passed" : "failed");
+ }
+
+ if (all_passed)
+ kunit_info(&test, "all tests passed\n");
+ else
+ kunit_info(&test, "one or more tests failed\n");
+
+ return 0;
+}
+
+void kunit_printk(const char *level,
+ const struct kunit *test,
+ const char *fmt, ...)
+{
+ struct va_format vaf;
+ va_list args;
+
+ va_start(args, fmt);
+
+ vaf.fmt = fmt;
+ vaf.va = &args;
+
+ test->vprintk(test, level, &vaf);
+
+ va_end(args);
+}
--
2.21.0.593.g511ec345e18-goog

2019-05-01 23:05:20

by Brendan Higgins

Subject: [PATCH v2 06/17] kbuild: enable building KUnit

Add KUnit to root Kconfig and Makefile allowing it to actually be built.

Signed-off-by: Brendan Higgins <[email protected]>
---
Kconfig | 2 ++
Makefile | 2 +-
2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/Kconfig b/Kconfig
index 48a80beab6853..10428501edb78 100644
--- a/Kconfig
+++ b/Kconfig
@@ -30,3 +30,5 @@ source "crypto/Kconfig"
source "lib/Kconfig"

source "lib/Kconfig.debug"
+
+source "kunit/Kconfig"
diff --git a/Makefile b/Makefile
index 2b99679148dc7..77368f498d84c 100644
--- a/Makefile
+++ b/Makefile
@@ -969,7 +969,7 @@ endif
PHONY += prepare0

ifeq ($(KBUILD_EXTMOD),)
-core-y += kernel/ certs/ mm/ fs/ ipc/ security/ crypto/ block/
+core-y += kernel/ certs/ mm/ fs/ ipc/ security/ crypto/ block/ kunit/

vmlinux-dirs := $(patsubst %/,%,$(filter %/, $(init-y) $(init-m) \
$(core-y) $(core-m) $(drivers-y) $(drivers-m) \
--
2.21.0.593.g511ec345e18-goog

2019-05-01 23:05:32

by Brendan Higgins

Subject: [PATCH v2 07/17] kunit: test: add initial tests

Add a test for string stream along with a simpler example.

Signed-off-by: Brendan Higgins <[email protected]>
---
kunit/Kconfig | 12 ++++++
kunit/Makefile | 4 ++
kunit/example-test.c | 88 ++++++++++++++++++++++++++++++++++++++
kunit/string-stream-test.c | 61 ++++++++++++++++++++++++++
4 files changed, 165 insertions(+)
create mode 100644 kunit/example-test.c
create mode 100644 kunit/string-stream-test.c

diff --git a/kunit/Kconfig b/kunit/Kconfig
index 64480092b2c24..5cb500355c873 100644
--- a/kunit/Kconfig
+++ b/kunit/Kconfig
@@ -13,4 +13,16 @@ config KUNIT
special hardware. For more information, please see
Documentation/kunit/

+config KUNIT_TEST
+ bool "KUnit test for KUnit"
+ depends on KUNIT
+ help
+ Enables KUnit test to test KUnit.
+
+config KUNIT_EXAMPLE_TEST
+ bool "Example test for KUnit"
+ depends on KUNIT
+ help
+ Enables example KUnit test to demo features of KUnit.
+
endmenu
diff --git a/kunit/Makefile b/kunit/Makefile
index 6ddc622ee6b1c..60a9ea6cb4697 100644
--- a/kunit/Makefile
+++ b/kunit/Makefile
@@ -1,3 +1,7 @@
obj-$(CONFIG_KUNIT) += test.o \
string-stream.o \
kunit-stream.o
+
+obj-$(CONFIG_KUNIT_TEST) += string-stream-test.o
+
+obj-$(CONFIG_KUNIT_EXAMPLE_TEST) += example-test.o
diff --git a/kunit/example-test.c b/kunit/example-test.c
new file mode 100644
index 0000000000000..3947dd7c8f922
--- /dev/null
+++ b/kunit/example-test.c
@@ -0,0 +1,88 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Example KUnit test to show how to use KUnit.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <[email protected]>
+ */
+
+#include <kunit/test.h>
+
+/*
+ * This is the most fundamental element of KUnit, the test case. A test case
+ * makes a set of EXPECTATIONs and ASSERTIONs about the behavior of some code; if
+ * any expectations or assertions are not met, the test fails; otherwise, the
+ * test passes.
+ *
+ * In KUnit, a test case is just a function with the signature
+ * `void (*)(struct kunit *)`. `struct kunit` is a context object that stores
+ * information about the current test.
+ */
+static void example_simple_test(struct kunit *test)
+{
+ /*
+ * This is an EXPECTATION; it is how KUnit tests things. When you want
+ * to test a piece of code, you set some expectations about what the
+ * code should do. KUnit then runs the test and verifies that the code's
+ * behavior matched what was expected.
+ */
+ KUNIT_EXPECT_EQ(test, 1 + 1, 2);
+}
+
+/*
+ * This is run once before each test case, see the comment on
+ * example_test_module for more information.
+ */
+static int example_test_init(struct kunit *test)
+{
+ kunit_info(test, "initializing\n");
+
+ return 0;
+}
+
+/*
+ * Here we make a list of all the test cases we want to add to the test module
+ * below.
+ */
+static struct kunit_case example_test_cases[] = {
+ /*
+ * This is a helper to create a test case object from a test case
+ * function; its exact function is not important to understand how to
+ * use KUnit, just know that this is how you associate test cases with a
+ * test module.
+ */
+ KUNIT_CASE(example_simple_test),
+ {},
+};
+
+/*
+ * This defines a suite or grouping of tests.
+ *
+ * Test cases are defined as belonging to the suite by adding them to
+ * `kunit_cases`.
+ *
+ * Often it is desirable to run some function which will set up things which
+ * will be used by every test; this is accomplished with an `init` function
+ * which runs before each test case is invoked. Similarly, an `exit` function
+ * may be specified which runs after every test case and can be used for
+ * cleanup. For clarity, running tests in a test module would behave as follows:
+ *
+ * module.init(test);
+ * module.test_case[0](test);
+ * module.exit(test);
+ * module.init(test);
+ * module.test_case[1](test);
+ * module.exit(test);
+ * ...;
+ */
+static struct kunit_module example_test_module = {
+ .name = "example",
+ .init = example_test_init,
+ .test_cases = example_test_cases,
+};
+
+/*
+ * This registers the above test module telling KUnit that this is a suite of
+ * tests that need to be run.
+ */
+module_test(example_test_module);
diff --git a/kunit/string-stream-test.c b/kunit/string-stream-test.c
new file mode 100644
index 0000000000000..b2a98576797c9
--- /dev/null
+++ b/kunit/string-stream-test.c
@@ -0,0 +1,61 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KUnit test for struct string_stream.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <[email protected]>
+ */
+
+#include <linux/slab.h>
+#include <kunit/test.h>
+#include <kunit/string-stream.h>
+
+static void string_stream_test_get_string(struct kunit *test)
+{
+ struct string_stream *stream = new_string_stream();
+ char *output;
+
+ string_stream_add(stream, "Foo");
+ string_stream_add(stream, " %s", "bar");
+
+ output = string_stream_get_string(stream);
+ KUNIT_EXPECT_STREQ(test, output, "Foo bar");
+ kfree(output);
+ destroy_string_stream(stream);
+}
+
+static void string_stream_test_add_and_clear(struct kunit *test)
+{
+ struct string_stream *stream = new_string_stream();
+ char *output;
+ int i;
+
+ for (i = 0; i < 10; i++)
+ string_stream_add(stream, "A");
+
+ output = string_stream_get_string(stream);
+ KUNIT_EXPECT_STREQ(test, output, "AAAAAAAAAA");
+ KUNIT_EXPECT_EQ(test, stream->length, 10);
+ KUNIT_EXPECT_FALSE(test, string_stream_is_empty(stream));
+ kfree(output);
+
+ string_stream_clear(stream);
+
+ output = string_stream_get_string(stream);
+ KUNIT_EXPECT_STREQ(test, output, "");
+ KUNIT_EXPECT_TRUE(test, string_stream_is_empty(stream));
+ destroy_string_stream(stream);
+}
+
+static struct kunit_case string_stream_test_cases[] = {
+ KUNIT_CASE(string_stream_test_get_string),
+ KUNIT_CASE(string_stream_test_add_and_clear),
+ {}
+};
+
+static struct kunit_module string_stream_test_module = {
+ .name = "string-stream-test",
+ .test_cases = string_stream_test_cases
+};
+module_test(string_stream_test_module);
+
--
2.21.0.593.g511ec345e18-goog

2019-05-01 23:05:54

by Brendan Higgins

Subject: [PATCH v2 09/17] kunit: test: add tests for kunit test abort

Add KUnit tests for the KUnit test abort mechanism (see preceding
commit). Add tests both for the general try catch mechanism as well as
for the non-architecture-specific (generic) implementation.
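
For reference, the try catch API being tested here is used roughly like
this (sketch; example_try()/example_catch() are placeholders, and the
catch function is only invoked if the try function threw via
kunit_try_catch_throw()):

  static void example_try(void *data)
  {
          struct kunit *test = data;

          /* ... the try code; may call kunit_try_catch_throw(try_catch) to bail out ... */
  }

  static void example_catch(void *data)
  {
          /* Only reached if example_try() threw. */
  }

  /* ... given a struct kunit_try_catch *try_catch ... */
  kunit_try_catch_init(try_catch, test, example_try, example_catch);
  kunit_try_catch_run(try_catch, test);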

Signed-off-by: Brendan Higgins <[email protected]>
---
kunit/Makefile | 3 +-
kunit/test-test.c | 135 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 137 insertions(+), 1 deletion(-)
create mode 100644 kunit/test-test.c

diff --git a/kunit/Makefile b/kunit/Makefile
index 1f7680cfa11ad..533355867abd2 100644
--- a/kunit/Makefile
+++ b/kunit/Makefile
@@ -3,6 +3,7 @@ obj-$(CONFIG_KUNIT) += test.o \
kunit-stream.o \
try-catch.o

-obj-$(CONFIG_KUNIT_TEST) += string-stream-test.o
+obj-$(CONFIG_KUNIT_TEST) += test-test.o \
+ string-stream-test.o

obj-$(CONFIG_KUNIT_EXAMPLE_TEST) += example-test.o
diff --git a/kunit/test-test.c b/kunit/test-test.c
new file mode 100644
index 0000000000000..c81ae6efb959f
--- /dev/null
+++ b/kunit/test-test.c
@@ -0,0 +1,135 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KUnit test for core test infrastructure.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <[email protected]>
+ */
+#include <kunit/test.h>
+
+struct kunit_try_catch_test_context {
+ struct kunit_try_catch *try_catch;
+ bool function_called;
+};
+
+void kunit_test_successful_try(void *data)
+{
+ struct kunit *test = data;
+ struct kunit_try_catch_test_context *ctx = test->priv;
+
+ ctx->function_called = true;
+}
+
+void kunit_test_no_catch(void *data)
+{
+ struct kunit *test = data;
+
+ KUNIT_FAIL(test, "Catch should not be called.\n");
+}
+
+static void kunit_test_try_catch_successful_try_no_catch(struct kunit *test)
+{
+ struct kunit_try_catch_test_context *ctx = test->priv;
+ struct kunit_try_catch *try_catch = ctx->try_catch;
+
+ kunit_try_catch_init(try_catch,
+ test,
+ kunit_test_successful_try,
+ kunit_test_no_catch);
+ kunit_try_catch_run(try_catch, test);
+
+ KUNIT_EXPECT_TRUE(test, ctx->function_called);
+}
+
+void kunit_test_unsuccessful_try(void *data)
+{
+ struct kunit *test = data;
+ struct kunit_try_catch_test_context *ctx = test->priv;
+ struct kunit_try_catch *try_catch = ctx->try_catch;
+
+ kunit_try_catch_throw(try_catch);
+ KUNIT_FAIL(test, "This line should never be reached.\n");
+}
+
+void kunit_test_catch(void *data)
+{
+ struct kunit *test = data;
+ struct kunit_try_catch_test_context *ctx = test->priv;
+
+ ctx->function_called = true;
+}
+
+static void kunit_test_try_catch_unsuccessful_try_does_catch(struct kunit *test)
+{
+ struct kunit_try_catch_test_context *ctx = test->priv;
+ struct kunit_try_catch *try_catch = ctx->try_catch;
+
+ kunit_try_catch_init(try_catch,
+ test,
+ kunit_test_unsuccessful_try,
+ kunit_test_catch);
+ kunit_try_catch_run(try_catch, test);
+
+ KUNIT_EXPECT_TRUE(test, ctx->function_called);
+}
+
+static void kunit_test_generic_try_catch_successful_try_no_catch(
+ struct kunit *test)
+{
+ struct kunit_try_catch_test_context *ctx = test->priv;
+ struct kunit_try_catch *try_catch = ctx->try_catch;
+
+ try_catch->test = test;
+ kunit_generic_try_catch_init(try_catch);
+ try_catch->try = kunit_test_successful_try;
+ try_catch->catch = kunit_test_no_catch;
+
+ kunit_try_catch_run(try_catch, test);
+
+ KUNIT_EXPECT_TRUE(test, ctx->function_called);
+}
+
+static void kunit_test_generic_try_catch_unsuccessful_try_does_catch(
+ struct kunit *test)
+{
+ struct kunit_try_catch_test_context *ctx = test->priv;
+ struct kunit_try_catch *try_catch = ctx->try_catch;
+
+ try_catch->test = test;
+ kunit_generic_try_catch_init(try_catch);
+ try_catch->try = kunit_test_unsuccessful_try;
+ try_catch->catch = kunit_test_catch;
+
+ kunit_try_catch_run(try_catch, test);
+
+ KUNIT_EXPECT_TRUE(test, ctx->function_called);
+}
+
+static int kunit_try_catch_test_init(struct kunit *test)
+{
+ struct kunit_try_catch_test_context *ctx;
+
+ ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+ test->priv = ctx;
+
+ ctx->try_catch = kunit_kmalloc(test,
+ sizeof(*ctx->try_catch),
+ GFP_KERNEL);
+
+ return 0;
+}
+
+static struct kunit_case kunit_try_catch_test_cases[] = {
+ KUNIT_CASE(kunit_test_try_catch_successful_try_no_catch),
+ KUNIT_CASE(kunit_test_try_catch_unsuccessful_try_does_catch),
+ KUNIT_CASE(kunit_test_generic_try_catch_successful_try_no_catch),
+ KUNIT_CASE(kunit_test_generic_try_catch_unsuccessful_try_does_catch),
+ {},
+};
+
+static struct kunit_module kunit_try_catch_test_module = {
+ .name = "kunit-try-catch-test",
+ .init = kunit_try_catch_test_init,
+ .test_cases = kunit_try_catch_test_cases,
+};
+module_test(kunit_try_catch_test_module);
--
2.21.0.593.g511ec345e18-goog

2019-05-01 23:06:11

by Brendan Higgins

Subject: [PATCH v2 12/17] kunit: tool: add Python wrappers for running KUnit tests

From: Felix Guo <[email protected]>

The ultimate goal is to create minimal isolated test binaries; in the
meantime we are using UML to provide the infrastructure to run tests, so
define an abstract way to configure and run tests that allows us to
change the context in which tests are built without affecting the user.
This also makes pretty, dynamic error reporting and a lot of other nice
features easier.

kunit_config.py:
- parse .config and Kconfig files.

kunit_kernel.py: provides helper functions to:
- configure the kernel using kunitconfig.
- build the kernel with the appropriate configuration.
- invoke the kernel and stream the output back.

Signed-off-by: Felix Guo <[email protected]>
Signed-off-by: Brendan Higgins <[email protected]>
---
tools/testing/kunit/.gitignore | 3 +
tools/testing/kunit/kunit.py | 78 +++++++++++++++
tools/testing/kunit/kunit_config.py | 66 +++++++++++++
tools/testing/kunit/kunit_kernel.py | 148 ++++++++++++++++++++++++++++
tools/testing/kunit/kunit_parser.py | 119 ++++++++++++++++++++++
5 files changed, 414 insertions(+)
create mode 100644 tools/testing/kunit/.gitignore
create mode 100755 tools/testing/kunit/kunit.py
create mode 100644 tools/testing/kunit/kunit_config.py
create mode 100644 tools/testing/kunit/kunit_kernel.py
create mode 100644 tools/testing/kunit/kunit_parser.py

diff --git a/tools/testing/kunit/.gitignore b/tools/testing/kunit/.gitignore
new file mode 100644
index 0000000000000..c791ff59a37a9
--- /dev/null
+++ b/tools/testing/kunit/.gitignore
@@ -0,0 +1,3 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
\ No newline at end of file
diff --git a/tools/testing/kunit/kunit.py b/tools/testing/kunit/kunit.py
new file mode 100755
index 0000000000000..7413ec7351a20
--- /dev/null
+++ b/tools/testing/kunit/kunit.py
@@ -0,0 +1,78 @@
+#!/usr/bin/python3
+# SPDX-License-Identifier: GPL-2.0
+#
+# A thin wrapper on top of the KUnit Kernel
+#
+# Copyright (C) 2019, Google LLC.
+# Author: Felix Guo <[email protected]>
+# Author: Brendan Higgins <[email protected]>
+
+import argparse
+import sys
+import os
+import time
+
+import kunit_config
+import kunit_kernel
+import kunit_parser
+
+parser = argparse.ArgumentParser(description='Runs KUnit tests.')
+
+parser.add_argument('--raw_output', help='don\'t format output from kernel',
+ action='store_true')
+
+parser.add_argument('--timeout', help='maximum number of seconds to allow for '
+ 'all tests to run. This does not include time taken to '
+ 'build the tests.', type=int, default=300,
+ metavar='timeout')
+
+parser.add_argument('--jobs',
+ help='As in the make command, "Specifies the number of '
+ 'jobs (commands) to run simultaneously."',
+ type=int, default=8, metavar='jobs')
+
+parser.add_argument('--build_dir',
+ help='As in the make command, it specifies the build '
+ 'directory.',
+ type=str, default=None, metavar='build_dir')
+
+cli_args = parser.parse_args()
+
+linux = kunit_kernel.LinuxSourceTree()
+
+build_dir = None
+if cli_args.build_dir:
+ build_dir = cli_args.build_dir
+
+config_start = time.time()
+success = linux.build_reconfig(build_dir)
+config_end = time.time()
+if not success:
+ quit()
+
+kunit_parser.print_with_timestamp('Building KUnit Kernel ...')
+
+build_start = time.time()
+
+success = linux.build_um_kernel(jobs=cli_args.jobs, build_dir=build_dir)
+build_end = time.time()
+if not success:
+ quit()
+
+kunit_parser.print_with_timestamp('Starting KUnit Kernel ...')
+test_start = time.time()
+
+if cli_args.raw_output:
+ kunit_parser.raw_output(linux.run_kernel(timeout=cli_args.timeout,
+ build_dir=build_dir))
+else:
+ kunit_parser.parse_run_tests(linux.run_kernel(timeout=cli_args.timeout,
+ build_dir=build_dir))
+
+test_end = time.time()
+
+kunit_parser.print_with_timestamp((
+ "Elapsed time: %.3fs total, %.3fs configuring, %.3fs " +
+ "building, %.3fs running.\n") % (test_end - config_start,
+ config_end - config_start, build_end - build_start,
+ test_end - test_start))
diff --git a/tools/testing/kunit/kunit_config.py b/tools/testing/kunit/kunit_config.py
new file mode 100644
index 0000000000000..167f47d9ab8e4
--- /dev/null
+++ b/tools/testing/kunit/kunit_config.py
@@ -0,0 +1,66 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Builds a .config from a kunitconfig.
+#
+# Copyright (C) 2019, Google LLC.
+# Author: Felix Guo <[email protected]>
+# Author: Brendan Higgins <[email protected]>
+
+import collections
+import re
+
+CONFIG_IS_NOT_SET_PATTERN = r'^# CONFIG_\w+ is not set$'
+CONFIG_PATTERN = r'^CONFIG_\w+=\S+$'
+
+KconfigEntryBase = collections.namedtuple('KconfigEntry', ['raw_entry'])
+
+
+class KconfigEntry(KconfigEntryBase):
+
+ def __str__(self) -> str:
+ return self.raw_entry
+
+
+class KconfigParseError(Exception):
+ """Error parsing Kconfig defconfig or .config."""
+
+
+class Kconfig(object):
+ """Represents defconfig or .config specified using the Kconfig language."""
+
+ def __init__(self):
+ self._entries = []
+
+ def entries(self):
+ return set(self._entries)
+
+ def add_entry(self, entry: KconfigEntry) -> None:
+ self._entries.append(entry)
+
+ def is_subset_of(self, other: "Kconfig") -> bool:
+ return self.entries().issubset(other.entries())
+
+ def write_to_file(self, path: str) -> None:
+ with open(path, 'w') as f:
+ for entry in self.entries():
+ f.write(str(entry) + '\n')
+
+ def parse_from_string(self, blob: str) -> None:
+ """Parses a string containing KconfigEntrys and populates this Kconfig."""
+ self._entries = []
+ is_not_set_matcher = re.compile(CONFIG_IS_NOT_SET_PATTERN)
+ config_matcher = re.compile(CONFIG_PATTERN)
+ for line in blob.split('\n'):
+ line = line.strip()
+ if not line:
+ continue
+ elif config_matcher.match(line) or is_not_set_matcher.match(line):
+ self._entries.append(KconfigEntry(line))
+ elif line[0] == '#':
+ continue
+ else:
+ raise KconfigParseError('Failed to parse: ' + line)
+
+ def read_from_file(self, path: str) -> None:
+ with open(path, 'r') as f:
+ self.parse_from_string(f.read())
diff --git a/tools/testing/kunit/kunit_kernel.py b/tools/testing/kunit/kunit_kernel.py
new file mode 100644
index 0000000000000..07c0abf2f47df
--- /dev/null
+++ b/tools/testing/kunit/kunit_kernel.py
@@ -0,0 +1,148 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Runs UML kernel, collects output, and handles errors.
+#
+# Copyright (C) 2019, Google LLC.
+# Author: Felix Guo <[email protected]>
+# Author: Brendan Higgins <[email protected]>
+
+
+import logging
+import subprocess
+import os
+
+import kunit_config
+
+KCONFIG_PATH = '.config'
+
+class ConfigError(Exception):
+ """Represents an error trying to configure the Linux kernel."""
+
+
+class BuildError(Exception):
+ """Represents an error trying to build the Linux kernel."""
+
+
+class LinuxSourceTreeOperations(object):
+ """An abstraction over command line operations performed on a source tree."""
+
+ def make_mrproper(self):
+ try:
+ subprocess.check_output(['make', 'mrproper'])
+ except OSError as e:
+ raise ConfigError('Could not call make command: ' + str(e))
+ except subprocess.CalledProcessError as e:
+ raise ConfigError(e.output)
+
+ def make_olddefconfig(self, build_dir):
+ command = ['make', 'ARCH=um', 'olddefconfig']
+ if build_dir:
+ command += ['O=' + build_dir]
+ try:
+ subprocess.check_output(command)
+ except OSError as e:
+ raise ConfigError('Could not call make command: ' + str(e))
+ except subprocess.CalledProcessError as e:
+ raise ConfigError(e.output)
+
+ def make(self, jobs, build_dir):
+ command = ['make', 'ARCH=um', '--jobs=' + str(jobs)]
+ if build_dir:
+ command += ['O=' + build_dir]
+ try:
+ subprocess.check_output(command)
+ except OSError as e:
+ raise BuildError('Could not call make: ' + str(e))
+ except subprocess.CalledProcessError as e:
+ raise BuildError(e.output)
+
+ def linux_bin(self, params, timeout, build_dir):
+ """Runs the Linux UML binary. Must be named 'linux'."""
+ linux_bin = './linux'
+ if build_dir:
+ linux_bin = os.path.join(build_dir, 'linux')
+ process = subprocess.Popen(
+ [linux_bin] + params,
+ stdin=subprocess.PIPE,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE)
+ process.wait(timeout=timeout)
+ return process
+
+
+def get_kconfig_path(build_dir):
+ kconfig_path = KCONFIG_PATH
+ if build_dir:
+ kconfig_path = os.path.join(build_dir, KCONFIG_PATH)
+ return kconfig_path
+
+class LinuxSourceTree(object):
+ """Represents a Linux kernel source tree with KUnit tests."""
+
+ def __init__(self):
+ self._kconfig = kunit_config.Kconfig()
+ self._kconfig.read_from_file('kunitconfig')
+ self._ops = LinuxSourceTreeOperations()
+
+ def clean(self):
+ try:
+ self._ops.make_mrproper()
+ except ConfigError as e:
+ logging.error(e)
+ return False
+ return True
+
+ def build_config(self, build_dir):
+ kconfig_path = get_kconfig_path(build_dir)
+ if build_dir and not os.path.exists(build_dir):
+ os.mkdir(build_dir)
+ self._kconfig.write_to_file(kconfig_path)
+ try:
+ self._ops.make_olddefconfig(build_dir)
+ except ConfigError as e:
+ logging.error(e)
+ return False
+ validated_kconfig = kunit_config.Kconfig()
+ validated_kconfig.read_from_file(kconfig_path)
+ if not self._kconfig.is_subset_of(validated_kconfig):
+ logging.error('Provided Kconfig is not contained in validated .config!')
+ return False
+ return True
+
+ def build_reconfig(self, build_dir):
+ """Creates a new .config if it is not a subset of the kunitconfig."""
+ kconfig_path = get_kconfig_path(build_dir)
+ if os.path.exists(kconfig_path):
+ existing_kconfig = kunit_config.Kconfig()
+ existing_kconfig.read_from_file(kconfig_path)
+ if not self._kconfig.is_subset_of(existing_kconfig):
+ print('Regenerating .config ...')
+ os.remove(kconfig_path)
+ return self.build_config(build_dir)
+ else:
+ return True
+ else:
+ print('Generating .config ...')
+ return self.build_config(build_dir)
+
+ def build_um_kernel(self, jobs, build_dir):
+ try:
+ self._ops.make_olddefconfig(build_dir)
+ self._ops.make(jobs, build_dir)
+ except (ConfigError, BuildError) as e:
+ logging.error(e)
+ return False
+ used_kconfig = kunit_config.Kconfig()
+ used_kconfig.read_from_file(get_kconfig_path(build_dir))
+ if not self._kconfig.is_subset_of(used_kconfig):
+ logging.error('Provided Kconfig is not contained in final config!')
+ return False
+ return True
+
+ def run_kernel(self, args=None, timeout=None, build_dir=None):
+ args = (args or []) + ['mem=256M']
+ process = self._ops.linux_bin(args, timeout, build_dir)
+ with open('test.log', 'w') as f:
+ for line in process.stdout:
+ f.write(line.rstrip().decode('ascii') + '\n')
+ yield line.rstrip().decode('ascii')
diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
new file mode 100644
index 0000000000000..6c81d4dcfabb5
--- /dev/null
+++ b/tools/testing/kunit/kunit_parser.py
@@ -0,0 +1,119 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Parses test results from a kernel dmesg log.
+#
+# Copyright (C) 2019, Google LLC.
+# Author: Felix Guo <[email protected]>
+# Author: Brendan Higgins <[email protected]>
+
+import re
+from datetime import datetime
+
+kunit_start_re = re.compile('printk: console .* enabled')
+kunit_end_re = re.compile('List of all partitions:')
+
+def isolate_kunit_output(kernel_output):
+ started = False
+ for line in kernel_output:
+ if kunit_start_re.match(line):
+ started = True
+ elif kunit_end_re.match(line):
+ break
+ elif started:
+ yield line
+
+def raw_output(kernel_output):
+ for line in kernel_output:
+ print(line)
+
+DIVIDER = "=" * 30
+
+RESET = '\033[0;0m'
+
+def red(text):
+ return '\033[1;31m' + text + RESET
+
+def yellow(text):
+ return '\033[1;33m' + text + RESET
+
+def green(text):
+ return '\033[1;32m' + text + RESET
+
+def print_with_timestamp(message):
+ print('[%s] %s' % (datetime.now().strftime('%H:%M:%S'), message))
+
+def print_log(log):
+ for m in log:
+ print_with_timestamp(m)
+
+def parse_run_tests(kernel_output):
+ test_case_output = re.compile('^kunit .*?: (.*)$')
+
+ test_module_success = re.compile('^kunit .*: all tests passed')
+ test_module_fail = re.compile('^kunit .*: one or more tests failed')
+
+ test_case_success = re.compile('^kunit (.*): (.*) passed')
+ test_case_fail = re.compile('^kunit (.*): (.*) failed')
+ test_case_crash = re.compile('^kunit (.*): (.*) crashed')
+
+ total_tests = set()
+ failed_tests = set()
+ crashed_tests = set()
+
+ def get_test_name(match):
+ return match.group(1) + ":" + match.group(2)
+
+ current_case_log = []
+ def end_one_test(match, log):
+ log.clear()
+ total_tests.add(get_test_name(match))
+
+ print_with_timestamp(DIVIDER)
+ for line in isolate_kunit_output(kernel_output):
+ # Ignore module output:
+ if (test_module_success.match(line) or
+ test_module_fail.match(line)):
+ print_with_timestamp(DIVIDER)
+ continue
+
+ match = re.match(test_case_success, line)
+ if match:
+ print_with_timestamp(green("[PASSED] ") +
+ get_test_name(match))
+ end_one_test(match, current_case_log)
+ continue
+
+ match = re.match(test_case_fail, line)
+ # Crashed tests will report as both failed and crashed. We only
+ # want to show and count it once.
+ if match and get_test_name(match) not in crashed_tests:
+ failed_tests.add(get_test_name(match))
+ print_with_timestamp(red("[FAILED] " +
+ get_test_name(match)))
+ print_log(map(yellow, current_case_log))
+ print_with_timestamp("")
+ end_one_test(match, current_case_log)
+ continue
+
+ match = re.match(test_case_crash, line)
+ if match:
+ crashed_tests.add(get_test_name(match))
+ print_with_timestamp(yellow("[CRASH] " +
+ get_test_name(match)))
+ print_log(current_case_log)
+ print_with_timestamp("")
+ end_one_test(match, current_case_log)
+ continue
+
+ # Strip off the `kunit module-name:` prefix
+ match = re.match(test_case_output, line)
+ if match:
+ current_case_log.append(match.group(1))
+ else:
+ current_case_log.append(line)
+
+ fmt = green if (len(failed_tests) + len(crashed_tests) == 0) else red
+ print_with_timestamp(
+ fmt("Testing complete. %d tests run. %d failed. %d crashed." %
+ (len(total_tests), len(failed_tests), len(crashed_tests))))
+
--
2.21.0.593.g511ec345e18-goog
2019-05-01 23:06:18

by Brendan Higgins

[permalink] [raw]
Subject: [PATCH v2 05/17] kunit: test: add the concept of expectations

Add support for expectations, which allow properties to be specified and
then verified in tests.
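
For illustration, usage in a test might look roughly like the following
(a minimal sketch; add() and the "add-test" module are made up for this
example and are not part of this patch):

  static int add(int left, int right)
  {
          return left + right;
  }

  static void add_test_basic(struct kunit *test)
  {
          /* Failed expectations are reported, but the case keeps running. */
          KUNIT_EXPECT_EQ(test, 3, add(1, 2));
          KUNIT_EXPECT_TRUE(test, add(-1, 1) == 0);
  }

  static struct kunit_case add_test_cases[] = {
          KUNIT_CASE(add_test_basic),
          {},
  };

  static struct kunit_module add_test_module = {
          .name = "add-test",
          .test_cases = add_test_cases,
  };
  module_test(add_test_module);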

Signed-off-by: Brendan Higgins <[email protected]>
---
include/kunit/test.h | 419 +++++++++++++++++++++++++++++++++++++++++++
kunit/test.c | 34 ++++
2 files changed, 453 insertions(+)

diff --git a/include/kunit/test.h b/include/kunit/test.h
index 4668e8a635954..e441270561ece 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -273,4 +273,423 @@ void __printf(3, 4) kunit_printk(const char *level,
#define kunit_err(test, fmt, ...) \
kunit_printk(KERN_ERR, test, fmt, ##__VA_ARGS__)

+static inline struct kunit_stream *kunit_expect_start(struct kunit *test,
+ const char *file,
+ const char *line)
+{
+ struct kunit_stream *stream = kunit_new_stream(test);
+
+ kunit_stream_add(stream, "EXPECTATION FAILED at %s:%s\n\t", file, line);
+
+ return stream;
+}
+
+static inline void kunit_expect_end(struct kunit *test,
+ bool success,
+ struct kunit_stream *stream)
+{
+ if (!success)
+ test->fail(test, stream);
+ else
+ kunit_stream_clear(stream);
+}
+
+#define KUNIT_EXPECT_START(test) \
+ kunit_expect_start(test, __FILE__, __stringify(__LINE__))
+
+#define KUNIT_EXPECT_END(test, success, stream) \
+ kunit_expect_end(test, success, stream)
+
+#define KUNIT_EXPECT_MSG(test, success, message, fmt, ...) do { \
+ struct kunit_stream *__stream = KUNIT_EXPECT_START(test); \
+ \
+ kunit_stream_add(__stream, message); \
+ kunit_stream_add(__stream, fmt, ##__VA_ARGS__); \
+ KUNIT_EXPECT_END(test, success, __stream); \
+} while (0)
+
+#define KUNIT_EXPECT(test, success, message) do { \
+ struct kunit_stream *__stream = KUNIT_EXPECT_START(test); \
+ \
+ kunit_stream_add(__stream, message); \
+ KUNIT_EXPECT_END(test, success, __stream); \
+} while (0)
+
+/**
+ * KUNIT_SUCCEED() - A no-op expectation. Only exists for code clarity.
+ * @test: The test context object.
+ *
+ * The opposite of KUNIT_FAIL(), it is an expectation that cannot fail. In other
+ * words, it does nothing and only exists for code clarity. See
+ * KUNIT_EXPECT_TRUE() for more information.
+ */
+#define KUNIT_SUCCEED(test) do {} while (0)
+
+/**
+ * KUNIT_FAIL() - Always causes a test to fail when evaluated.
+ * @test: The test context object.
+ * @fmt: an informational message to be printed when the assertion is made.
+ * @...: string format arguments.
+ *
+ * The opposite of KUNIT_SUCCEED(), it is an expectation that always fails. In
+ * other words, it always results in a failed expectation, and consequently
+ * always causes the test case to fail when evaluated. See KUNIT_EXPECT_TRUE()
+ * for more information.
+ */
+#define KUNIT_FAIL(test, fmt, ...) do { \
+ struct kunit_stream *__stream = KUNIT_EXPECT_START(test); \
+ \
+ kunit_stream_add(__stream, fmt, ##__VA_ARGS__); \
+ KUNIT_EXPECT_END(test, false, __stream); \
+} while (0)
+
+/**
+ * KUNIT_EXPECT_TRUE() - Causes a test failure when the expression is not true.
+ * @test: The test context object.
+ * @condition: an arbitrary boolean expression. The test fails when this does
+ * not evaluate to true.
+ *
+ * This and expectations of the form `KUNIT_EXPECT_*` will cause the test case
+ * to fail when the specified condition is not met; however, it will not prevent
+ * the test case from continuing to run; this is otherwise known as an
+ * *expectation failure*.
+ */
+#define KUNIT_EXPECT_TRUE(test, condition) \
+ KUNIT_EXPECT(test, (condition), \
+ "Expected " #condition " is true, but is false.\n")
+
+#define KUNIT_EXPECT_TRUE_MSG(test, condition, fmt, ...) \
+ KUNIT_EXPECT_MSG(test, (condition), \
+ "Expected " #condition " is true, but is false.\n",\
+ fmt, ##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_FALSE() - Causes a test failure when the expression is not false.
+ * @test: The test context object.
+ * @condition: an arbitrary boolean expression. The test fails when this does
+ * not evaluate to false.
+ *
+ * Sets an expectation that @condition evaluates to false. See
+ * KUNIT_EXPECT_TRUE() for more information.
+ */
+#define KUNIT_EXPECT_FALSE(test, condition) \
+ KUNIT_EXPECT(test, !(condition), \
+ "Expected " #condition " is false, but is true.")
+
+#define KUNIT_EXPECT_FALSE_MSG(test, condition, fmt, ...) \
+ KUNIT_EXPECT_MSG(test, !(condition), \
+ "Expected " #condition " is false, but is true.\n",\
+ fmt, ##__VA_ARGS__)
+
+void kunit_expect_binary_msg(struct kunit *test,
+ long long left, const char *left_name,
+ long long right, const char *right_name,
+ bool compare_result,
+ const char *compare_name,
+ const char *file,
+ const char *line,
+ const char *fmt, ...);
+
+static inline void kunit_expect_binary(struct kunit *test,
+ long long left, const char *left_name,
+ long long right, const char *right_name,
+ bool compare_result,
+ const char *compare_name,
+ const char *file,
+ const char *line)
+{
+ struct kunit_stream *stream = kunit_expect_start(test, file, line);
+
+ kunit_stream_add(stream,
+ "Expected %s %s %s, but\n",
+ left_name, compare_name, right_name);
+ kunit_stream_add(stream, "\t\t%s == %lld\n", left_name, left);
+ kunit_stream_add(stream, "\t\t%s == %lld\n", right_name, right);
+
+ kunit_expect_end(test, compare_result, stream);
+}
+
+/*
+ * A factory macro for defining the expectations for the basic comparisons
+ * defined for the built in types.
+ *
+ * Unfortunately, there is no common type that all types can be promoted to for
+ * which all the binary operators behave the same way as for the actual types
+ * (for example, there is no type that long long and unsigned long long can
+ * both be cast to where the comparison result is preserved for all values). So
+ * the best we can do is do the comparison in the original types and then coerce
+ * everything to long long for printing; this way, the comparison behaves
+ * correctly and the printed out value usually makes sense without
+ * interpretation, but can always be interpreted to figure out the actual
+ * value.
+ */
+#define KUNIT_EXPECT_BINARY(test, left, condition, right) do { \
+ typeof(left) __left = (left); \
+ typeof(right) __right = (right); \
+ kunit_expect_binary(test, \
+ (long long) __left, #left, \
+ (long long) __right, #right, \
+ __left condition __right, #condition, \
+ __FILE__, __stringify(__LINE__)); \
+} while (0)
+
+#define KUNIT_EXPECT_BINARY_MSG(test, left, condition, right, fmt, ...) do { \
+ typeof(left) __left = (left); \
+ typeof(right) __right = (right); \
+ kunit_expect_binary_msg(test, \
+ (long long) __left, #left, \
+ (long long) __right, #right, \
+ __left condition __right, #condition, \
+ __FILE__, __stringify(__LINE__), \
+ fmt, ##__VA_ARGS__); \
+} while (0)
+
+/**
+ * KUNIT_EXPECT_EQ() - Sets an expectation that @left and @right are equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an expectation that the values that @left and @right evaluate to are
+ * equal. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, (@left) == (@right)). See KUNIT_EXPECT_TRUE() for
+ * more information.
+ */
+#define KUNIT_EXPECT_EQ(test, left, right) \
+ KUNIT_EXPECT_BINARY(test, left, ==, right)
+
+#define KUNIT_EXPECT_EQ_MSG(test, left, right, fmt, ...) \
+ KUNIT_EXPECT_BINARY_MSG(test, \
+ left, \
+ ==, \
+ right, \
+ fmt, \
+ ##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_NE() - An expectation that @left and @right are not equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an expectation that the values that @left and @right evaluate to are not
+ * equal. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, (@left) != (@right)). See KUNIT_EXPECT_TRUE() for
+ * more information.
+ */
+#define KUNIT_EXPECT_NE(test, left, right) \
+ KUNIT_EXPECT_BINARY(test, left, !=, right)
+
+#define KUNIT_EXPECT_NE_MSG(test, left, right, fmt, ...) \
+ KUNIT_EXPECT_BINARY_MSG(test, \
+ left, \
+ !=, \
+ right, \
+ fmt, \
+ ##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_LT() - An expectation that @left is less than @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an expectation that the value that @left evaluates to is less than the
+ * value that @right evaluates to. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, (@left) < (@right)). See KUNIT_EXPECT_TRUE() for
+ * more information.
+ */
+#define KUNIT_EXPECT_LT(test, left, right) \
+ KUNIT_EXPECT_BINARY(test, left, <, right)
+
+#define KUNIT_EXPECT_LT_MSG(test, left, right, fmt, ...) \
+ KUNIT_EXPECT_BINARY_MSG(test, \
+ left, \
+ <, \
+ right, \
+ fmt, \
+ ##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_LE() - Expects that @left is less than or equal to @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an expectation that the value that @left evaluates to is less than or
+ * equal to the value that @right evaluates to. Semantically this is equivalent
+ * to KUNIT_EXPECT_TRUE(@test, (@left) <= (@right)). See KUNIT_EXPECT_TRUE() for
+ * more information.
+ */
+#define KUNIT_EXPECT_LE(test, left, right) \
+ KUNIT_EXPECT_BINARY(test, left, <=, right)
+
+#define KUNIT_EXPECT_LE_MSG(test, left, right, fmt, ...) \
+ KUNIT_EXPECT_BINARY_MSG(test, \
+ left, \
+ <=, \
+ right, \
+ fmt, \
+ ##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_GT() - An expectation that @left is greater than @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an expectation that the value that @left evaluates to is greater than
+ * the value that @right evaluates to. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, (@left) > (@right)). See KUNIT_EXPECT_TRUE() for
+ * more information.
+ */
+#define KUNIT_EXPECT_GT(test, left, right) \
+ KUNIT_EXPECT_BINARY(test, left, >, right)
+
+#define KUNIT_EXPECT_GT_MSG(test, left, right, fmt, ...) \
+ KUNIT_EXPECT_BINARY_MSG(test, \
+ left, \
+ >, \
+ right, \
+ fmt, \
+ ##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_GE() - Expects that @left is greater than or equal to @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an expectation that the value that @left evaluates to is greater than or
+ * equal to the value that @right evaluates to. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, (@left) >= (@right)). See KUNIT_EXPECT_TRUE() for
+ * more information.
+ */
+#define KUNIT_EXPECT_GE(test, left, right) \
+ KUNIT_EXPECT_BINARY(test, left, >=, right)
+
+#define KUNIT_EXPECT_GE_MSG(test, left, right, fmt, ...) \
+ KUNIT_EXPECT_BINARY_MSG(test, \
+ left, \
+ >=, \
+ right, \
+ fmt, \
+ ##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_STREQ() - Expects that strings @left and @right are equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a null terminated string.
+ * @right: an arbitrary expression that evaluates to a null terminated string.
+ *
+ * Sets an expectation that the values that @left and @right evaluate to are
+ * equal. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, !strcmp((@left), (@right))). See KUNIT_EXPECT_TRUE()
+ * for more information.
+ */
+#define KUNIT_EXPECT_STREQ(test, left, right) do { \
+ struct kunit_stream *__stream = KUNIT_EXPECT_START(test); \
+ typeof(left) __left = (left); \
+ typeof(right) __right = (right); \
+ \
+ kunit_stream_add(__stream, "Expected " #left " == " #right ", but\n"); \
+ kunit_stream_add(__stream, "\t\t%s == %s\n", #left, __left); \
+ kunit_stream_add(__stream, "\t\t%s == %s\n", #right, __right); \
+ \
+ KUNIT_EXPECT_END(test, !strcmp(left, right), __stream); \
+} while (0)
+
+#define KUNIT_EXPECT_STREQ_MSG(test, left, right, fmt, ...) do { \
+ struct kunit_stream *__stream = KUNIT_EXPECT_START(test); \
+ typeof(left) __left = (left); \
+ typeof(right) __right = (right); \
+ \
+ kunit_stream_add(__stream, "Expected " #left " == " #right ", but\n"); \
+ kunit_stream_add(__stream, "\t\t%s == %s\n", #left, __left); \
+ kunit_stream_add(__stream, "\t\t%s == %s\n", #right, __right); \
+ kunit_stream_add(__stream, fmt, ##__VA_ARGS__); \
+ \
+ KUNIT_EXPECT_END(test, !strcmp(left, right), __stream); \
+} while (0)
+
+/**
+ * KUNIT_EXPECT_STRNEQ() - Expects that strings @left and @right are not equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a null terminated string.
+ * @right: an arbitrary expression that evaluates to a null terminated string.
+ *
+ * Sets an expectation that the values that @left and @right evaluate to are
+ * not equal. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, strcmp((@left), (@right))). See KUNIT_EXPECT_TRUE()
+ * for more information.
+ */
+#define KUNIT_EXPECT_STRNEQ(test, left, right) do { \
+ struct kunit_stream *__stream = KUNIT_EXPECT_START(test); \
+ typeof(left) __left = (left); \
+ typeof(right) __right = (right); \
+ \
+ kunit_stream_add(__stream, "Expected " #left " != " #right ", but\n"); \
+ kunit_stream_add(__stream, "\t\t%s == %s\n", #left, __left); \
+ kunit_stream_add(__stream, "\t\t%s == %s\n", #right, __right); \
+ \
+ KUNIT_EXPECT_END(test, strcmp(left, right), __stream); \
+} while (0)
+
+#define KUNIT_EXPECT_STRNEQ_MSG(test, left, right, fmt, ...) do { \
+ struct kunit_stream *__stream = KUNIT_EXPECT_START(test); \
+ typeof(left) __left = (left); \
+ typeof(right) __right = (right); \
+ \
+ kunit_stream_add(__stream, "Expected " #left " != " #right ", but\n"); \
+ kunit_stream_add(__stream, "\t\t%s == %s\n", #left, __left); \
+ kunit_stream_add(__stream, "\t\t%s == %s\n", #right, __right); \
+ kunit_stream_add(__stream, fmt, ##__VA_ARGS__); \
+ \
+ KUNIT_EXPECT_END(test, strcmp(left, right), __stream); \
+} while (0)
+
+/**
+ * KUNIT_EXPECT_NOT_ERR_OR_NULL() - Expects that @ptr is not null and not err.
+ * @test: The test context object.
+ * @ptr: an arbitrary pointer.
+ *
+ * Sets an expectation that the value that @ptr evaluates to is not null and not
+ * an errno stored in a pointer. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, !IS_ERR_OR_NULL(@ptr)). See KUNIT_EXPECT_TRUE() for
+ * more information.
+ */
+#define KUNIT_EXPECT_NOT_ERR_OR_NULL(test, ptr) do { \
+ struct kunit_stream *__stream = KUNIT_EXPECT_START(test); \
+ typeof(ptr) __ptr = (ptr); \
+ \
+ if (!__ptr) \
+ kunit_stream_add(__stream, \
+ "Expected " #ptr " is not null, but is."); \
+ if (IS_ERR(__ptr)) \
+ kunit_stream_add(__stream, \
+ "Expected " #ptr " is not error, but is: %ld", \
+ PTR_ERR(__ptr)); \
+ \
+ KUNIT_EXPECT_END(test, !IS_ERR_OR_NULL(__ptr), __stream); \
+} while (0)
+
+#define KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, ptr, fmt, ...) do { \
+ struct kunit_stream *__stream = KUNIT_EXPECT_START(test); \
+ typeof(ptr) __ptr = (ptr); \
+ \
+ if (!__ptr) { \
+ kunit_stream_add(__stream, \
+ "Expected " #ptr " is not null, but is."); \
+ kunit_stream_add(__stream, fmt, ##__VA_ARGS__); \
+ } \
+ if (IS_ERR(__ptr)) { \
+ kunit_stream_add(__stream, \
+ "Expected " #ptr " is not error, but is: %ld", \
+ PTR_ERR(__ptr)); \
+ \
+ kunit_stream_add(__stream, fmt, ##__VA_ARGS__); \
+ } \
+ KUNIT_EXPECT_END(test, !IS_ERR_OR_NULL(__ptr), __stream); \
+} while (0)
+
#endif /* _KUNIT_TEST_H */
diff --git a/kunit/test.c b/kunit/test.c
index f7575b127e2df..01e82c18b2fa6 100644
--- a/kunit/test.c
+++ b/kunit/test.c
@@ -269,3 +269,37 @@ void kunit_printk(const char *level,

va_end(args);
}
+
+void kunit_expect_binary_msg(struct kunit *test,
+ long long left, const char *left_name,
+ long long right, const char *right_name,
+ bool compare_result,
+ const char *compare_name,
+ const char *file,
+ const char *line,
+ const char *fmt, ...)
+{
+ struct kunit_stream *stream = kunit_expect_start(test, file, line);
+ struct va_format vaf;
+ va_list args;
+
+ kunit_stream_add(stream,
+ "Expected %s %s %s, but\n",
+ left_name, compare_name, right_name);
+ kunit_stream_add(stream, "\t\t%s == %lld\n", left_name, left);
+ kunit_stream_add(stream, "\t\t%s == %lld", right_name, right);
+
+ if (fmt) {
+ va_start(args, fmt);
+
+ vaf.fmt = fmt;
+ vaf.va = &args;
+
+ kunit_stream_add(stream, "\n%pV", &vaf);
+
+ va_end(args);
+ }
+
+ kunit_expect_end(test, compare_result, stream);
+}
+
--
2.21.0.593.g511ec345e18-goog

2019-05-01 23:06:51

by Brendan Higgins

[permalink] [raw]
Subject: [PATCH v2 08/17] kunit: test: add support for test abort

Add support for aborting/bailing out of test cases, which is needed for
implementing assertions.

An assertion is like an expectation, but bails out of the test case
early if the assertion is not met. The idea with assertions is that you
use them to state all the preconditions for your test. Logically
speaking, these are the premises of the test case, so if a premise isn't
true, there is no point in continuing the test case because there are no
conclusions that can be drawn without the premises. An expectation, by
contrast, is the thing you are trying to prove.
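
To make the control flow concrete, the try/catch pair added here is
driven roughly as follows (a sketch mirroring what
kunit_run_case_catch_errors() in this patch does; the example_* names
are illustrative only):

  static void example_try(void *context)
  {
          struct kunit *test = context;

          /* Bail out early; throws via kunit_try_catch_throw() and never returns. */
          test->abort(test);
  }

  static void example_catch(void *context)
  {
          struct kunit *test = context;

          /* Only runs in the parent thread if the try function threw or timed out. */
          kunit_err(test, "try function bailed out\n");
  }

  static void example_run(struct kunit *test)
  {
          kunit_try_catch_init(&test->try_catch, test, example_try, example_catch);
          kunit_try_catch_run(&test->try_catch, test);
  }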

Signed-off-by: Brendan Higgins <[email protected]>
---
include/kunit/test.h | 13 ++++
include/kunit/try-catch.h | 91 +++++++++++++++++++++++++
kunit/Makefile | 3 +-
kunit/test.c | 138 +++++++++++++++++++++++++++++++++++---
kunit/try-catch.c | 96 ++++++++++++++++++++++++++
5 files changed, 332 insertions(+), 9 deletions(-)
create mode 100644 include/kunit/try-catch.h
create mode 100644 kunit/try-catch.c

diff --git a/include/kunit/test.h b/include/kunit/test.h
index e441270561ece..1b77caeb5d51f 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -12,6 +12,7 @@
#include <linux/types.h>
#include <linux/slab.h>
#include <kunit/kunit-stream.h>
+#include <kunit/try-catch.h>

struct kunit_resource;

@@ -166,15 +167,27 @@ struct kunit {

/* private: internal use only. */
const char *name; /* Read only after initialization! */
+ struct kunit_try_catch try_catch;
spinlock_t lock; /* Guards all mutable test state. */
bool success; /* Protected by lock. */
+ bool death_test; /* Protected by lock. */
struct list_head resources; /* Protected by lock. */
void (*vprintk)(const struct kunit *test,
const char *level,
struct va_format *vaf);
void (*fail)(struct kunit *test, struct kunit_stream *stream);
+ void (*abort)(struct kunit *test);
};

+static inline void kunit_set_death_test(struct kunit *test, bool death_test)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&test->lock, flags);
+ test->death_test = death_test;
+ spin_unlock_irqrestore(&test->lock, flags);
+}
+
int kunit_init_test(struct kunit *test, const char *name);

int kunit_run_tests(struct kunit_module *module);
diff --git a/include/kunit/try-catch.h b/include/kunit/try-catch.h
new file mode 100644
index 0000000000000..e85abe044b6b5
--- /dev/null
+++ b/include/kunit/try-catch.h
@@ -0,0 +1,91 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * An API to allow a function, that may fail, to be executed, and recover in a
+ * controlled manner.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <[email protected]>
+ */
+
+#ifndef _KUNIT_TRY_CATCH_H
+#define _KUNIT_TRY_CATCH_H
+
+#include <linux/types.h>
+
+typedef void (*kunit_try_catch_func_t)(void *);
+
+struct kunit;
+
+/*
+ * struct kunit_try_catch - provides a generic way to run code which might fail.
+ * @context: used to pass user data to the try and catch functions.
+ *
+ * kunit_try_catch provides a generic, architecture independent way to execute
+ * an arbitrary function of type kunit_try_catch_func_t which may bail out by
+ * calling kunit_try_catch_throw(). If kunit_try_catch_throw() is called, @try
+ * is stopped at the site of invocation and @catch is called.
+ *
+ * struct kunit_try_catch provides a generic interface for the functionality
+ * needed to implement kunit->abort(), which in turn is needed for implementing
+ * assertions. Assertions allow stating a precondition for a test, simplifying
+ * how test cases are written and presented.
+ *
+ * Assertions are like expectations, except they abort (call
+ * kunit_try_catch_throw()) when the specified condition is not met. This is
+ * useful when you look at a test case as a logical statement about some piece
+ * of code, where assertions are the premises for the test case, and the
+ * conclusion is a set of predicates, rather expectations, that must all be
+ * true. If your premises are violated, it does not makes sense to continue.
+ */
+struct kunit_try_catch {
+ /* private: internal use only. */
+ void (*run)(struct kunit_try_catch *try_catch);
+ void __noreturn (*throw)(struct kunit_try_catch *try_catch);
+ struct kunit *test;
+ struct completion *try_completion;
+ int try_result;
+ kunit_try_catch_func_t try;
+ kunit_try_catch_func_t catch;
+ void *context;
+};
+
+/*
+ * Exposed to be overridden for other architectures.
+ */
+void kunit_try_catch_init_internal(struct kunit_try_catch *try_catch);
+
+static inline void kunit_try_catch_init(struct kunit_try_catch *try_catch,
+ struct kunit *test,
+ kunit_try_catch_func_t try,
+ kunit_try_catch_func_t catch)
+{
+ try_catch->test = test;
+ kunit_try_catch_init_internal(try_catch);
+ try_catch->try = try;
+ try_catch->catch = catch;
+}
+
+static inline void kunit_try_catch_run(struct kunit_try_catch *try_catch,
+ void *context)
+{
+ try_catch->context = context;
+ try_catch->run(try_catch);
+}
+
+static inline void __noreturn kunit_try_catch_throw(
+ struct kunit_try_catch *try_catch)
+{
+ try_catch->throw(try_catch);
+}
+
+static inline int kunit_try_catch_get_result(struct kunit_try_catch *try_catch)
+{
+ return try_catch->try_result;
+}
+
+/*
+ * Exposed for testing only.
+ */
+void kunit_generic_try_catch_init(struct kunit_try_catch *try_catch);
+
+#endif /* _KUNIT_TRY_CATCH_H */
diff --git a/kunit/Makefile b/kunit/Makefile
index 60a9ea6cb4697..1f7680cfa11ad 100644
--- a/kunit/Makefile
+++ b/kunit/Makefile
@@ -1,6 +1,7 @@
obj-$(CONFIG_KUNIT) += test.o \
string-stream.o \
- kunit-stream.o
+ kunit-stream.o \
+ try-catch.o

obj-$(CONFIG_KUNIT_TEST) += string-stream-test.o

diff --git a/kunit/test.c b/kunit/test.c
index 01e82c18b2fa6..5eace2a527a4e 100644
--- a/kunit/test.c
+++ b/kunit/test.c
@@ -6,10 +6,9 @@
* Author: Brendan Higgins <[email protected]>
*/

-#include <linux/sched.h>
#include <linux/sched/debug.h>
-#include <os.h>
#include <kunit/test.h>
+#include <kunit/try-catch.h>

static bool kunit_get_success(struct kunit *test)
{
@@ -32,6 +31,18 @@ static void kunit_set_success(struct kunit *test, bool success)
spin_unlock_irqrestore(&test->lock, flags);
}

+static bool kunit_get_death_test(struct kunit *test)
+{
+ unsigned long flags;
+ bool death_test;
+
+ spin_lock_irqsave(&test->lock, flags);
+ death_test = test->death_test;
+ spin_unlock_irqrestore(&test->lock, flags);
+
+ return death_test;
+}
+
static int kunit_vprintk_emit(const struct kunit *test,
int level,
const char *fmt,
@@ -70,6 +81,21 @@ static void kunit_fail(struct kunit *test, struct kunit_stream *stream)
kunit_stream_commit(stream);
}

+static void __noreturn kunit_abort(struct kunit *test)
+{
+ kunit_set_death_test(test, true);
+
+ kunit_try_catch_throw(&test->try_catch);
+
+ /*
+ * Throw could not abort from the test.
+ *
+ * XXX: we should never reach this line, since kunit_try_catch_throw()
+ * is marked __noreturn; warn if we somehow do.
+ */
+ WARN_ONCE(true, "Throw could not abort from test!\n");
+}
+
int kunit_init_test(struct kunit *test, const char *name)
{
spin_lock_init(&test->lock);
@@ -77,6 +103,7 @@ int kunit_init_test(struct kunit *test, const char *name)
test->name = name;
test->vprintk = kunit_vprintk;
test->fail = kunit_fail;
+ test->abort = kunit_abort;

return 0;
}
@@ -122,16 +149,111 @@ static void kunit_run_case_cleanup(struct kunit *test,
}

/*
- * Performs all logic to run a test case.
+ * Handles an unexpected crash in a test case.
*/
-static bool kunit_run_case(struct kunit *test,
- struct kunit_module *module,
- struct kunit_case *test_case)
+static void kunit_handle_test_crash(struct kunit *test,
+ struct kunit_module *module,
+ struct kunit_case *test_case)
{
- kunit_set_success(test, true);
+ kunit_err(test, "%s crashed", test_case->name);
+ /*
+ * TODO([email protected]): This prints the stack trace up
+ * through this frame, not up to the frame that caused the crash.
+ */
+ show_stack(NULL, NULL);
+
+ kunit_case_internal_cleanup(test);
+}

+struct kunit_try_catch_context {
+ struct kunit *test;
+ struct kunit_module *module;
+ struct kunit_case *test_case;
+};
+
+static void kunit_try_run_case(void *data)
+{
+ struct kunit_try_catch_context *ctx = data;
+ struct kunit *test = ctx->test;
+ struct kunit_module *module = ctx->module;
+ struct kunit_case *test_case = ctx->test_case;
+
+ /*
+ * kunit_run_case_internal may encounter a fatal error; if it does,
+ * abort will be called, this thread will exit, and finally the parent
+ * thread will resume control and handle any necessary clean up.
+ */
kunit_run_case_internal(test, module, test_case);
+ /* This line may never be reached. */
kunit_run_case_cleanup(test, module, test_case);
+}
+
+static void kunit_catch_run_case(void *data)
+{
+ struct kunit_try_catch_context *ctx = data;
+ struct kunit *test = ctx->test;
+ struct kunit_module *module = ctx->module;
+ struct kunit_case *test_case = ctx->test_case;
+ int try_exit_code = kunit_try_catch_get_result(&test->try_catch);
+
+ if (try_exit_code) {
+ kunit_set_success(test, false);
+ /*
+ * Test case could not finish, we have no idea what state it is
+ * in, so don't do clean up.
+ */
+ if (try_exit_code == -ETIMEDOUT)
+ kunit_err(test, "test case timed out\n");
+ /*
+ * Unknown internal error occurred preventing test case from
+ * running, so there is nothing to clean up.
+ */
+ else
+ kunit_err(test, "internal error occurred preventing test case from running: %d\n",
+ try_exit_code);
+ return;
+ }
+
+ if (kunit_get_death_test(test)) {
+ /*
+ * EXPECTED DEATH: kunit_run_case_internal encountered
+ * anticipated fatal error. Everything should be in a safe
+ * state.
+ */
+ kunit_run_case_cleanup(test, module, test_case);
+ } else {
+ /*
+ * UNEXPECTED DEATH: kunit_run_case_internal encountered an
+ * unanticipated fatal error. We have no idea what the state of
+ * the test case is in.
+ */
+ kunit_handle_test_crash(test, module, test_case);
+ kunit_set_success(test, false);
+ }
+}
+
+/*
+ * Performs all logic to run a test case. It also catches most errors that
+ * occur in a test case and reports them as failures.
+ */
+static bool kunit_run_case_catch_errors(struct kunit *test,
+ struct kunit_module *module,
+ struct kunit_case *test_case)
+{
+ struct kunit_try_catch *try_catch = &test->try_catch;
+ struct kunit_try_catch_context context;
+
+ kunit_set_success(test, true);
+ kunit_set_death_test(test, false);
+
+ kunit_try_catch_init(try_catch,
+ test,
+ kunit_try_run_case,
+ kunit_catch_run_case);
+ context.test = test;
+ context.module = module;
+ context.test_case = test_case;
+ kunit_try_catch_run(try_catch, &context);

return kunit_get_success(test);
}
@@ -148,7 +270,7 @@ int kunit_run_tests(struct kunit_module *module)
return ret;

for (test_case = module->test_cases; test_case->run_case; test_case++) {
- success = kunit_run_case(&test, module, test_case);
+ success = kunit_run_case_catch_errors(&test, module, test_case);
if (!success)
all_passed = false;

diff --git a/kunit/try-catch.c b/kunit/try-catch.c
new file mode 100644
index 0000000000000..c4cdb30880714
--- /dev/null
+++ b/kunit/try-catch.c
@@ -0,0 +1,96 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * An API to allow a function, that may fail, to be executed, and recover in a
+ * controlled manner.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <[email protected]>
+ */
+
+#include <kunit/try-catch.h>
+#include <kunit/test.h>
+#include <linux/completion.h>
+#include <linux/kthread.h>
+
+static void __noreturn kunit_generic_throw(struct kunit_try_catch *try_catch)
+{
+ try_catch->try_result = -EFAULT;
+ complete_and_exit(try_catch->try_completion, -EFAULT);
+}
+
+static int kunit_generic_run_threadfn_adapter(void *data)
+{
+ struct kunit_try_catch *try_catch = data;
+
+ try_catch->try(try_catch->context);
+
+ complete_and_exit(try_catch->try_completion, 0);
+}
+
+static void kunit_generic_run_try_catch(struct kunit_try_catch *try_catch)
+{
+ DECLARE_COMPLETION_ONSTACK(try_completion);
+ struct kunit *test = try_catch->test;
+ struct task_struct *task_struct;
+ int exit_code, status;
+
+ try_catch->try_completion = &try_completion;
+ try_catch->try_result = 0;
+ task_struct = kthread_run(kunit_generic_run_threadfn_adapter,
+ try_catch,
+ "kunit_try_catch_thread");
+ if (IS_ERR(task_struct)) {
+ try_catch->catch(try_catch->context);
+ return;
+ }
+
+ /*
+ * TODO([email protected]): We should probably have some type of
+ * variable timeout here. The only question is what that timeout value
+ * should be.
+ *
+ * The intention has always been, at some point, to be able to label
+ * tests with some type of size bucket (unit/small, integration/medium,
+ * large/system/end-to-end, etc), where each size bucket would get a
+ * default timeout value kind of like what Bazel does:
+ * https://docs.bazel.build/versions/master/be/common-definitions.html#test.size
+ * There is still some debate to be had on exactly how we do this. (For
+ * one, we probably want to have some sort of test runner level
+ * timeout.)
+ *
+ * For more background on this topic, see:
+ * https://mike-bland.com/2011/11/01/small-medium-large.html
+ */
+ status = wait_for_completion_timeout(&try_completion,
+ msecs_to_jiffies(300 * MSEC_PER_SEC)); /* 5 min */
+ if (status == 0) {
+ kunit_err(test, "try timed out\n");
+ try_catch->try_result = -ETIMEDOUT;
+ }
+
+ exit_code = try_catch->try_result;
+
+ if (!exit_code)
+ return;
+
+ if (exit_code == -EFAULT)
+ try_catch->try_result = 0;
+ else if (exit_code == -EINTR)
+ kunit_err(test, "wake_up_process() was never called\n");
+ else if (exit_code)
+ kunit_err(test, "Unknown error: %d\n", exit_code);
+
+ try_catch->catch(try_catch->context);
+}
+
+void kunit_generic_try_catch_init(struct kunit_try_catch *try_catch)
+{
+ try_catch->run = kunit_generic_run_try_catch;
+ try_catch->throw = kunit_generic_throw;
+}
+
+void __weak kunit_try_catch_init_internal(struct kunit_try_catch *try_catch)
+{
+ kunit_generic_try_catch_init(try_catch);
+}
+
--
2.21.0.593.g511ec345e18-goog

2019-05-01 23:07:04

by Brendan Higgins

[permalink] [raw]
Subject: [PATCH v2 15/17] MAINTAINERS: add entry for KUnit the unit testing framework

Add myself as maintainer of KUnit, the Linux kernel's unit testing
framework.

Signed-off-by: Brendan Higgins <[email protected]>
---
MAINTAINERS | 10 ++++++++++
1 file changed, 10 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 5c38f21aee787..c78ae95c56b80 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8448,6 +8448,16 @@ S: Maintained
F: tools/testing/selftests/
F: Documentation/dev-tools/kselftest*

+KERNEL UNIT TESTING FRAMEWORK (KUnit)
+M: Brendan Higgins <[email protected]>
+L: [email protected]
+W: https://google.github.io/kunit-docs/third_party/kernel/docs/
+S: Maintained
+F: Documentation/kunit/
+F: include/kunit/
+F: kunit/
+F: tools/testing/kunit/
+
KERNEL USERMODE HELPER
M: Luis Chamberlain <[email protected]>
L: [email protected]
--
2.21.0.593.g511ec345e18-goog

2019-05-01 23:07:14

by Brendan Higgins

[permalink] [raw]
Subject: [PATCH v2 10/17] kunit: test: add the concept of assertions

Add support for assertions which are like expectations except the test
terminates if the assertion is not satisfied.

The idea with assertions is that you use them to state all the
preconditions for your test. Logically speaking, these are the premises
of the test case, so if a premise isn't true, there is no point in
continuing the test case because there are no conclusions that can be
drawn without the premises. An expectation, by contrast, is the thing you are
trying to prove. This distinction is not used universally in x-unit style test
frameworks, but I really like it as a convention. You could still
express the idea of a premise using the above idiom, but I think
KUNIT_ASSERT_* states the intended idea perfectly.
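
For example, the intended usage in a test case looks roughly like this
(a sketch; the buffer and its size are made up for illustration):

  static void example_buffer_test(struct kunit *test)
  {
          char *buf = kunit_kzalloc(test, 16, GFP_KERNEL);

          /* Premise: without the buffer the rest of the case is meaningless. */
          KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buf);

          /* Conclusions: failures here are reported but do not abort the case. */
          KUNIT_EXPECT_EQ(test, 0, buf[0]);
          KUNIT_EXPECT_EQ(test, 0, buf[15]);
  }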

Signed-off-by: Brendan Higgins <[email protected]>
---
include/kunit/test.h | 401 ++++++++++++++++++++++++++++++++++++-
kunit/string-stream-test.c | 12 +-
kunit/test-test.c | 2 +
kunit/test.c | 33 +++
4 files changed, 439 insertions(+), 9 deletions(-)

diff --git a/include/kunit/test.h b/include/kunit/test.h
index 1b77caeb5d51f..bb2f3e63a3522 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -85,9 +85,10 @@ struct kunit;
* @name: the name of the test case.
*
* A test case is a function with the signature, ``void (*)(struct kunit *)``
- * that makes expectations (see KUNIT_EXPECT_TRUE()) about code under test. Each
- * test case is associated with a &struct kunit_module and will be run after the
- * module's init function and followed by the module's exit function.
+ * that makes expectations and assertions (see KUNIT_EXPECT_TRUE() and
+ * KUNIT_ASSERT_TRUE()) about code under test. Each test case is associated with
+ * a &struct kunit_module and will be run after the module's init function and
+ * followed by the module's exit function.
*
* A test case should be static and should only be created with the KUNIT_CASE()
* macro; additionally, every array of test cases should be terminated with an
@@ -705,4 +706,398 @@ static inline void kunit_expect_binary(struct kunit *test,
KUNIT_EXPECT_END(test, !IS_ERR_OR_NULL(__ptr), __stream); \
} while (0)

+static inline struct kunit_stream *kunit_assert_start(struct kunit *test,
+ const char *file,
+ const char *line)
+{
+ struct kunit_stream *stream = kunit_new_stream(test);
+
+ kunit_stream_add(stream, "ASSERTION FAILED at %s:%s\n\t", file, line);
+
+ return stream;
+}
+
+static inline void kunit_assert_end(struct kunit *test,
+ bool success,
+ struct kunit_stream *stream)
+{
+ if (!success) {
+ test->fail(test, stream);
+ test->abort(test);
+ } else {
+ kunit_stream_clear(stream);
+ }
+}
+
+#define KUNIT_ASSERT_START(test) \
+ kunit_assert_start(test, __FILE__, __stringify(__LINE__))
+
+#define KUNIT_ASSERT_END(test, success, stream) \
+ kunit_assert_end(test, success, stream)
+
+#define KUNIT_ASSERT(test, success, message) do { \
+ struct kunit_stream *__stream = KUNIT_ASSERT_START(test); \
+ \
+ kunit_stream_add(__stream, message); \
+ KUNIT_ASSERT_END(test, success, __stream); \
+} while (0)
+
+#define KUNIT_ASSERT_MSG(test, success, message, fmt, ...) do { \
+ struct kunit_stream *__stream = KUNIT_ASSERT_START(test); \
+ \
+ kunit_stream_add(__stream, message); \
+ kunit_stream_add(__stream, fmt, ##__VA_ARGS__); \
+ KUNIT_ASSERT_END(test, success, __stream); \
+} while (0)
+
+#define KUNIT_ASSERT_FAILURE(test, fmt, ...) do { \
+ struct kunit_stream *__stream = KUNIT_ASSERT_START(test); \
+ \
+ kunit_stream_add(__stream, fmt, ##__VA_ARGS__); \
+ KUNIT_ASSERT_END(test, false, __stream); \
+} while (0)
+
+/**
+ * KUNIT_ASSERT_TRUE() - Sets an assertion that @condition is true.
+ * @test: The test context object.
+ * @condition: an arbitrary boolean expression. The test fails and aborts when
+ * this does not evaluate to true.
+ *
+ * This and assertions of the form `KUNIT_ASSERT_*` will cause the test case to
+ * fail *and immediately abort* when the specified condition is not met. Unlike
+ * an expectation failure, it will prevent the test case from continuing to run;
+ * this is otherwise known as an *assertion failure*.
+ */
+#define KUNIT_ASSERT_TRUE(test, condition) \
+ KUNIT_ASSERT(test, (condition), \
+ "Asserted " #condition " is true, but is false.\n")
+
+#define KUNIT_ASSERT_TRUE_MSG(test, condition, fmt, ...) \
+ KUNIT_ASSERT_MSG(test, (condition), \
+ "Asserted " #condition " is true, but is false.\n",\
+ fmt, ##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_FALSE() - Sets an assertion that @condition is false.
+ * @test: The test context object.
+ * @condition: an arbitrary boolean expression.
+ *
+ * Sets an assertion that the value that @condition evaluates to is false. This
+ * is the same as KUNIT_EXPECT_FALSE(), except it causes an assertion failure
+ * (see KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_FALSE(test, condition) \
+ KUNIT_ASSERT(test, !(condition), \
+ "Asserted " #condition " is false, but is true.\n")
+
+#define KUNIT_ASSERT_FALSE_MSG(test, condition, fmt, ...) \
+ KUNIT_ASSERT_MSG(test, !(condition), \
+ "Asserted " #condition " is false, but is true.\n",\
+ fmt, ##__VA_ARGS__)
+
+void kunit_assert_binary_msg(struct kunit *test,
+ long long left, const char *left_name,
+ long long right, const char *right_name,
+ bool compare_result,
+ const char *compare_name,
+ const char *file,
+ const char *line,
+ const char *fmt, ...);
+
+static inline void kunit_assert_binary(struct kunit *test,
+ long long left, const char *left_name,
+ long long right, const char *right_name,
+ bool compare_result,
+ const char *compare_name,
+ const char *file,
+ const char *line)
+{
+ kunit_assert_binary_msg(test,
+ left, left_name,
+ right, right_name,
+ compare_result,
+ compare_name,
+ file,
+ line,
+ NULL);
+}
+
+/*
+ * A factory macro for defining the assertions for the basic comparisons
+ * defined for the built in types.
+ *
+ * Unfortunately, there is no common type that all types can be promoted to for
+ * which all the binary operators behave the same way as for the actual types
+ * (for example, there is no type that long long and unsigned long long can
+ * both be cast to where the comparison result is preserved for all values). So
+ * the best we can do is do the comparison in the original types and then coerce
+ * everything to long long for printing; this way, the comparison behaves
+ * correctly and the printed out value usually makes sense without
+ * interpretation, but can always be interpreted to figure out the actual
+ * value.
+ */
+#define KUNIT_ASSERT_BINARY(test, left, condition, right) do { \
+ typeof(left) __left = (left); \
+ typeof(right) __right = (right); \
+ kunit_assert_binary(test, \
+ (long long) __left, #left, \
+ (long long) __right, #right, \
+ __left condition __right, #condition, \
+ __FILE__, __stringify(__LINE__)); \
+} while (0)
+
+#define KUNIT_ASSERT_BINARY_MSG(test, left, condition, right, fmt, ...) do { \
+ typeof(left) __left = (left); \
+ typeof(right) __right = (right); \
+ kunit_assert_binary_msg(test, \
+ (long long) __left, #left, \
+ (long long) __right, #right, \
+ __left condition __right, #condition, \
+ __FILE__, __stringify(__LINE__), \
+ fmt, ##__VA_ARGS__); \
+} while (0)
+
+/**
+ * KUNIT_ASSERT_EQ() - Sets an assertion that @left and @right are equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the values that @left and @right evaluate to are
+ * equal. This is the same as KUNIT_EXPECT_EQ(), except it causes an assertion
+ * failure (see KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_EQ(test, left, right) \
+ KUNIT_ASSERT_BINARY(test, left, ==, right)
+
+#define KUNIT_ASSERT_EQ_MSG(test, left, right, fmt, ...) \
+ KUNIT_ASSERT_BINARY_MSG(test, \
+ left, \
+ ==, \
+ right, \
+ fmt, \
+ ##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_NE() - An assertion that @left and @right are not equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the values that @left and @right evaluate to are not
+ * equal. This is the same as KUNIT_EXPECT_NE(), except it causes an assertion
+ * failure (see KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_NE(test, left, right) \
+ KUNIT_ASSERT_BINARY(test, left, !=, right)
+
+#define KUNIT_ASSERT_NE_MSG(test, left, right, fmt, ...) \
+ KUNIT_ASSERT_BINARY_MSG(test, \
+ left, \
+ !=, \
+ right, \
+ fmt, \
+ ##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_LT() - An assertion that @left is less than @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the value that @left evaluates to is less than the
+ * value that @right evaluates to. This is the same as KUNIT_EXPECT_LT(), except
+ * it causes an assertion failure (see KUNIT_ASSERT_TRUE()) when the assertion
+ * is not met.
+ */
+#define KUNIT_ASSERT_LT(test, left, right) \
+ KUNIT_ASSERT_BINARY(test, left, <, right)
+
+#define KUNIT_ASSERT_LT_MSG(test, left, right, fmt, ...) \
+ KUNIT_ASSERT_BINARY_MSG(test, \
+ left, \
+ <, \
+ right, \
+ fmt, \
+ ##__VA_ARGS__)
+/**
+ * KUNIT_ASSERT_LE() - An assertion that @left is less than or equal to @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the value that @left evaluates to is less than or
+ * equal to the value that @right evaluates to. This is the same as
+ * KUNIT_EXPECT_LE(), except it causes an assertion failure (see
+ * KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_LE(test, left, right) \
+ KUNIT_ASSERT_BINARY(test, left, <=, right)
+
+#define KUNIT_ASSERT_LE_MSG(test, left, right, fmt, ...) \
+ KUNIT_ASSERT_BINARY_MSG(test, \
+ left, \
+ <=, \
+ right, \
+ fmt, \
+ ##__VA_ARGS__)
+/**
+ * KUNIT_ASSERT_GT() - An assertion that @left is greater than @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the value that @left evaluates to is greater than the
+ * value that @right evaluates to. This is the same as KUNIT_EXPECT_GT(), except
+ * it causes an assertion failure (see KUNIT_ASSERT_TRUE()) when the assertion
+ * is not met.
+ */
+#define KUNIT_ASSERT_GT(test, left, right) \
+ KUNIT_ASSERT_BINARY(test, left, >, right)
+
+#define KUNIT_ASSERT_GT_MSG(test, left, right, fmt, ...) \
+ KUNIT_ASSERT_BINARY_MSG(test, \
+ left, \
+ >, \
+ right, \
+ fmt, \
+ ##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_GE() - Assertion that @left is greater than or equal to @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the value that @left evaluates to is greater than or equal
+ * to the value that @right evaluates to. This is the same as KUNIT_EXPECT_GE(), except
+ * it causes an assertion failure (see KUNIT_ASSERT_TRUE()) when the assertion
+ * is not met.
+ */
+#define KUNIT_ASSERT_GE(test, left, right) \
+ KUNIT_ASSERT_BINARY(test, left, >=, right)
+
+#define KUNIT_ASSERT_GE_MSG(test, left, right, fmt, ...) \
+ KUNIT_ASSERT_BINARY_MSG(test, \
+ left, \
+ >=, \
+ right, \
+ fmt, \
+ ##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_STREQ() - An assertion that strings @left and @right are equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a null terminated string.
+ * @right: an arbitrary expression that evaluates to a null terminated string.
+ *
+ * Sets an assertion that the values that @left and @right evaluate to are
+ * equal. This is the same as KUNIT_EXPECT_STREQ(), except it causes an
+ * assertion failure (see KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_STREQ(test, left, right) do { \
+ struct kunit_stream *__stream = KUNIT_ASSERT_START(test); \
+ typeof(left) __left = (left); \
+ typeof(right) __right = (right); \
+ \
+ kunit_stream_add(__stream, "Asserted " #left " == " #right ", but\n"); \
+ kunit_stream_add(__stream, "\t\t%s == %s\n", #left, __left); \
+ kunit_stream_add(__stream, "\t\t%s == %s\n", #right, __right); \
+ \
+ KUNIT_ASSERT_END(test, !strcmp(left, right), __stream); \
+} while (0)
+
+#define KUNIT_ASSERT_STREQ_MSG(test, left, right, fmt, ...) do { \
+ struct kunit_stream *__stream = KUNIT_ASSERT_START(test); \
+ typeof(left) __left = (left); \
+ typeof(right) __right = (right); \
+ \
+ kunit_stream_add(__stream, "Asserted " #left " == " #right ", but\n"); \
+ kunit_stream_add(__stream, "\t\t%s == %s\n", #left, __left); \
+ kunit_stream_add(__stream, "\t\t%s == %s\n", #right, __right); \
+ kunit_stream_add(__stream, fmt, ##__VA_ARGS__); \
+ \
+ KUNIT_ASSERT_END(test, !strcmp(left, right), __stream); \
+} while (0)
+
+/**
+ * KUNIT_ASSERT_STRNEQ() - An assertion that strings @left and @right are not equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a null terminated string.
+ * @right: an arbitrary expression that evaluates to a null terminated string.
+ *
+ * Sets an assertion that the values that @left and @right evaluate to are
+ * not equal. This is semantically equivalent to
+ * KUNIT_ASSERT_TRUE(@test, strcmp((@left), (@right))). See KUNIT_ASSERT_TRUE()
+ * for more information.
+ */
+#define KUNIT_ASSERT_STRNEQ(test, left, right) do { \
+ struct kunit_stream *__stream = KUNIT_ASSERT_START(test); \
+ typeof(left) __left = (left); \
+ typeof(right) __right = (right); \
+ \
+ kunit_stream_add(__stream, "Asserted " #left " == " #right ", but\n"); \
+ kunit_stream_add(__stream, "\t\t%s == %s\n", #left, __left); \
+ kunit_stream_add(__stream, "\t\t%s == %s\n", #right, __right); \
+ \
+ KUNIT_ASSERT_END(test, strcmp(left, right), __stream); \
+} while (0)
+
+#define KUNIT_ASSERT_STRNEQ_MSG(test, left, right, fmt, ...) do { \
+ struct kunit_stream *__stream = KUNIT_ASSERT_START(test); \
+ typeof(left) __left = (left); \
+ typeof(right) __right = (right); \
+ \
+ kunit_stream_add(__stream, "Asserted " #left " == " #right ", but\n"); \
+ kunit_stream_add(__stream, "\t\t%s == %s\n", #left, __left); \
+ kunit_stream_add(__stream, "\t\t%s == %s\n", #right, __right); \
+ kunit_stream_add(__stream, fmt, ##__VA_ARGS__); \
+ \
+ KUNIT_ASSERT_END(test, strcmp(left, right), __stream); \
+} while (0)
+
+/**
+ * KUNIT_ASSERT_NOT_ERR_OR_NULL() - Assertion that @ptr is not null and not err.
+ * @test: The test context object.
+ * @ptr: an arbitrary pointer.
+ *
+ * Sets an assertion that the value that @ptr evaluates to is not null and not
+ * an errno stored in a pointer. This is the same as
+ * KUNIT_EXPECT_NOT_ERR_OR_NULL(), except it causes an assertion failure (see
+ * KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr) do { \
+ struct kunit_stream *__stream = KUNIT_ASSERT_START(test); \
+ typeof(ptr) __ptr = (ptr); \
+ \
+ if (!__ptr) \
+ kunit_stream_add(__stream, \
+ "Asserted " #ptr " is not null, but is.\n"); \
+ if (IS_ERR(__ptr)) \
+ kunit_stream_add(__stream, \
+ "Asserted " #ptr " is not error, but is: %ld\n",\
+ PTR_ERR(__ptr)); \
+ \
+ KUNIT_ASSERT_END(test, !IS_ERR_OR_NULL(__ptr), __stream); \
+} while (0)
+
+#define KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(test, ptr, fmt, ...) do { \
+ struct kunit_stream *__stream = KUNIT_ASSERT_START(test); \
+ typeof(ptr) __ptr = (ptr); \
+ \
+ if (!__ptr) { \
+ kunit_stream_add(__stream, \
+ "Asserted " #ptr " is not null, but is.\n"); \
+ kunit_stream_add(__stream, fmt, ##__VA_ARGS__); \
+ } \
+ if (IS_ERR(__ptr)) { \
+ kunit_stream_add(__stream, \
+ "Asserted " #ptr " is not error, but is: %ld\n",\
+ PTR_ERR(__ptr)); \
+ \
+ kunit_stream_add(__stream, fmt, ##__VA_ARGS__); \
+ } \
+ KUNIT_ASSERT_END(test, !IS_ERR_OR_NULL(__ptr), __stream); \
+} while (0)
+
#endif /* _KUNIT_TEST_H */
diff --git a/kunit/string-stream-test.c b/kunit/string-stream-test.c
index b2a98576797c9..96819b99b5713 100644
--- a/kunit/string-stream-test.c
+++ b/kunit/string-stream-test.c
@@ -19,7 +19,7 @@ static void string_stream_test_get_string(struct kunit *test)
string_stream_add(stream, " %s", "bar");

output = string_stream_get_string(stream);
- KUNIT_EXPECT_STREQ(test, output, "Foo bar");
+ KUNIT_ASSERT_STREQ(test, output, "Foo bar");
kfree(output);
destroy_string_stream(stream);
}
@@ -34,16 +34,16 @@ static void string_stream_test_add_and_clear(struct kunit *test)
string_stream_add(stream, "A");

output = string_stream_get_string(stream);
- KUNIT_EXPECT_STREQ(test, output, "AAAAAAAAAA");
- KUNIT_EXPECT_EQ(test, stream->length, 10);
- KUNIT_EXPECT_FALSE(test, string_stream_is_empty(stream));
+ KUNIT_ASSERT_STREQ(test, output, "AAAAAAAAAA");
+ KUNIT_ASSERT_EQ(test, stream->length, 10);
+ KUNIT_ASSERT_FALSE(test, string_stream_is_empty(stream));
kfree(output);

string_stream_clear(stream);

output = string_stream_get_string(stream);
- KUNIT_EXPECT_STREQ(test, output, "");
- KUNIT_EXPECT_TRUE(test, string_stream_is_empty(stream));
+ KUNIT_ASSERT_STREQ(test, output, "");
+ KUNIT_ASSERT_TRUE(test, string_stream_is_empty(stream));
destroy_string_stream(stream);
}

diff --git a/kunit/test-test.c b/kunit/test-test.c
index c81ae6efb959f..4bd7a34d0a6cb 100644
--- a/kunit/test-test.c
+++ b/kunit/test-test.c
@@ -110,11 +110,13 @@ static int kunit_try_catch_test_init(struct kunit *test)
struct kunit_try_catch_test_context *ctx;

ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
test->priv = ctx;

ctx->try_catch = kunit_kmalloc(test,
sizeof(*ctx->try_catch),
GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->try_catch);

return 0;
}
diff --git a/kunit/test.c b/kunit/test.c
index 5eace2a527a4e..55561eadc459d 100644
--- a/kunit/test.c
+++ b/kunit/test.c
@@ -425,3 +425,36 @@ void kunit_expect_binary_msg(struct kunit *test,
kunit_expect_end(test, compare_result, stream);
}

+void kunit_assert_binary_msg(struct kunit *test,
+ long long left, const char *left_name,
+ long long right, const char *right_name,
+ bool compare_result,
+ const char *compare_name,
+ const char *file,
+ const char *line,
+ const char *fmt, ...)
+{
+ struct kunit_stream *stream = kunit_assert_start(test, file, line);
+ struct va_format vaf;
+ va_list args;
+
+ kunit_stream_add(stream,
+ "Asserted %s %s %s, but\n",
+ left_name, compare_name, right_name);
+ kunit_stream_add(stream, "\t\t%s == %lld\n", left_name, left);
+ kunit_stream_add(stream, "\t\t%s == %lld\n", right_name, right);
+
+ if (fmt) {
+ va_start(args, fmt);
+
+ vaf.fmt = fmt;
+ vaf.va = &args;
+
+ kunit_stream_add(stream, "\n%pV", &vaf);
+
+ va_end(args);
+ }
+
+ kunit_assert_end(test, compare_result, stream);
+}
+
--
2.21.0.593.g511ec345e18-goog

2019-05-01 23:07:17

by Brendan Higgins

[permalink] [raw]
Subject: [PATCH v2 11/17] kunit: test: add test managed resource tests

From: Avinash Kondareddy <[email protected]>

Add tests for how tests interact with test managed resources over their
lifetime.

Signed-off-by: Avinash Kondareddy <[email protected]>
Signed-off-by: Brendan Higgins <[email protected]>
---
kunit/test-test.c | 122 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 122 insertions(+)

diff --git a/kunit/test-test.c b/kunit/test-test.c
index 4bd7a34d0a6cb..54add8ca418a0 100644
--- a/kunit/test-test.c
+++ b/kunit/test-test.c
@@ -135,3 +135,125 @@ static struct kunit_module kunit_try_catch_test_module = {
.test_cases = kunit_try_catch_test_cases,
};
module_test(kunit_try_catch_test_module);
+
+/*
+ * Context for testing test managed resources
+ * is_resource_initialized is used to test arbitrary resources
+ */
+struct kunit_test_resource_context {
+ struct kunit test;
+ bool is_resource_initialized;
+};
+
+static int fake_resource_init(struct kunit_resource *res, void *context)
+{
+ struct kunit_test_resource_context *ctx = context;
+
+ res->allocation = &ctx->is_resource_initialized;
+ ctx->is_resource_initialized = true;
+ return 0;
+}
+
+static void fake_resource_free(struct kunit_resource *res)
+{
+ bool *is_resource_initialized = res->allocation;
+
+ *is_resource_initialized = false;
+}
+
+static void kunit_resource_test_init_resources(struct kunit *test)
+{
+ struct kunit_test_resource_context *ctx = test->priv;
+
+ kunit_init_test(&ctx->test, "testing_test_init_test");
+
+ KUNIT_EXPECT_TRUE(test, list_empty(&ctx->test.resources));
+}
+
+static void kunit_resource_test_alloc_resource(struct kunit *test)
+{
+ struct kunit_test_resource_context *ctx = test->priv;
+ struct kunit_resource *res;
+ kunit_resource_free_t free = fake_resource_free;
+
+ res = kunit_alloc_resource(&ctx->test,
+ fake_resource_init,
+ fake_resource_free,
+ ctx);
+
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, res);
+ KUNIT_EXPECT_EQ(test, &ctx->is_resource_initialized, res->allocation);
+ KUNIT_EXPECT_TRUE(test, list_is_last(&res->node, &ctx->test.resources));
+ KUNIT_EXPECT_EQ(test, free, res->free);
+}
+
+static void kunit_resource_test_free_resource(struct kunit *test)
+{
+ struct kunit_test_resource_context *ctx = test->priv;
+ struct kunit_resource *res = kunit_alloc_resource(&ctx->test,
+ fake_resource_init,
+ fake_resource_free,
+ ctx);
+
+ kunit_free_resource(&ctx->test, res);
+
+ KUNIT_EXPECT_EQ(test, false, ctx->is_resource_initialized);
+ KUNIT_EXPECT_TRUE(test, list_empty(&ctx->test.resources));
+}
+
+static void kunit_resource_test_cleanup_resources(struct kunit *test)
+{
+ int i;
+ struct kunit_test_resource_context *ctx = test->priv;
+ struct kunit_resource *resources[5];
+
+ for (i = 0; i < ARRAY_SIZE(resources); i++) {
+ resources[i] = kunit_alloc_resource(&ctx->test,
+ fake_resource_init,
+ fake_resource_free,
+ ctx);
+ }
+
+ kunit_cleanup(&ctx->test);
+
+ KUNIT_EXPECT_TRUE(test, list_empty(&ctx->test.resources));
+}
+
+static int kunit_resource_test_init(struct kunit *test)
+{
+ struct kunit_test_resource_context *ctx =
+ kzalloc(sizeof(*ctx), GFP_KERNEL);
+
+ if (!ctx)
+ return -ENOMEM;
+
+ test->priv = ctx;
+
+ kunit_init_test(&ctx->test, "test_test_context");
+
+ return 0;
+}
+
+static void kunit_resource_test_exit(struct kunit *test)
+{
+ struct kunit_test_resource_context *ctx = test->priv;
+
+ kunit_cleanup(&ctx->test);
+ kfree(ctx);
+}
+
+static struct kunit_case kunit_resource_test_cases[] = {
+ KUNIT_CASE(kunit_resource_test_init_resources),
+ KUNIT_CASE(kunit_resource_test_alloc_resource),
+ KUNIT_CASE(kunit_resource_test_free_resource),
+ KUNIT_CASE(kunit_resource_test_cleanup_resources),
+ {},
+};
+
+static struct kunit_module kunit_resource_test_module = {
+ .name = "kunit-resource-test",
+ .init = kunit_resource_test_init,
+ .exit = kunit_resource_test_exit,
+ .test_cases = kunit_resource_test_cases,
+};
+module_test(kunit_resource_test_module);
--
2.21.0.593.g511ec345e18-goog

2019-05-01 23:07:35

by Brendan Higgins

[permalink] [raw]
Subject: [PATCH v2 13/17] kunit: defconfig: add defconfigs for building KUnit tests

Add defconfig for UML and a fragment that can be used to configure other
architectures for building KUnit tests. Add option to kunit_tool to use
a defconfig to create the kunitconfig.

Signed-off-by: Brendan Higgins <[email protected]>
---
arch/um/configs/kunit_defconfig | 8 ++++++++
tools/testing/kunit/configs/all_tests.config | 8 ++++++++
tools/testing/kunit/kunit.py | 17 +++++++++++++++--
tools/testing/kunit/kunit_kernel.py | 3 ++-
4 files changed, 33 insertions(+), 3 deletions(-)
create mode 100644 arch/um/configs/kunit_defconfig
create mode 100644 tools/testing/kunit/configs/all_tests.config

diff --git a/arch/um/configs/kunit_defconfig b/arch/um/configs/kunit_defconfig
new file mode 100644
index 0000000000000..bfe49689038f1
--- /dev/null
+++ b/arch/um/configs/kunit_defconfig
@@ -0,0 +1,8 @@
+CONFIG_OF=y
+CONFIG_OF_UNITTEST=y
+CONFIG_OF_OVERLAY=y
+CONFIG_I2C=y
+CONFIG_I2C_MUX=y
+CONFIG_KUNIT=y
+CONFIG_KUNIT_TEST=y
+CONFIG_KUNIT_EXAMPLE_TEST=y
diff --git a/tools/testing/kunit/configs/all_tests.config b/tools/testing/kunit/configs/all_tests.config
new file mode 100644
index 0000000000000..bfe49689038f1
--- /dev/null
+++ b/tools/testing/kunit/configs/all_tests.config
@@ -0,0 +1,8 @@
+CONFIG_OF=y
+CONFIG_OF_UNITTEST=y
+CONFIG_OF_OVERLAY=y
+CONFIG_I2C=y
+CONFIG_I2C_MUX=y
+CONFIG_KUNIT=y
+CONFIG_KUNIT_TEST=y
+CONFIG_KUNIT_EXAMPLE_TEST=y
diff --git a/tools/testing/kunit/kunit.py b/tools/testing/kunit/kunit.py
index 7413ec7351a20..63e9fb3b60200 100755
--- a/tools/testing/kunit/kunit.py
+++ b/tools/testing/kunit/kunit.py
@@ -11,6 +11,7 @@ import argparse
import sys
import os
import time
+import shutil

import kunit_config
import kunit_kernel
@@ -36,14 +37,26 @@ parser.add_argument('--build_dir',
'directory.',
type=str, default=None, metavar='build_dir')

-cli_args = parser.parse_args()
+parser.add_argument('--defconfig',
+ help='Uses a default kunitconfig.',
+ action='store_true')

-linux = kunit_kernel.LinuxSourceTree()
+def create_default_kunitconfig():
+ if not os.path.exists(kunit_kernel.KUNITCONFIG_PATH):
+ shutil.copyfile('arch/um/configs/kunit_defconfig',
+ kunit_kernel.KUNITCONFIG_PATH)
+
+cli_args = parser.parse_args()

build_dir = None
if cli_args.build_dir:
build_dir = cli_args.build_dir

+if cli_args.defconfig:
+ create_default_kunitconfig()
+
+linux = kunit_kernel.LinuxSourceTree()
+
config_start = time.time()
success = linux.build_reconfig(build_dir)
config_end = time.time()
diff --git a/tools/testing/kunit/kunit_kernel.py b/tools/testing/kunit/kunit_kernel.py
index 07c0abf2f47df..bf38768353313 100644
--- a/tools/testing/kunit/kunit_kernel.py
+++ b/tools/testing/kunit/kunit_kernel.py
@@ -14,6 +14,7 @@ import os
import kunit_config

KCONFIG_PATH = '.config'
+KUNITCONFIG_PATH = 'kunitconfig'

class ConfigError(Exception):
"""Represents an error trying to configure the Linux kernel."""
@@ -81,7 +82,7 @@ class LinuxSourceTree(object):

def __init__(self):
self._kconfig = kunit_config.Kconfig()
- self._kconfig.read_from_file('kunitconfig')
+ self._kconfig.read_from_file(KUNITCONFIG_PATH)
self._ops = LinuxSourceTreeOperations()

def clean(self):
--
2.21.0.593.g511ec345e18-goog

2019-05-01 23:07:57

by Brendan Higgins

[permalink] [raw]
Subject: [PATCH v2 17/17] MAINTAINERS: add proc sysctl KUnit test to PROC SYSCTL section

Add entry for the new proc sysctl KUnit test to the PROC SYSCTL section.

Signed-off-by: Brendan Higgins <[email protected]>
---
MAINTAINERS | 1 +
1 file changed, 1 insertion(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index c78ae95c56b80..23cc97332c3d7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -12524,6 +12524,7 @@ S: Maintained
F: fs/proc/proc_sysctl.c
F: include/linux/sysctl.h
F: kernel/sysctl.c
+F: kernel/sysctl-test.c
F: tools/testing/selftests/sysctl/

PS3 NETWORK SUPPORT
--
2.21.0.593.g511ec345e18-goog

2019-05-01 23:07:56

by Brendan Higgins

[permalink] [raw]
Subject: [PATCH v2 14/17] Documentation: kunit: add documentation for KUnit

Add documentation for KUnit, the Linux kernel unit testing framework.
- Add intro and usage guide for KUnit
- Add API reference

Signed-off-by: Felix Guo <[email protected]>
Signed-off-by: Brendan Higgins <[email protected]>
---
Documentation/index.rst | 1 +
Documentation/kunit/api/index.rst | 16 ++
Documentation/kunit/api/test.rst | 15 +
Documentation/kunit/faq.rst | 46 +++
Documentation/kunit/index.rst | 80 ++++++
Documentation/kunit/start.rst | 180 ++++++++++++
Documentation/kunit/usage.rst | 447 ++++++++++++++++++++++++++++++
7 files changed, 785 insertions(+)
create mode 100644 Documentation/kunit/api/index.rst
create mode 100644 Documentation/kunit/api/test.rst
create mode 100644 Documentation/kunit/faq.rst
create mode 100644 Documentation/kunit/index.rst
create mode 100644 Documentation/kunit/start.rst
create mode 100644 Documentation/kunit/usage.rst

diff --git a/Documentation/index.rst b/Documentation/index.rst
index 80a421cb935e7..264cfd613a774 100644
--- a/Documentation/index.rst
+++ b/Documentation/index.rst
@@ -65,6 +65,7 @@ merged much easier.
kernel-hacking/index
trace/index
maintainer/index
+ kunit/index

Kernel API documentation
------------------------
diff --git a/Documentation/kunit/api/index.rst b/Documentation/kunit/api/index.rst
new file mode 100644
index 0000000000000..c31c530088153
--- /dev/null
+++ b/Documentation/kunit/api/index.rst
@@ -0,0 +1,16 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=============
+API Reference
+=============
+.. toctree::
+
+ test
+
+This section documents the KUnit kernel testing API. It is divided into the
+following sections:
+
+================================= ==============================================
+:doc:`test` documents all of the standard testing API
+ excluding mocking or mocking related features.
+================================= ==============================================
diff --git a/Documentation/kunit/api/test.rst b/Documentation/kunit/api/test.rst
new file mode 100644
index 0000000000000..7c926014f047c
--- /dev/null
+++ b/Documentation/kunit/api/test.rst
@@ -0,0 +1,15 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+========
+Test API
+========
+
+This file documents all of the standard testing API excluding mocking or mocking
+related features.
+
+.. kernel-doc:: include/kunit/test.h
+ :internal:
+
+.. kernel-doc:: include/kunit/kunit-stream.h
+ :internal:
+
diff --git a/Documentation/kunit/faq.rst b/Documentation/kunit/faq.rst
new file mode 100644
index 0000000000000..cb8e4fb2257a0
--- /dev/null
+++ b/Documentation/kunit/faq.rst
@@ -0,0 +1,46 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=========================================
+Frequently Asked Questions
+=========================================
+
+How is this different from Autotest, kselftest, etc?
+====================================================
+KUnit is a unit testing framework. Autotest, kselftest (and some others) are
+not.
+
+A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is supposed to
+test a single unit of code in isolation, hence the name. A unit test should be
+the finest granularity of testing and as such should allow all possible code
+paths to be tested in the code under test; this is only possible if the code
+under test is very small and does not have any external dependencies outside of
+the test's control like hardware.
+
+None of the testing frameworks currently available for the kernel can run
+without installing the kernel on a test machine or in a VM, and all of them
+require tests to be written in userspace and run on the kernel under test; this
+is true for Autotest, kselftest, and some others, which disqualifies any of them
+from being considered unit testing frameworks.
+
+What is the difference between a unit test and these other kinds of tests?
+==========================================================================
+Most existing tests for the Linux kernel would be categorized as an integration
+test, or an end-to-end test.
+
+- A unit test is supposed to test a single unit of code in isolation, hence the
+ name. A unit test should be the finest granularity of testing and as such
+ should allow all possible code paths to be tested in the code under test; this
+ is only possible if the code under test is very small and does not have any
+ external dependencies outside of the test's control like hardware.
+- An integration test tests the interaction between a minimal set of components,
+ usually just two or three. For example, someone might write an integration
+ test to test the interaction between a driver and a piece of hardware, or to
+ test the interaction between the userspace libraries the kernel provides and
+ the kernel itself; however, one of these tests would probably not test the
+ entire kernel along with hardware interactions and interactions with the
+ userspace.
+- An end-to-end test usually tests the entire system from the perspective of the
+ code under test. For example, someone might write an end-to-end test for the
+ kernel by installing a production configuration of the kernel on production
+ hardware with a production userspace and then trying to exercise some behavior
+ that depends on interactions between the hardware, the kernel, and userspace.
diff --git a/Documentation/kunit/index.rst b/Documentation/kunit/index.rst
new file mode 100644
index 0000000000000..c6710211b647f
--- /dev/null
+++ b/Documentation/kunit/index.rst
@@ -0,0 +1,80 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=========================================
+KUnit - Unit Testing for the Linux Kernel
+=========================================
+
+.. toctree::
+ :maxdepth: 2
+
+ start
+ usage
+ api/index
+ faq
+
+What is KUnit?
+==============
+
+KUnit is a lightweight unit testing and mocking framework for the Linux kernel.
+These tests are able to be run locally on a developer's workstation without a VM
+or special hardware.
+
+KUnit is heavily inspired by JUnit, Python's unittest.mock, and
+Googletest/Googlemock for C++. KUnit provides facilities for defining unit test
+cases, grouping related test cases into test suites, providing common
+infrastructure for running tests, and much more.
+
+Get started now: :doc:`start`
+
+Why KUnit?
+==========
+
+A unit test is supposed to test a single unit of code in isolation, hence the
+name. A unit test should be the finest granularity of testing and as such should
+allow all possible code paths to be tested in the code under test; this is only
+possible if the code under test is very small and does not have any external
+dependencies outside of the test's control like hardware.
+
+Outside of KUnit, no testing framework currently available for the kernel can
+run without installing the kernel on a test machine or in a VM, and all of them
+require tests to be written in userspace running on the kernel; this is true
+for Autotest and kselftest, which disqualifies them from being considered unit
+testing frameworks.
+
+KUnit addresses the problem of running tests without needing a virtual machine
+or actual hardware by using User Mode Linux. User Mode Linux is a Linux
+architecture, like ARM or x86; however, unlike other architectures it compiles
+to a standalone program that can be run like any other program directly inside
+of a host operating system. To be clear, it does not require any virtualization
+support; it is just a regular program.
+
+KUnit is fast. Excluding build time, from invocation to completion KUnit can run
+several dozen tests in only 10 to 20 seconds; this might not sound like a big
+deal to some people, but having such fast and easy to run tests fundamentally
+changes the way you go about testing and even writing code in the first place.
+Linus himself said in his `git talk at Google
+<https://gist.github.com/lorn/1272686/revisions#diff-53c65572127855f1b003db4064a94573R874>`_:
+
+ "... a lot of people seem to think that performance is about doing the
+ same thing, just doing it faster, and that is not true. That is not what
+ performance is all about. If you can do something really fast, really
+ well, people will start using it differently."
+
+In this context Linus was talking about branching and merging,
+but this point also applies to testing. If your tests are slow, unreliable,
+difficult to write, and require a special setup or special hardware to run,
+then you wait a lot longer to write tests, and you wait a lot longer to run
+tests; this means that tests are likely to break, unlikely to test a lot of
+things, and are unlikely to be rerun once they pass. If your tests are really
+fast, you run them all the time, every time you make a change, and every time
+someone sends you some code. Why trust that someone ran all their tests
+correctly on every change when you can just run them yourself in less time than
+it takes to read their test log?
+
+How do I use it?
+===================
+
+* :doc:`start` - for new users of KUnit
+* :doc:`usage` - for a more detailed explanation of KUnit features
+* :doc:`api/index` - for the list of KUnit APIs used for testing
+
diff --git a/Documentation/kunit/start.rst b/Documentation/kunit/start.rst
new file mode 100644
index 0000000000000..5cdba5091905e
--- /dev/null
+++ b/Documentation/kunit/start.rst
@@ -0,0 +1,180 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===============
+Getting Started
+===============
+
+Installing dependencies
+=======================
+KUnit has the same dependencies as the Linux kernel. As long as you can build
+the kernel, you can run KUnit.
+
+KUnit Wrapper
+=============
+Included with KUnit is a simple Python wrapper that makes KUnit output easier
+to use and read. It handles building and running the kernel, as well as
+formatting the output.
+
+The wrapper can be run with:
+
+.. code-block:: bash
+
+ ./tools/testing/kunit/kunit.py
+
+Creating a kunitconfig
+======================
+The Python script is a thin wrapper around Kbuild; as such, it needs to be
+configured with a ``kunitconfig`` file. This file essentially contains the
+regular kernel config, along with the specific test targets.
+
+.. code-block:: bash
+
+ git clone -b master https://kunit.googlesource.com/kunitconfig $PATH_TO_KUNITCONFIG_REPO
+ cd $PATH_TO_LINUX_REPO
 ln -s $PATH_TO_KUNITCONFIG_REPO/kunitconfig kunitconfig
+
+You may want to add kunitconfig to your local gitignore.
+
+Verifying KUnit Works
+-------------------------
+
+To make sure that everything is set up correctly, simply invoke the Python
+wrapper from your kernel repo:
+
+.. code-block:: bash
+
+ ./tools/testing/kunit/kunit.py
+
+.. note::
+ You may want to run ``make mrproper`` first.
+
+If everything worked correctly, you should see the following:
+
+.. code-block:: bash
+
+ Generating .config ...
+ Building KUnit Kernel ...
+ Starting KUnit Kernel ...
+
+followed by a list of tests that are run. All of them should be passing.
+
+.. note::
 Because it is building a lot of sources for the first time, the ``Building
 KUnit Kernel`` step may take a while.
+
+Writing your first test
+==========================
+
+In your kernel repo let's add some code that we can test. Create a file
+``drivers/misc/example.h`` with the contents:
+
+.. code-block:: c
+
+ int misc_example_add(int left, int right);
+
+Then create a file ``drivers/misc/example.c``:
+
+.. code-block:: c
+
+ #include <linux/errno.h>
+
+ #include "example.h"
+
+ int misc_example_add(int left, int right)
+ {
+ return left + right;
+ }
+
+Now add the following lines to ``drivers/misc/Kconfig``:
+
+.. code-block:: kconfig
+
+ config MISC_EXAMPLE
+ bool "My example"
+
+and the following lines to ``drivers/misc/Makefile``:
+
+.. code-block:: make
+
+ obj-$(CONFIG_MISC_EXAMPLE) += example.o
+
+Now we are ready to write the test. The test will be in
+``drivers/misc/example-test.c``:
+
+.. code-block:: c
+
+ #include <kunit/test.h>
+ #include "example.h"
+
+ /* Define the test cases. */
+
+ static void misc_example_add_test_basic(struct kunit *test)
+ {
+ KUNIT_EXPECT_EQ(test, 1, misc_example_add(1, 0));
+ KUNIT_EXPECT_EQ(test, 2, misc_example_add(1, 1));
+ KUNIT_EXPECT_EQ(test, 0, misc_example_add(-1, 1));
+ KUNIT_EXPECT_EQ(test, INT_MAX, misc_example_add(0, INT_MAX));
+ KUNIT_EXPECT_EQ(test, -1, misc_example_add(INT_MAX, INT_MIN));
+ }
+
+ static void misc_example_test_failure(struct kunit *test)
+ {
+ KUNIT_FAIL(test, "This test never passes.");
+ }
+
+ static struct kunit_case misc_example_test_cases[] = {
+ KUNIT_CASE(misc_example_add_test_basic),
+ KUNIT_CASE(misc_example_test_failure),
+ {},
+ };
+
+ static struct kunit_module misc_example_test_module = {
+ .name = "misc-example",
+ .test_cases = misc_example_test_cases,
+ };
+ module_test(misc_example_test_module);
+
+Now add the following to ``drivers/misc/Kconfig``:
+
+.. code-block:: kconfig
+
+ config MISC_EXAMPLE_TEST
+ bool "Test for my example"
+ depends on MISC_EXAMPLE && KUNIT
+
+and the following to ``drivers/misc/Makefile``:
+
+.. code-block:: make
+
+ obj-$(CONFIG_MISC_EXAMPLE_TEST) += example-test.o
+
+Now add it to your ``kunitconfig``:
+
+.. code-block:: none
+
+ CONFIG_MISC_EXAMPLE=y
+ CONFIG_MISC_EXAMPLE_TEST=y
+
+Now you can run the test:
+
+.. code-block:: bash
+
+ ./tools/testing/kunit/kunit.py
+
+You should see the following failure:
+
+.. code-block:: none
+
+ ...
+ [16:08:57] [PASSED] misc-example:misc_example_add_test_basic
+ [16:08:57] [FAILED] misc-example:misc_example_test_failure
+ [16:08:57] EXPECTATION FAILED at drivers/misc/example-test.c:17
+ [16:08:57] This test never passes.
+ ...
+
+Congrats! You just wrote your first KUnit test!
+
+Next Steps
+=============
+* Check out the :doc:`usage` page for a more
+ in-depth explanation of KUnit.
diff --git a/Documentation/kunit/usage.rst b/Documentation/kunit/usage.rst
new file mode 100644
index 0000000000000..5c83ea9e21bc5
--- /dev/null
+++ b/Documentation/kunit/usage.rst
@@ -0,0 +1,447 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=============
+Using KUnit
+=============
+
+The purpose of this document is to describe what KUnit is, how it works, how it
+is intended to be used, and all the concepts and terminology that are needed to
+understand it. This guide assumes a working knowledge of the Linux kernel and
+some basic knowledge of testing.
+
+For a high level introduction to KUnit, including setting up KUnit for your
+project, see :doc:`start`.
+
+Organization of this document
+=================================
+
+This document is organized into two main sections: Testing and Isolating
+Behavior. The first covers what a unit test is and how to use KUnit to write
+them. The second covers how to use KUnit to isolate code and make it possible
+to unit test code that was otherwise un-unit-testable.
+
+Testing
+==========
+
+What is KUnit?
+------------------
+
+"K" is short for "kernel" so "KUnit" is the "(Linux) Kernel Unit Testing
+Framework." KUnit is intended first and foremost for writing unit tests; it is
+general enough that it can be used to write integration tests; however, this is
+a secondary goal. KUnit has no ambition of being the only testing framework for
+the kernel; for example, it does not intend to be an end-to-end testing
+framework.
+
+What is Unit Testing?
+-------------------------
+
+A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is a test that
+tests code at the smallest possible scope, a *unit* of code. In the C
+programming language that's a function.
+
+Unit tests should be written for all the publicly exposed functions in a
+compilation unit; that is, all the functions that are exported through a
+*class* (defined below) and all functions which are **not** static.
+
+Writing Tests
+-------------
+
+Test Cases
+~~~~~~~~~~
+
+The fundamental unit in KUnit is the test case. A test case is a function with
+the signature ``void (*)(struct kunit *test)``. It calls a function to be tested
+and then sets *expectations* for what should happen. For example:
+
+.. code-block:: c
+
+ void example_test_success(struct kunit *test)
+ {
+ }
+
+ void example_test_failure(struct kunit *test)
+ {
+ KUNIT_FAIL(test, "This test never passes.");
+ }
+
+In the above example ``example_test_success`` always passes because it does
+nothing; no expectations are set, so all expectations pass. On the other hand
+``example_test_failure`` always fails because it calls ``KUNIT_FAIL``, which is
+a special expectation that logs a message and causes the test case to fail.
+
+Expectations
+~~~~~~~~~~~~
+An *expectation* is a way to specify that you expect a piece of code to do
+something in a test. An expectation is called like a function. A test is made
+by setting expectations about the behavior of a piece of code under test; when
+one or more of the expectations fail, the test case fails and information about
+the failure is logged. For example:
+
+.. code-block:: c
+
+ void add_test_basic(struct kunit *test)
+ {
+ KUNIT_EXPECT_EQ(test, 1, add(1, 0));
+ KUNIT_EXPECT_EQ(test, 2, add(1, 1));
+ }
+
+In the above example ``add_test_basic`` sets a number of expectations about the
+behavior of a function called ``add``; the first parameter is always of type
+``struct kunit *``, which contains information about the current test context;
+the second parameter, in this case, is what the value is expected to be; the
+last value is what the value actually is. If ``add`` meets all of these
+expectations, the test case ``add_test_basic`` will pass; if any one of these
+expectations fails, the test case will fail.
+
+It is important to understand that a test case *fails* when any expectation is
+violated; however, the test will continue running, potentially trying other
+expectations until the test case ends or is otherwise terminated. This is as
+opposed to *assertions* which are discussed later.
+
+To learn about more expectations supported by KUnit, see :doc:`api/test`.
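+
+For a rough illustration, here is a hypothetical test case (the function and
+variable names below are made up for this sketch) using a few of the other
+expectation forms that appear elsewhere in this patchset:
+
+.. code-block:: c
+
 static void example_test_other_expectations(struct kunit *test)
 {
         const char *greeting = "hello";
         int count = 2;

         KUNIT_EXPECT_STREQ(test, greeting, "hello");
         KUNIT_EXPECT_TRUE(test, count > 0);
         KUNIT_EXPECT_FALSE(test, count == 0);
         KUNIT_EXPECT_LT(test, count, 10);
 }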
+
+.. note::
 A single test case should be pretty short, pretty easy to understand, and
 focused on a single behavior.
+
+For example, if we wanted to properly test the add function above, we would
+create additional test cases which would each test a different property that an
+add function should have, like this:
+
+.. code-block:: c
+
+ void add_test_basic(struct kunit *test)
+ {
+ KUNIT_EXPECT_EQ(test, 1, add(1, 0));
+ KUNIT_EXPECT_EQ(test, 2, add(1, 1));
+ }
+
+ void add_test_negative(struct kunit *test)
+ {
+ KUNIT_EXPECT_EQ(test, 0, add(-1, 1));
+ }
+
+ void add_test_max(struct kunit *test)
+ {
+ KUNIT_EXPECT_EQ(test, INT_MAX, add(0, INT_MAX));
+ KUNIT_EXPECT_EQ(test, -1, add(INT_MAX, INT_MIN));
+ }
+
+ void add_test_overflow(struct kunit *test)
+ {
+ KUNIT_EXPECT_EQ(test, INT_MIN, add(INT_MAX, 1));
+ }
+
+Notice how it is immediately obvious which properties we are testing for.
+
+Assertions
+~~~~~~~~~~
+
+KUnit also has the concept of an *assertion*. An assertion is just like an
+expectation except the assertion immediately terminates the test case if it is
+not satisfied.
+
+For example:
+
+.. code-block:: c
+
+ static void mock_test_do_expect_default_return(struct kunit *test)
+ {
+ struct mock_test_context *ctx = test->priv;
+ struct mock *mock = ctx->mock;
+ int param0 = 5, param1 = -5;
+ const char *two_param_types[] = {"int", "int"};
+ const void *two_params[] = {&param0, &param1};
+ const void *ret;
+
+ ret = mock->do_expect(mock,
+ "test_printk", test_printk,
+ two_param_types, two_params,
+ ARRAY_SIZE(two_params));
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ret);
+ KUNIT_EXPECT_EQ(test, -4, *((int *) ret));
+ }
+
+In this example, the method under test should return a pointer to a value, so
+if the pointer returned by the method is null or an errno, we don't want to
+bother continuing the test since the following expectation could crash the test
+case. ``KUNIT_ASSERT_NOT_ERR_OR_NULL(...)`` allows us to bail out of the test
+case if the appropriate conditions have not been satisfied to complete the test.
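+
+As a more minimal sketch of the same pattern, an assertion is commonly used to
+guard a test-managed allocation before it is dereferenced
+(``struct example_state`` and its ``counter`` field are hypothetical and used
+purely for illustration):
+
+.. code-block:: c
+
 static void example_test_allocation(struct kunit *test)
 {
         struct example_state *state;

         state = kunit_kzalloc(test, sizeof(*state), GFP_KERNEL);
         /* Bail out of this test case immediately if the allocation failed. */
         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, state);

         /* The assertion above makes it safe to dereference state here. */
         KUNIT_EXPECT_EQ(test, 0, state->counter);
 }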
+
+Modules / Test Suites
+~~~~~~~~~~~~~~~~~~~~~
+
+Now obviously one unit test isn't very helpful; the power comes from having
+many test cases covering all of your behaviors. Consequently it is common to
+have many *similar* tests; in order to reduce duplication in these closely
+related tests most unit testing frameworks provide the concept of a *test
+suite*, which KUnit calls a *test module*. A test module is just a collection of
+test cases for a unit of code, with a set up function that gets invoked before
+every test case and a tear down function that gets invoked after every test
+case completes.
+
+Example:
+
+.. code-block:: c
+
+ static struct kunit_case example_test_cases[] = {
+ KUNIT_CASE(example_test_foo),
+ KUNIT_CASE(example_test_bar),
+ KUNIT_CASE(example_test_baz),
+ {},
+ };
+
+ static struct kunit_module example_test_module = {
+ .name = "example",
+ .init = example_test_init,
+ .exit = example_test_exit,
+ .test_cases = example_test_cases,
+ };
+ module_test(example_test_module);
+
+In the above example the test suite, ``example_test_module``, would run the test
+cases ``example_test_foo``, ``example_test_bar``, and ``example_test_baz``, each
+would have ``example_test_init`` called immediately before it and would have
+``example_test_exit`` called immediately after it.
+``module_test(example_test_module)`` registers the test suite with the KUnit
+test framework.
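+
+The ``init`` and ``exit`` functions referenced above are not shown in this
+example. A minimal sketch of what they might look like follows;
+``struct example_test_context`` is a hypothetical type used purely for
+illustration:
+
+.. code-block:: c
+
 struct example_test_context {
         int some_state;
 };

 static int example_test_init(struct kunit *test)
 {
         /* Runs before every test case in the module. */
         test->priv = kunit_kzalloc(test,
                                    sizeof(struct example_test_context),
                                    GFP_KERNEL);
         if (!test->priv)
                 return -ENOMEM;

         return 0;
 }

 static void example_test_exit(struct kunit *test)
 {
         /*
          * Runs after every test case; memory allocated with kunit_kzalloc()
          * is test managed and is cleaned up automatically, so there is
          * nothing to do here.
          */
 }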
+
+.. note::
+ A test case will only be run if it is associated with a test suite.
+
+For more information on these types of things, see :doc:`api/test`.
+
+Isolating Behavior
+==================
+
+The most important aspect of unit testing that other forms of testing do not
+provide is the ability to limit the amount of code under test to a single unit.
+In practice, this is only possible by being able to control what code gets run
+when the unit under test calls a function; this is usually accomplished
+through some sort of indirection where a function is exposed as part of an API
+such that the definition of that function can be changed without affecting the
+rest of the code base. In the kernel this primarily comes from two constructs:
+classes, which are structs that contain function pointers provided by the
+implementer, and architecture specific functions, which have definitions
+selected at compile time.
+
+Classes
+-------
+
+Classes are not a construct that is built into the C programming language;
+however, they are an easily derived concept. Accordingly, pretty much every
+project that does not use a standardized object oriented library (like GNOME's
+GObject) has its own slightly different way of doing object oriented
+programming; the Linux kernel is no exception.
+
+The central concept in kernel object oriented programming is the class. In the
+kernel, a *class* is a struct that contains function pointers. This creates a
+contract between *implementers* and *users* since it forces them to use the
+same function signature without having to call the function directly. In order
+for it to truly be a class, the function pointers must specify that a pointer
+to the class, known as a *class handle*, be one of the parameters; this makes
+it possible for the member functions (also known as *methods*) to have access
+to member variables (more commonly known as *fields*) allowing the same
+implementation to have multiple *instances*.
+
+Typically a class can be *overridden* by *child classes* by embedding the
+*parent class* in the child class. Then when a method provided by the child
+class is called, the child implementation knows that the pointer passed to it
+points to the parent embedded within the child; because of this, the child can
+compute the pointer to itself, since the pointer to the parent is always a fixed
+offset from the pointer to the child; this offset is the offset of the parent
+embedded in the child struct. For example:
+
+.. code-block:: c
+
+ struct shape {
+ int (*area)(struct shape *this);
+ };
+
+ struct rectangle {
+ struct shape parent;
+ int length;
+ int width;
+ };
+
+ int rectangle_area(struct shape *this)
+ {
 struct rectangle *self = container_of(this, struct rectangle, parent);
+
+ return self->length * self->width;
+ };
+
+ void rectangle_new(struct rectangle *self, int length, int width)
+ {
+ self->parent.area = rectangle_area;
+ self->length = length;
+ self->width = width;
+ }
+
+In this example (as in most kernel code) the operation of computing the pointer
+to the child from the pointer to the parent is done by ``container_of``.
+
+Faking Classes
+~~~~~~~~~~~~~~
+
+In order to unit test a piece of code that calls a method in a class, the
+behavior of the method must be controllable, otherwise the test ceases to be a
+unit test and becomes an integration test.
+
+A fake just provides an implementation of a piece of code that is different from
+what runs in a production instance, but behaves identically from the standpoint
+of the callers; this is usually done to replace a dependency that is hard to
+deal with, or is slow.
+
+A good example for this might be implementing a fake EEPROM that just stores the
+"contents" in an internal buffer. For example, let's assume we have a class that
+represents an EEPROM:
+
+.. code-block:: c
+
+ struct eeprom {
+ ssize_t (*read)(struct eeprom *this, size_t offset, char *buffer, size_t count);
+ ssize_t (*write)(struct eeprom *this, size_t offset, const char *buffer, size_t count);
+ };
+
+And we want to test some code that buffers writes to the EEPROM:
+
+.. code-block:: c
+
+ struct eeprom_buffer {
+ ssize_t (*write)(struct eeprom_buffer *this, const char *buffer, size_t count);
 int (*flush)(struct eeprom_buffer *this);
+ size_t flush_count; /* Flushes when buffer exceeds flush_count. */
+ };
+
+ struct eeprom_buffer *new_eeprom_buffer(struct eeprom *eeprom);
 void destroy_eeprom_buffer(struct eeprom_buffer *eeprom_buffer);
+
+We can easily test this code by *faking out* the underlying EEPROM:
+
+.. code-block:: c
+
+ struct fake_eeprom {
+ struct eeprom parent;
+ char contents[FAKE_EEPROM_CONTENTS_SIZE];
+ };
+
+ ssize_t fake_eeprom_read(struct eeprom *parent, size_t offset, char *buffer, size_t count)
+ {
+ struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);
+
+ count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
+ memcpy(buffer, this->contents + offset, count);
+
+ return count;
+ }
+
 ssize_t fake_eeprom_write(struct eeprom *parent, size_t offset, const char *buffer, size_t count)
+ {
+ struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);
+
+ count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
+ memcpy(this->contents + offset, buffer, count);
+
+ return count;
+ }
+
+ void fake_eeprom_init(struct fake_eeprom *this)
+ {
+ this->parent.read = fake_eeprom_read;
+ this->parent.write = fake_eeprom_write;
+ memset(this->contents, 0, FAKE_EEPROM_CONTENTS_SIZE);
+ }
+
+We can now use it to test ``struct eeprom_buffer``:
+
+.. code-block:: c
+
+ struct eeprom_buffer_test {
+ struct fake_eeprom *fake_eeprom;
+ struct eeprom_buffer *eeprom_buffer;
+ };
+
+ static void eeprom_buffer_test_does_not_write_until_flush(struct kunit *test)
+ {
+ struct eeprom_buffer_test *ctx = test->priv;
+ struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
+ struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
+ char buffer[] = {0xff};
+
+ eeprom_buffer->flush_count = SIZE_MAX;
+
+ eeprom_buffer->write(eeprom_buffer, buffer, 1);
+ KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
+
+ eeprom_buffer->write(eeprom_buffer, buffer, 1);
+ KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0);
+
+ eeprom_buffer->flush(eeprom_buffer);
+ KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
+ KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
+ }
+
+ static void eeprom_buffer_test_flushes_after_flush_count_met(struct kunit *test)
+ {
+ struct eeprom_buffer_test *ctx = test->priv;
+ struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
+ struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
+ char buffer[] = {0xff};
+
+ eeprom_buffer->flush_count = 2;
+
+ eeprom_buffer->write(eeprom_buffer, buffer, 1);
+ KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
+
+ eeprom_buffer->write(eeprom_buffer, buffer, 1);
+ KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
+ KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
+ }
+
+ static void eeprom_buffer_test_flushes_increments_of_flush_count(struct kunit *test)
+ {
+ struct eeprom_buffer_test *ctx = test->priv;
+ struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
+ struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
+ char buffer[] = {0xff, 0xff};
+
+ eeprom_buffer->flush_count = 2;
+
+ eeprom_buffer->write(eeprom_buffer, buffer, 1);
+ KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
+
+ eeprom_buffer->write(eeprom_buffer, buffer, 2);
+ KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
+ KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
+ /* Should have only flushed the first two bytes. */
+ KUNIT_EXPECT_EQ(test, fake_eeprom->contents[2], 0);
+ }
+
+ static int eeprom_buffer_test_init(struct kunit *test)
+ {
+ struct eeprom_buffer_test *ctx;
+
+ ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+
+ ctx->fake_eeprom = kunit_kzalloc(test, sizeof(*ctx->fake_eeprom), GFP_KERNEL);
 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->fake_eeprom);
+
+ ctx->eeprom_buffer = new_eeprom_buffer(&ctx->fake_eeprom->parent);
 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->eeprom_buffer);
+
+ test->priv = ctx;
+
+ return 0;
+ }
+
+ static void eeprom_buffer_test_exit(struct kunit *test)
+ {
+ struct eeprom_buffer_test *ctx = test->priv;
+
+ destroy_eeprom_buffer(ctx->eeprom_buffer);
+ }
+
--
2.21.0.593.g511ec345e18-goog

2019-05-01 23:08:15

by Brendan Higgins

[permalink] [raw]
Subject: [PATCH v2 16/17] kernel/sysctl-test: Add null pointer test for sysctl.c:proc_dointvec()

From: Iurii Zaikin <[email protected]>

KUnit tests for initialized data behavior of proc_dointvec that is
explicitly checked in the code. Includes basic parsing tests including
int min/max overflow.

Signed-off-by: Iurii Zaikin <[email protected]>
Signed-off-by: Brendan Higgins <[email protected]>
---
kernel/Makefile | 2 +
kernel/sysctl-test.c | 292 +++++++++++++++++++++++++++++++++++++++++++
lib/Kconfig.debug | 6 +
3 files changed, 300 insertions(+)
create mode 100644 kernel/sysctl-test.c

diff --git a/kernel/Makefile b/kernel/Makefile
index 6c57e78817dad..c81a8976b6a4b 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -112,6 +112,8 @@ obj-$(CONFIG_HAS_IOMEM) += iomem.o
obj-$(CONFIG_ZONE_DEVICE) += memremap.o
obj-$(CONFIG_RSEQ) += rseq.o

+obj-$(CONFIG_SYSCTL_KUNIT_TEST) += sysctl-test.o
+
obj-$(CONFIG_GCC_PLUGIN_STACKLEAK) += stackleak.o
KASAN_SANITIZE_stackleak.o := n
KCOV_INSTRUMENT_stackleak.o := n
diff --git a/kernel/sysctl-test.c b/kernel/sysctl-test.c
new file mode 100644
index 0000000000000..a0fba6b6fc2dc
--- /dev/null
+++ b/kernel/sysctl-test.c
@@ -0,0 +1,292 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KUnit test of proc sysctl.
+ */
+
+#include <kunit/test.h>
+#include <linux/printk.h>
+#include <linux/sysctl.h>
+#include <linux/uaccess.h>
+
+static int i_zero;
+static int i_one_hundred = 100;
+
+struct test_sysctl_data {
+ int int_0001;
+ int int_0002;
+ int int_0003[4];
+
+ unsigned int uint_0001;
+
+ char string_0001[65];
+};
+
+static struct test_sysctl_data test_data = {
+ .int_0001 = 60,
+ .int_0002 = 1,
+
+ .int_0003[0] = 0,
+ .int_0003[1] = 1,
+ .int_0003[2] = 2,
+ .int_0003[3] = 3,
+
+ .uint_0001 = 314,
+
+ .string_0001 = "(none)",
+};
+
+static void sysctl_test_dointvec_null_tbl_data(struct kunit *test)
+{
+ struct ctl_table table = {
+ .procname = "foo",
+ .data = NULL,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec,
+ .extra1 = &i_zero,
+ .extra2 = &i_one_hundred,
+ };
+ void *buffer = kunit_kzalloc(test, sizeof(int), GFP_USER);
+ size_t len;
+ loff_t pos;
+
+ len = 1234;
+ KUNIT_EXPECT_EQ(test, 0, proc_dointvec(&table, 0, buffer, &len, &pos));
+ KUNIT_EXPECT_EQ(test, 0, len);
+ len = 1234;
+ KUNIT_EXPECT_EQ(test, 0, proc_dointvec(&table, 1, buffer, &len, &pos));
+ KUNIT_EXPECT_EQ(test, 0, len);
+}
+
+static void sysctl_test_dointvec_table_maxlen_unset(struct kunit *test)
+{
+ struct ctl_table table = {
+ .procname = "foo",
+ .data = &test_data.int_0001,
+ .maxlen = 0,
+ .mode = 0644,
+ .proc_handler = proc_dointvec,
+ .extra1 = &i_zero,
+ .extra2 = &i_one_hundred,
+ };
+ void *buffer = kunit_kzalloc(test, sizeof(int), GFP_USER);
+ size_t len;
+ loff_t pos;
+
+ len = 1234;
+ KUNIT_EXPECT_EQ(test, 0, proc_dointvec(&table, 0, buffer, &len, &pos));
+ KUNIT_EXPECT_EQ(test, 0, len);
+ len = 1234;
+ KUNIT_EXPECT_EQ(test, 0, proc_dointvec(&table, 1, buffer, &len, &pos));
+ KUNIT_EXPECT_EQ(test, 0, len);
+}
+
+static void sysctl_test_dointvec_table_len_is_zero(struct kunit *test)
+{
+ struct ctl_table table = {
+ .procname = "foo",
+ .data = &test_data.int_0001,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec,
+ .extra1 = &i_zero,
+ .extra2 = &i_one_hundred,
+ };
+ void *buffer = kunit_kzalloc(test, sizeof(int), GFP_USER);
+ size_t len;
+ loff_t pos;
+
+ len = 0;
+ KUNIT_EXPECT_EQ(test, 0, proc_dointvec(&table, 0, buffer, &len, &pos));
+ KUNIT_EXPECT_EQ(test, 0, len);
+ KUNIT_EXPECT_EQ(test, 0, proc_dointvec(&table, 1, buffer, &len, &pos));
+ KUNIT_EXPECT_EQ(test, 0, len);
+}
+
+static void sysctl_test_dointvec_table_read_but_position_set(struct kunit *test)
+{
+ struct ctl_table table = {
+ .procname = "foo",
+ .data = &test_data.int_0001,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec,
+ .extra1 = &i_zero,
+ .extra2 = &i_one_hundred,
+ };
+ void *buffer = kunit_kzalloc(test, sizeof(int), GFP_USER);
+ size_t len;
+ loff_t pos;
+
+ len = 1234;
+ pos = 1;
+ KUNIT_EXPECT_EQ(test, 0, proc_dointvec(&table, 0, buffer, &len, &pos));
+ KUNIT_EXPECT_EQ(test, 0, len);
+}
+
+static void sysctl_test_dointvec_happy_single_positive(struct kunit *test)
+{
+ struct ctl_table table = {
+ .procname = "foo",
+ .data = &test_data.int_0001,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec,
+ .extra1 = &i_zero,
+ .extra2 = &i_one_hundred,
+ };
+ char input[] = "9";
+ size_t len = sizeof(input) - 1;
+ loff_t pos = 0;
+
+ table.data = kunit_kzalloc(test, sizeof(int), GFP_USER);
+ KUNIT_EXPECT_EQ(test, 0, proc_dointvec(&table, 1, input, &len, &pos));
+ KUNIT_EXPECT_EQ(test, sizeof(input) - 1, len);
+ KUNIT_EXPECT_EQ(test, sizeof(input) - 1, pos);
+ KUNIT_EXPECT_EQ(test, 9, *(int *)table.data);
+}
+
+static void sysctl_test_dointvec_happy_single_negative(struct kunit *test)
+{
+ struct ctl_table table = {
+ .procname = "foo",
+ .data = &test_data.int_0001,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec,
+ .extra1 = &i_zero,
+ .extra2 = &i_one_hundred,
+ };
+ char input[] = "-9";
+ size_t len = sizeof(input) - 1;
+ loff_t pos = 0;
+
+ table.data = kunit_kzalloc(test, sizeof(int), GFP_USER);
+ KUNIT_EXPECT_EQ(test, 0, proc_dointvec(&table, 1, input, &len, &pos));
+ KUNIT_EXPECT_EQ(test, sizeof(input) - 1, len);
+ KUNIT_EXPECT_EQ(test, sizeof(input) - 1, pos);
+ KUNIT_EXPECT_EQ(test, -9, *(int *)table.data);
+}
+
+static void sysctl_test_dointvec_single_less_int_min(struct kunit *test)
+{
+ struct ctl_table table = {
+ .procname = "foo",
+ .data = &test_data.int_0001,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec,
+ .extra1 = &i_zero,
+ .extra2 = &i_one_hundred,
+ };
+ char input[32];
+ size_t len = sizeof(input) - 1;
+ loff_t pos = 0;
+ long less_than_min = (long)INT_MIN - 1;
+
+ KUNIT_EXPECT_LT(test, less_than_min, INT_MIN);
+ KUNIT_EXPECT_LT(test,
+ snprintf(input, sizeof(input), "%ld", less_than_min),
+ sizeof(input));
+
+ table.data = kunit_kzalloc(test, sizeof(int), GFP_USER);
+ KUNIT_EXPECT_EQ(test, -EINVAL,
+ proc_dointvec(&table, 1, input, &len, &pos));
+ KUNIT_EXPECT_EQ(test, sizeof(input) - 1, len);
+ KUNIT_EXPECT_EQ(test, 0, *(int *)table.data);
+}
+
+static void sysctl_test_dointvec_single_greater_int_max(struct kunit *test)
+{
+ struct ctl_table table = {
+ .procname = "foo",
+ .data = &test_data.int_0001,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec,
+ .extra1 = &i_zero,
+ .extra2 = &i_one_hundred,
+ };
+ char input[32];
+ size_t len = sizeof(input) - 1;
+ loff_t pos = 0;
+ long greater_than_max = (long)INT_MAX + 1;
+
+ KUNIT_EXPECT_GT(test, greater_than_max, INT_MAX);
+ KUNIT_EXPECT_LT(test, snprintf(input, sizeof(input), "%ld",
+ greater_than_max),
+ sizeof(input));
+ table.data = kunit_kzalloc(test, sizeof(int), GFP_USER);
+ KUNIT_EXPECT_EQ(test, -EINVAL,
+ proc_dointvec(&table, 1, input, &len, &pos));
+ KUNIT_EXPECT_EQ(test, sizeof(input) - 1, len);
+ KUNIT_EXPECT_EQ(test, 0, *(int *)table.data);
+}
+
+static int sysctl_test_init(struct kunit *test)
+{
+ return 0;
+}
+
+/*
+ * This is run once after each test case; see the comment on example_test_module
+ * for more information.
+ */
+static void sysctl_test_exit(struct kunit *test)
+{
+}
+
+/*
+ * Here we make a list of all the test cases we want to add to the test module
+ * below.
+ */
+static struct kunit_case sysctl_test_cases[] = {
+ /*
+ * This is a helper to create a test case object from a test case
+ * function; its exact function is not important to understand how to
+ * use KUnit, just know that this is how you associate test cases with a
+ * test module.
+ */
+ KUNIT_CASE(sysctl_test_dointvec_null_tbl_data),
+ KUNIT_CASE(sysctl_test_dointvec_table_maxlen_unset),
+ KUNIT_CASE(sysctl_test_dointvec_table_len_is_zero),
+ KUNIT_CASE(sysctl_test_dointvec_table_read_but_position_set),
+ KUNIT_CASE(sysctl_test_dointvec_happy_single_positive),
+ KUNIT_CASE(sysctl_test_dointvec_happy_single_negative),
+ KUNIT_CASE(sysctl_test_dointvec_single_less_int_min),
+ KUNIT_CASE(sysctl_test_dointvec_single_greater_int_max),
+ {},
+};
+
+/*
+ * This defines a suite or grouping of tests.
+ *
+ * Test cases are defined as belonging to the suite by adding them to
+ * `test_cases`.
+ *
+ * Often it is desirable to run some function which will set up things which
+ * will be used by every test; this is accomplished with an `init` function
+ * which runs before each test case is invoked. Similarly, an `exit` function
+ * may be specified which runs after every test case and can be used for
+ * cleanup. For clarity, running tests in a test module would behave as follows:
+ *
+ * module.init(test);
+ * module.test_case[0](test);
+ * module.exit(test);
+ * module.init(test);
+ * module.test_case[1](test);
+ * module.exit(test);
+ * ...;
+ */
+static struct kunit_module sysctl_test_module = {
+ .name = "sysctl_test",
+ .init = sysctl_test_init,
+ .exit = sysctl_test_exit,
+ .test_cases = sysctl_test_cases,
+};
+
+/*
+ * This registers the above test module telling KUnit that this is a suite of
+ * tests that need to be run.
+ */
+module_test(sysctl_test_module);
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index d5a4a4036d2f8..772af4ec70111 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1908,6 +1908,12 @@ config TEST_SYSCTL

If unsure, say N.

+config SYSCTL_KUNIT_TEST
+ bool "KUnit test for sysctl"
+ depends on KUNIT
+ help
+ Enables KUnit sysctl test.
+
config TEST_UDELAY
tristate "udelay test driver"
help
--
2.21.0.593.g511ec345e18-goog

2019-05-02 10:52:50

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Wed, May 01, 2019 at 04:01:09PM -0700, Brendan Higgins wrote:
> ## TLDR
>
> I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
> 5.2.

That might be rushing it, normally trees are already closed now for
5.2-rc1 if 5.1-final comes out this Sunday.

> Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
> we would merge through your tree when the time came? Am I remembering
> correctly?

No objection from me.

Let me go review the latest round of patches now.

thanks,

greg k-h

2019-05-02 11:00:01

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH v2 07/17] kunit: test: add initial tests

On Wed, May 01, 2019 at 04:01:16PM -0700, Brendan Higgins wrote:
> Add a test for string stream along with a simpler example.
>
> Signed-off-by: Brendan Higgins <[email protected]>
> ---
> kunit/Kconfig | 12 ++++++
> kunit/Makefile | 4 ++
> kunit/example-test.c | 88 ++++++++++++++++++++++++++++++++++++++
> kunit/string-stream-test.c | 61 ++++++++++++++++++++++++++
> 4 files changed, 165 insertions(+)
> create mode 100644 kunit/example-test.c
> create mode 100644 kunit/string-stream-test.c
>
> diff --git a/kunit/Kconfig b/kunit/Kconfig
> index 64480092b2c24..5cb500355c873 100644
> --- a/kunit/Kconfig
> +++ b/kunit/Kconfig
> @@ -13,4 +13,16 @@ config KUNIT
> special hardware. For more information, please see
> Documentation/kunit/
>
> +config KUNIT_TEST
> + bool "KUnit test for KUnit"
> + depends on KUNIT
> + help
> + Enables KUnit test to test KUnit.
> +
> +config KUNIT_EXAMPLE_TEST
> + bool "Example test for KUnit"
> + depends on KUNIT
> + help
> + Enables example KUnit test to demo features of KUnit.

Can't these tests be module?

Or am I mis-reading the previous logic?

Anyway, just a question, nothing objecting to this as-is for now.

thanks,

greg k-h

2019-05-02 11:02:36

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH v2 04/17] kunit: test: add kunit_stream a std::stream like logger

On Wed, May 01, 2019 at 04:01:13PM -0700, Brendan Higgins wrote:
> A lot of the expectation and assertion infrastructure prints out fairly
> complicated test failure messages, so add a C++ style log library for
> for logging test results.

Ideally we would always use a standard logging format, like the
kselftest tests all are aiming to do. That way the output can be easily
parsed by tools to see if the tests succeed/fail easily.

Any chance of having this logging framework enforcing that format as
well?

thanks,

greg k-h

2019-05-02 11:04:47

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH v2 12/17] kunit: tool: add Python wrappers for running KUnit tests

On Wed, May 01, 2019 at 04:01:21PM -0700, Brendan Higgins wrote:
> From: Felix Guo <[email protected]>
>
> The ultimate goal is to create minimal isolated test binaries; in the
> meantime we are using UML to provide the infrastructure to run tests, so
> define an abstract way to configure and run tests that allow us to
> change the context in which tests are built without affecting the user.
> This also makes pretty and dynamic error reporting, and a lot of other
> nice features easier.
>
> kunit_config.py:
> - parse .config and Kconfig files.
>
> kunit_kernel.py: provides helper functions to:
> - configure the kernel using kunitconfig.
> - build the kernel with the appropriate configuration.
> - provide function to invoke the kernel and stream the output back.
>
> Signed-off-by: Felix Guo <[email protected]>
> Signed-off-by: Brendan Higgins <[email protected]>

Ah, here's probably my answer to my previous logging format question,
right? What's the chance that these wrappers output stuff in a standard
format that test-framework-tools can already parse? :)

thanks,

greg k-h

2019-05-02 11:05:44

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH v2 16/17] kernel/sysctl-test: Add null pointer test for sysctl.c:proc_dointvec()

On Wed, May 01, 2019 at 04:01:25PM -0700, Brendan Higgins wrote:
> From: Iurii Zaikin <[email protected]>
>
> KUnit tests for initialized data behavior of proc_dointvec that is
> explicitly checked in the code. Includes basic parsing tests including
> int min/max overflow.
>
> Signed-off-by: Iurii Zaikin <[email protected]>
> Signed-off-by: Brendan Higgins <[email protected]>
> ---
> kernel/Makefile | 2 +
> kernel/sysctl-test.c | 292 +++++++++++++++++++++++++++++++++++++++++++
> lib/Kconfig.debug | 6 +
> 3 files changed, 300 insertions(+)
> create mode 100644 kernel/sysctl-test.c
>
> diff --git a/kernel/Makefile b/kernel/Makefile
> index 6c57e78817dad..c81a8976b6a4b 100644
> --- a/kernel/Makefile
> +++ b/kernel/Makefile
> @@ -112,6 +112,8 @@ obj-$(CONFIG_HAS_IOMEM) += iomem.o
> obj-$(CONFIG_ZONE_DEVICE) += memremap.o
> obj-$(CONFIG_RSEQ) += rseq.o
>
> +obj-$(CONFIG_SYSCTL_KUNIT_TEST) += sysctl-test.o

You are going to have to have a "standard" naming scheme for test
modules, are you going to recommend "foo-test" over "test-foo"? If so,
that's fine, we should just be consistant and document it somewhere.

Personally, I'd prefer "test-foo", but that's just me, naming is hard...

thanks,

greg k-h

2019-05-02 11:06:39

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Thu, May 02, 2019 at 12:50:53PM +0200, Greg KH wrote:
> On Wed, May 01, 2019 at 04:01:09PM -0700, Brendan Higgins wrote:
> > ## TLDR
> >
> > I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
> > 5.2.
>
> That might be rushing it, normally trees are already closed now for
> 5.2-rc1 if 5.1-final comes out this Sunday.
>
> > Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
> > we would merge through your tree when the time came? Am I remembering
> > correctly?
>
> No objection from me.
>
> Let me go review the latest round of patches now.

Overall, looks good to me, and provides a framework we can build on.
I'm a bit annoyed at the reliance on uml at the moment, but we can work
on that in the future :)

Thanks for sticking with this, now the real work begins...

Reviewed-by: Greg Kroah-Hartman <[email protected]>

2019-05-02 14:08:09

by Shuah Khan

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On 5/2/19 4:50 AM, Greg KH wrote:
> On Wed, May 01, 2019 at 04:01:09PM -0700, Brendan Higgins wrote:
>> ## TLDR
>>
>> I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
>> 5.2.
>
> That might be rushing it, normally trees are already closed now for
> 5.2-rc1 if 5.1-final comes out this Sunday.
>
>> Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
>> we would merge through your tree when the time came? Am I remembering
>> correctly?
>
> No objection from me.
>

Yes. I can take these through kselftest tree when the time comes.
Agree with Greg that 5.2 might be rushing it. 5.3 would be a good
target.

thanks,
-- Shuah



2019-05-02 18:10:54

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 12/17] kunit: tool: add Python wrappers for running KUnit tests

On Thu, May 2, 2019 at 4:02 AM Greg KH <[email protected]> wrote:
>
> On Wed, May 01, 2019 at 04:01:21PM -0700, Brendan Higgins wrote:
> > From: Felix Guo <[email protected]>
> >
> > The ultimate goal is to create minimal isolated test binaries; in the
> > meantime we are using UML to provide the infrastructure to run tests, so
> > define an abstract way to configure and run tests that allow us to
> > change the context in which tests are built without affecting the user.
> > This also makes pretty and dynamic error reporting, and a lot of other
> > nice features easier.
> >
> > kunit_config.py:
> > - parse .config and Kconfig files.
> >
> > kunit_kernel.py: provides helper functions to:
> > - configure the kernel using kunitconfig.
> > - build the kernel with the appropriate configuration.
> > - provide function to invoke the kernel and stream the output back.
> >
> > Signed-off-by: Felix Guo <[email protected]>
> > Signed-off-by: Brendan Higgins <[email protected]>
>
> Ah, here's probably my answer to my previous logging format question,
> right? What's the chance that these wrappers output stuff in a standard
> format that test-framework-tools can already parse? :)

It should be pretty easy to do. I had some patches that pack up the
results into a serialized format for a presubmit service; it should be
pretty straightforward to take the same logic and just change the
output format.

Cheers

2019-05-02 18:16:37

by Bird, Tim

[permalink] [raw]
Subject: RE: [PATCH v2 16/17] kernel/sysctl-test: Add null pointer test for sysctl.c:proc_dointvec()



> -----Original Message-----
> From: Greg KH
>
> On Wed, May 01, 2019 at 04:01:25PM -0700, Brendan Higgins wrote:
> > From: Iurii Zaikin <[email protected]>
> >
> > KUnit tests for initialized data behavior of proc_dointvec that is
> > explicitly checked in the code. Includes basic parsing tests including
> > int min/max overflow.
> >
> > Signed-off-by: Iurii Zaikin <[email protected]>
> > Signed-off-by: Brendan Higgins <[email protected]>
> > ---
> > kernel/Makefile | 2 +
> > kernel/sysctl-test.c | 292
> +++++++++++++++++++++++++++++++++++++++++++
> > lib/Kconfig.debug | 6 +
> > 3 files changed, 300 insertions(+)
> > create mode 100644 kernel/sysctl-test.c
> >
> > diff --git a/kernel/Makefile b/kernel/Makefile
> > index 6c57e78817dad..c81a8976b6a4b 100644
> > --- a/kernel/Makefile
> > +++ b/kernel/Makefile
> > @@ -112,6 +112,8 @@ obj-$(CONFIG_HAS_IOMEM) += iomem.o
> > obj-$(CONFIG_ZONE_DEVICE) += memremap.o
> > obj-$(CONFIG_RSEQ) += rseq.o
> >
> > +obj-$(CONFIG_SYSCTL_KUNIT_TEST) += sysctl-test.o
>
> You are going to have to have a "standard" naming scheme for test
> modules, are you going to recommend "foo-test" over "test-foo"? If so,
> that's fine, we should just be consistant and document it somewhere.
>
> Personally, I'd prefer "test-foo", but that's just me, naming is hard...

My preference would be "test-foo" as well. Just my 2 cents.
-- Tim

2019-05-02 18:48:14

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 16/17] kernel/sysctl-test: Add null pointer test for sysctl.c:proc_dointvec()

On Thu, May 2, 2019 at 11:15 AM <[email protected]> wrote:
>
>
>
> > -----Original Message-----
> > From: Greg KH
> >
> > On Wed, May 01, 2019 at 04:01:25PM -0700, Brendan Higgins wrote:
> > > From: Iurii Zaikin <[email protected]>
> > >
> > > KUnit tests for initialized data behavior of proc_dointvec that is
> > > explicitly checked in the code. Includes basic parsing tests including
> > > int min/max overflow.
> > >
> > > Signed-off-by: Iurii Zaikin <[email protected]>
> > > Signed-off-by: Brendan Higgins <[email protected]>
> > > ---
> > > kernel/Makefile | 2 +
> > > kernel/sysctl-test.c | 292
> > +++++++++++++++++++++++++++++++++++++++++++
> > > lib/Kconfig.debug | 6 +
> > > 3 files changed, 300 insertions(+)
> > > create mode 100644 kernel/sysctl-test.c
> > >
> > > diff --git a/kernel/Makefile b/kernel/Makefile
> > > index 6c57e78817dad..c81a8976b6a4b 100644
> > > --- a/kernel/Makefile
> > > +++ b/kernel/Makefile
> > > @@ -112,6 +112,8 @@ obj-$(CONFIG_HAS_IOMEM) += iomem.o
> > > obj-$(CONFIG_ZONE_DEVICE) += memremap.o
> > > obj-$(CONFIG_RSEQ) += rseq.o
> > >
> > > +obj-$(CONFIG_SYSCTL_KUNIT_TEST) += sysctl-test.o
> >
> > You are going to have to have a "standard" naming scheme for test
> > modules, are you going to recommend "foo-test" over "test-foo"? If so,
> > that's fine, we should just be consistent and document it somewhere.
> >
> > Personally, I'd prefer "test-foo", but that's just me, naming is hard...
>
> My preference would be "test-foo" as well. Just my 2 cents.

I definitely agree we should be consistent. My personal bias
(unsurprisingly) is "foo-test," but this is just because that is the
convention I am used to in other projects I have worked on.

On an unbiased note, we are currently almost evenly split between the
two conventions with a *slight* preference for "foo-test": I ran the two
following grep commands on v5.1-rc7:

grep -Hrn --exclude-dir="build" -e "config [a-zA-Z_0-9]\+_TEST$" | wc -l
grep -Hrn --exclude-dir="build" -e "config TEST_[a-zA-Z_0-9]\+" | wc -l

"foo-test" has 36 occurrences.
"test-foo" has 33 occurrences.

The thing I am more concerned about is how this would affect file
naming. If we have a unit test for foo.c, I think foo_test.c is more
consistent with our namespacing conventions. The other thing is: if we
already have a Kconfig symbol called FOO_TEST (or TEST_FOO), what
should we name the KUnit test in this case? FOO_UNIT_TEST?
FOO_KUNIT_TEST, like I did above?

Cheers

2019-05-02 20:29:00

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 04/17] kunit: test: add kunit_stream a std::stream like logger

On Thu, May 2, 2019 at 4:00 AM Greg KH <[email protected]> wrote:
>
> On Wed, May 01, 2019 at 04:01:13PM -0700, Brendan Higgins wrote:
> > A lot of the expectation and assertion infrastructure prints out fairly
> > complicated test failure messages, so add a C++ style log library
> > for logging test results.
>
> Ideally we would always use a standard logging format, like the
> kselftest tests all are aiming to do. That way the output can be easily
> parsed by tools to see if the tests succeed/fail easily.
>
> Any chance of having this logging framework enforcing that format as
> well?

I agree with your comment on the later patch that we should handle
this at the wrapper script layer (KUnit tool).

2019-05-02 20:31:35

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 07/17] kunit: test: add initial tests

On Thu, May 2, 2019 at 3:58 AM Greg KH <[email protected]> wrote:
>
> On Wed, May 01, 2019 at 04:01:16PM -0700, Brendan Higgins wrote:
> > Add a test for string stream along with a simpler example.
> >
> > Signed-off-by: Brendan Higgins <[email protected]>
> > ---
> > kunit/Kconfig | 12 ++++++
> > kunit/Makefile | 4 ++
> > kunit/example-test.c | 88 ++++++++++++++++++++++++++++++++++++++
> > kunit/string-stream-test.c | 61 ++++++++++++++++++++++++++
> > 4 files changed, 165 insertions(+)
> > create mode 100644 kunit/example-test.c
> > create mode 100644 kunit/string-stream-test.c
> >
> > diff --git a/kunit/Kconfig b/kunit/Kconfig
> > index 64480092b2c24..5cb500355c873 100644
> > --- a/kunit/Kconfig
> > +++ b/kunit/Kconfig
> > @@ -13,4 +13,16 @@ config KUNIT
> > special hardware. For more information, please see
> > Documentation/kunit/
> >
> > +config KUNIT_TEST
> > + bool "KUnit test for KUnit"
> > + depends on KUNIT
> > + help
> > + Enables KUnit test to test KUnit.
> > +
> > +config KUNIT_EXAMPLE_TEST
> > + bool "Example test for KUnit"
> > + depends on KUNIT
> > + help
> > + Enables example KUnit test to demo features of KUnit.
>
> Can't these tests be module?

At this time, no. KUnit doesn't support loading tests as kernel
modules; it is something we could add in in the future, but I would
rather not open that can of worms right now. There are some other
things I would like to do that would probably be easier to do before
adding support for tests as loadable modules.

>
> Or am I mis-reading the previous logic?
>
> Anyway, just a question, nothing objecting to this as-is for now.

Cool

Cheers!

2019-05-02 21:17:35

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH v2 12/17] kunit: tool: add Python wrappers for running KUnit tests

On 5/2/19 11:07 AM, Brendan Higgins wrote:
> On Thu, May 2, 2019 at 4:02 AM Greg KH <[email protected]> wrote:
>>
>> On Wed, May 01, 2019 at 04:01:21PM -0700, Brendan Higgins wrote:
>>> From: Felix Guo <[email protected]>
>>>
>>> The ultimate goal is to create minimal isolated test binaries; in the
>>> meantime we are using UML to provide the infrastructure to run tests, so
>>> define an abstract way to configure and run tests that allow us to
>>> change the context in which tests are built without affecting the user.
>>> This also makes pretty and dynamic error reporting, and a lot of other
>>> nice features easier.
>>>
>>> kunit_config.py:
>>> - parse .config and Kconfig files.
>>>
>>> kunit_kernel.py: provides helper functions to:
>>> - configure the kernel using kunitconfig.
>>> - build the kernel with the appropriate configuration.
>>> - provide function to invoke the kernel and stream the output back.
>>>
>>> Signed-off-by: Felix Guo <[email protected]>
>>> Signed-off-by: Brendan Higgins <[email protected]>
>>
>> Ah, here's probably my answer to my previous logging format question,
>> right? What's the chance that these wrappers output stuff in a standard
>> format that test-framework-tools can already parse? :)
>
> It should be pretty easy to do. I had some patches that pack up the
> results into a serialized format for a presubmit service; it should be
> pretty straightforward to take the same logic and just change the
> output format.

When examining and trying out the previous versions of the patch I found
the wrappers useful to provide information about how to control and use
the tests, but I had no interest in using the scripts as they do not
fit in with my personal environment and workflow.

In the previous versions of the patch, these helper scripts are optional,
which is good for my use case. If the helper scripts are required to
get the data into the proper format then the scripts are not quite so
optional, they become the expected environment. I think the proper
format should exist without the helper scripts.

-Frank

2019-05-02 21:21:24

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH v2 04/17] kunit: test: add kunit_stream a std::stream like logger

On 5/2/19 1:25 PM, Brendan Higgins wrote:
> On Thu, May 2, 2019 at 4:00 AM Greg KH <[email protected]> wrote:
>>
>> On Wed, May 01, 2019 at 04:01:13PM -0700, Brendan Higgins wrote:
>>> A lot of the expectation and assertion infrastructure prints out fairly
>>> complicated test failure messages, so add a C++ style log library
>>> for logging test results.
>>
>> Ideally we would always use a standard logging format, like the
>> kselftest tests all are aiming to do. That way the output can be easily
>> parsed by tools to see if the tests succeed/fail easily.
>>
>> Any chance of having this logging framework enforcing that format as
>> well?
>
> I agree with your comment on the later patch that we should handle
> this at the wrapper script layer (KUnit tool).

This discussion is a little confusing, because it is spread across two
patches.

I do not agree that this should be handled in the wrapper script, as
noted in my reply to patch 12, so not repeating it here.

-Frank

2019-05-03 00:00:33

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 12/17] kunit: tool: add Python wrappers for running KUnit tests

On Thu, May 2, 2019 at 2:16 PM Frank Rowand <[email protected]> wrote:
>
> On 5/2/19 11:07 AM, Brendan Higgins wrote:
> > On Thu, May 2, 2019 at 4:02 AM Greg KH <[email protected]> wrote:
> >>
> >> On Wed, May 01, 2019 at 04:01:21PM -0700, Brendan Higgins wrote:
> >>> From: Felix Guo <[email protected]>
> >>>
> >>> The ultimate goal is to create minimal isolated test binaries; in the
> >>> meantime we are using UML to provide the infrastructure to run tests, so
> >>> define an abstract way to configure and run tests that allow us to
> >>> change the context in which tests are built without affecting the user.
> >>> This also makes pretty and dynamic error reporting, and a lot of other
> >>> nice features easier.
> >>>
> >>> kunit_config.py:
> >>> - parse .config and Kconfig files.
> >>>
> >>> kunit_kernel.py: provides helper functions to:
> >>> - configure the kernel using kunitconfig.
> >>> - build the kernel with the appropriate configuration.
> >>> - provide function to invoke the kernel and stream the output back.
> >>>
> >>> Signed-off-by: Felix Guo <[email protected]>
> >>> Signed-off-by: Brendan Higgins <[email protected]>
> >>
> >> Ah, here's probably my answer to my previous logging format question,
> >> right? What's the chance that these wrappers output stuff in a standard
> >> format that test-framework-tools can already parse? :)

To be clear, the test-framework-tools format we are talking about is
TAP13[1], correct?

My understanding is that is what kselftest is being converted to use.

> >
> > It should be pretty easy to do. I had some patches that pack up the
> > results into a serialized format for a presubmit service; it should be
> > pretty straightforward to take the same logic and just change the
> > output format.
>
> When examining and trying out the previous versions of the patch I found
> the wrappers useful to provide information about how to control and use
> the tests, but I had no interest in using the scripts as they do not
> fit in with my personal environment and workflow.
>
> In the previous versions of the patch, these helper scripts are optional,
> which is good for my use case. If the helper scripts are required to

They are still optional.

> get the data into the proper format then the scripts are not quite so
> optional, they become the expected environment. I think the proper
> format should exist without the helper scripts.

That's a good point. A couple things,

First off, supporting TAP13, either in the kernel or the wrapper
script is not hard, but I don't think that is the real issue that you
raise.

If your only concern is that you will always be able to have human
readable KUnit results printed to the kernel log, that is a guarantee
I feel comfortable making. Beyond that, I think it is going to take a
long while before I would feel comfortable guaranteeing anything about
how KUnit will work, what kind of data it will want to expose, and how
it will be organized. I think the wrapper script provides a nice
facade that I can maintain, can mediate between the implementation
details and the user, and can mediate between the implementation
details and other pieces of software that might want to consume
results.
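
For concreteness, a minimal TAP13-style report for a couple of KUnit
cases would look roughly like the following (purely illustrative; this
is not a format KUnit emits today, and the case names/diagnostics are
just made up from the example tests in this series):

  TAP version 13
  1..2
  ok 1 - example_simple_test
  not ok 2 - string_stream_test_get_string
    ---
    message: "Expected output == \"Foo bar\""
    ...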

[1] https://testanything.org/tap-version-13-specification.html

2019-05-03 00:43:57

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Thu, May 2, 2019 at 4:05 AM Greg KH <[email protected]> wrote:
>
> On Thu, May 02, 2019 at 12:50:53PM +0200, Greg KH wrote:
> > On Wed, May 01, 2019 at 04:01:09PM -0700, Brendan Higgins wrote:
> > > ## TLDR
> > >
> > > I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
> > > 5.2.
> >
> > That might be rushing it, normally trees are already closed now for
> > 5.2-rc1 if 5.1-final comes out this Sunday.
> >
> > > Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
> > > we would merge through your tree when the time came? Am I remembering
> > > correctly?
> >
> > No objection from me.
> >
> > Let me go review the latest round of patches now.
>
> Overall, looks good to me, and provides a framework we can build on.
> I'm a bit annoyed at the reliance on uml at the moment, but we can work
> on that in the future :)

Eh, I mostly fixed that.

I removed the KUnit framework's reliance on UML, i.e. the actual tests
now run on any architecture.

The only UML-dependent bit is the KUnit wrapper scripts, which could
be made to support other architectures pretty trivially. The
only limitation here is that it would be dependent on the actual
workflow you are using.

In any case, if you are comfortable reading the results in the kernel
logs, then there is no dependence on UML. (I should probably provide
some documentation on that...)

>
> Thanks for sticking with this, now the real work begins...

I don't doubt it.

>
> Reviewed-by: Greg Kroah-Hartman <[email protected]>

Does this cover all the patches in this set?

Thanks!

2019-05-03 00:45:59

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Thu, May 2, 2019 at 7:04 AM shuah <[email protected]> wrote:
>
> On 5/2/19 4:50 AM, Greg KH wrote:
> > On Wed, May 01, 2019 at 04:01:09PM -0700, Brendan Higgins wrote:
> >> ## TLDR
> >>
> >> I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
> >> 5.2.
> >
> > That might be rushing it, normally trees are already closed now for
> > 5.2-rc1 if 5.1-final comes out this Sunday.
> >
> >> Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
> >> we would merge through your tree when the time came? Am I remembering
> >> correctly?
> >
> > No objection from me.
> >
>
> Yes. I can take these through kselftest tree when the time comes.

Awesome.

> Agree with Greg that 5.2 might be rushing it. 5.3 would be a good
> target.

Whoops. I guess I should have sent this out a bit earlier. Oh well, as
long as we are on our way!

2019-05-03 01:31:32

by Shuah Khan

[permalink] [raw]
Subject: Re: [PATCH v2 03/17] kunit: test: add string_stream a std::stream like string builder

On 5/1/19 5:01 PM, Brendan Higgins wrote:
> A number of test features need to do pretty complicated string printing
> where it may not be possible to rely on a single preallocated string
> with parameters.
>
> So provide a library for constructing the string as you go similar to
> C++'s std::string.
>
> Signed-off-by: Brendan Higgins <[email protected]>
> ---
> include/kunit/string-stream.h | 51 ++++++++++++
> kunit/Makefile | 3 +-
> kunit/string-stream.c | 144 ++++++++++++++++++++++++++++++++++
> 3 files changed, 197 insertions(+), 1 deletion(-)
> create mode 100644 include/kunit/string-stream.h
> create mode 100644 kunit/string-stream.c
>
> diff --git a/include/kunit/string-stream.h b/include/kunit/string-stream.h
> new file mode 100644
> index 0000000000000..567a4629406da
> --- /dev/null
> +++ b/include/kunit/string-stream.h
> @@ -0,0 +1,51 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * C++ stream style string builder used in KUnit for building messages.
> + *
> + * Copyright (C) 2019, Google LLC.
> + * Author: Brendan Higgins <[email protected]>
> + */
> +
> +#ifndef _KUNIT_STRING_STREAM_H
> +#define _KUNIT_STRING_STREAM_H
> +
> +#include <linux/types.h>
> +#include <linux/spinlock.h>
> +#include <linux/kref.h>
> +#include <stdarg.h>
> +
> +struct string_stream_fragment {
> + struct list_head node;
> + char *fragment;
> +};
> +
> +struct string_stream {
> + size_t length;
> + struct list_head fragments;
> +
> + /* length and fragments are protected by this lock */
> + spinlock_t lock;
> + struct kref refcount;
> +};
> +
> +struct string_stream *new_string_stream(void);
> +
> +void destroy_string_stream(struct string_stream *stream);
> +
> +void string_stream_get(struct string_stream *stream);
> +
> +int string_stream_put(struct string_stream *stream);
> +
> +int string_stream_add(struct string_stream *this, const char *fmt, ...);
> +
> +int string_stream_vadd(struct string_stream *this,
> + const char *fmt,
> + va_list args);
> +
> +char *string_stream_get_string(struct string_stream *this);
> +
> +void string_stream_clear(struct string_stream *this);
> +
> +bool string_stream_is_empty(struct string_stream *this);
> +
> +#endif /* _KUNIT_STRING_STREAM_H */
> diff --git a/kunit/Makefile b/kunit/Makefile
> index 5efdc4dea2c08..275b565a0e81f 100644
> --- a/kunit/Makefile
> +++ b/kunit/Makefile
> @@ -1 +1,2 @@
> -obj-$(CONFIG_KUNIT) += test.o
> +obj-$(CONFIG_KUNIT) += test.o \
> + string-stream.o
> diff --git a/kunit/string-stream.c b/kunit/string-stream.c
> new file mode 100644
> index 0000000000000..7018194ecf2fa
> --- /dev/null
> +++ b/kunit/string-stream.c
> @@ -0,0 +1,144 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * C++ stream style string builder used in KUnit for building messages.
> + *
> + * Copyright (C) 2019, Google LLC.
> + * Author: Brendan Higgins <[email protected]>
> + */
> +
> +#include <linux/list.h>
> +#include <linux/slab.h>
> +#include <kunit/string-stream.h>
> +
> +int string_stream_vadd(struct string_stream *this,
> + const char *fmt,
> + va_list args)
> +{
> + struct string_stream_fragment *fragment;

Since there is a field with the same name, please use a different
name. Using the same name for the struct which contains a field
of the same name gets very confusing and will make the code hard
to maintain.
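
For example, something like this would keep the two apart (the new
local name is only a suggestion, not what the patch does):

        struct string_stream_fragment *frag_container;
        ...
        /* rename the local so it no longer reads like the member */
        frag_container = kmalloc(sizeof(*frag_container), GFP_KERNEL);
        if (!frag_container)
                return -ENOMEM;

        /* the char pointer member keeps its current name */
        frag_container->fragment = kmalloc(len, GFP_KERNEL);
        if (!frag_container->fragment) {
                kfree(frag_container);
                return -ENOMEM;
        }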

> + int len;
> + va_list args_for_counting;
> + unsigned long flags;
> +
> + /* Make a copy because `vsnprintf` could change it */
> + va_copy(args_for_counting, args);
> +
> + /* Need space for null byte. */
> + len = vsnprintf(NULL, 0, fmt, args_for_counting) + 1;
> +
> + va_end(args_for_counting);
> +
> + fragment = kmalloc(sizeof(*fragment), GFP_KERNEL);
> + if (!fragment)
> + return -ENOMEM;
> +
> + fragment->fragment = kmalloc(len, GFP_KERNEL);

This is confusing. See above comment.


> + if (!fragment->fragment) {
> + kfree(fragment);
> + return -ENOMEM;
> + }
> +
> + len = vsnprintf(fragment->fragment, len, fmt, args);
> + spin_lock_irqsave(&this->lock, flags);
> + this->length += len;
> + list_add_tail(&fragment->node, &this->fragments);
> + spin_unlock_irqrestore(&this->lock, flags);
> + return 0;
> +}
> +
> +int string_stream_add(struct string_stream *this, const char *fmt, ...)
> +{
> + va_list args;
> + int result;
> +
> + va_start(args, fmt);
> + result = string_stream_vadd(this, fmt, args);
> + va_end(args);
> + return result;
> +}
> +
> +void string_stream_clear(struct string_stream *this)
> +{
> + struct string_stream_fragment *fragment, *fragment_safe;
> + unsigned long flags;
> +
> + spin_lock_irqsave(&this->lock, flags);
> + list_for_each_entry_safe(fragment,
> + fragment_safe,
> + &this->fragments,
> + node) {
> + list_del(&fragment->node);
> + kfree(fragment->fragment);
> + kfree(fragment);

This is what got me down the road of checking the structure
name to begin with. :)

> + }
> + this->length = 0;
> + spin_unlock_irqrestore(&this->lock, flags);
> +}
> +
> +char *string_stream_get_string(struct string_stream *this)
> +{
> + struct string_stream_fragment *fragment;
> + size_t buf_len = this->length + 1; /* +1 for null byte. */
> + char *buf;
> + unsigned long flags;
> +
> + buf = kzalloc(buf_len, GFP_KERNEL);
> + if (!buf)
> + return NULL;
> +
> + spin_lock_irqsave(&this->lock, flags);
> + list_for_each_entry(fragment, &this->fragments, node)
> + strlcat(buf, fragment->fragment, buf_len);
> + spin_unlock_irqrestore(&this->lock, flags);
> +
> + return buf;
> +}
> +
> +bool string_stream_is_empty(struct string_stream *this)
> +{
> + bool is_empty;
> + unsigned long flags;
> +
> + spin_lock_irqsave(&this->lock, flags);
> + is_empty = list_empty(&this->fragments);
> + spin_unlock_irqrestore(&this->lock, flags);
> +
> + return is_empty;
> +}
> +
> +void destroy_string_stream(struct string_stream *stream)
> +{
> + string_stream_clear(stream);
> + kfree(stream);
> +}
> +
> +static void string_stream_destroy(struct kref *kref)
> +{
> + struct string_stream *stream = container_of(kref,
> + struct string_stream,
> + refcount);
> + destroy_string_stream(stream);
> +}
> +
> +struct string_stream *new_string_stream(void)
> +{
> + struct string_stream *stream = kzalloc(sizeof(*stream), GFP_KERNEL);
> +
> + if (!stream)
> + return NULL;
> +
> + INIT_LIST_HEAD(&stream->fragments);
> + spin_lock_init(&stream->lock);
> + kref_init(&stream->refcount);
> + return stream;
> +}
> +
> +void string_stream_get(struct string_stream *stream)
> +{
> + kref_get(&stream->refcount);
> +}
> +
> +int string_stream_put(struct string_stream *stream)
> +{
> + return kref_put(&stream->refcount, &string_stream_destroy);
> +}
> +
>

thanks,
-- Shuah

2019-05-03 01:33:54

by Shuah Khan

[permalink] [raw]
Subject: Re: [PATCH v2 07/17] kunit: test: add initial tests

On 5/1/19 5:01 PM, Brendan Higgins wrote:
> Add a test for string stream along with a simpler example.
>
> Signed-off-by: Brendan Higgins <[email protected]>
> ---
> kunit/Kconfig | 12 ++++++
> kunit/Makefile | 4 ++
> kunit/example-test.c | 88 ++++++++++++++++++++++++++++++++++++++
> kunit/string-stream-test.c | 61 ++++++++++++++++++++++++++
> 4 files changed, 165 insertions(+)
> create mode 100644 kunit/example-test.c
> create mode 100644 kunit/string-stream-test.c
>
> diff --git a/kunit/Kconfig b/kunit/Kconfig
> index 64480092b2c24..5cb500355c873 100644
> --- a/kunit/Kconfig
> +++ b/kunit/Kconfig
> @@ -13,4 +13,16 @@ config KUNIT
> special hardware. For more information, please see
> Documentation/kunit/
>
> +config KUNIT_TEST
> + bool "KUnit test for KUnit"
> + depends on KUNIT
> + help
> + Enables KUnit test to test KUnit.
> +

Please add a bit more information on what this config option
does. Why should a user care to enable it?

> +config KUNIT_EXAMPLE_TEST
> + bool "Example test for KUnit"
> + depends on KUNIT
> + help
> + Enables example KUnit test to demo features of KUnit.
> +

Same here.
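
Something along these lines, perhaps (wording is only a suggestion):

  config KUNIT_TEST
          bool "KUnit test for KUnit"
          depends on KUNIT
          help
            Enables the unit tests that exercise KUnit's own internals
            (for example the string_stream code). These are mainly of
            interest to people working on KUnit itself; if unsure, say N.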

> endmenu
> diff --git a/kunit/Makefile b/kunit/Makefile
> index 6ddc622ee6b1c..60a9ea6cb4697 100644
> --- a/kunit/Makefile
> +++ b/kunit/Makefile
> @@ -1,3 +1,7 @@
> obj-$(CONFIG_KUNIT) += test.o \
> string-stream.o \
> kunit-stream.o
> +
> +obj-$(CONFIG_KUNIT_TEST) += string-stream-test.o
> +
> +obj-$(CONFIG_KUNIT_EXAMPLE_TEST) += example-test.o
> diff --git a/kunit/example-test.c b/kunit/example-test.c
> new file mode 100644
> index 0000000000000..3947dd7c8f922
> --- /dev/null
> +++ b/kunit/example-test.c
> @@ -0,0 +1,88 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Example KUnit test to show how to use KUnit.
> + *
> + * Copyright (C) 2019, Google LLC.
> + * Author: Brendan Higgins <[email protected]>
> + */
> +
> +#include <kunit/test.h>
> +
> +/*
> + * This is the most fundamental element of KUnit, the test case. A test case
> + * makes a set of EXPECTATIONs and ASSERTIONs about the behavior of some code; if
> + * any expectations or assertions are not met, the test fails; otherwise, the
> + * test passes.
> + *
> + * In KUnit, a test case is just a function with the signature
> + * `void (*)(struct kunit *)`. `struct kunit` is a context object that stores
> + * information about the current test.
> + */
> +static void example_simple_test(struct kunit *test)
> +{
> + /*
> + * This is an EXPECTATION; it is how KUnit tests things. When you want
> + * to test a piece of code, you set some expectations about what the
> + * code should do. KUnit then runs the test and verifies that the code's
> + * behavior matched what was expected.
> + */
> + KUNIT_EXPECT_EQ(test, 1 + 1, 2);
> +}
> +
> +/*
> + * This is run once before each test case, see the comment on
> + * example_test_module for more information.
> + */
> +static int example_test_init(struct kunit *test)
> +{
> + kunit_info(test, "initializing\n");
> +
> + return 0;
> +}
> +
> +/*
> + * Here we make a list of all the test cases we want to add to the test module
> + * below.
> + */
> +static struct kunit_case example_test_cases[] = {
> + /*
> + * This is a helper to create a test case object from a test case
> + * function; its exact function is not important to understand how to
> + * use KUnit, just know that this is how you associate test cases with a
> + * test module.
> + */
> + KUNIT_CASE(example_simple_test),
> + {},
> +};
> +
> +/*
> + * This defines a suite or grouping of tests.
> + *
> + * Test cases are defined as belonging to the suite by adding them to
> + * `kunit_cases`.
> + *
> + * Often it is desirable to run some function which will set up things which
> + * will be used by every test; this is accomplished with an `init` function
> + * which runs before each test case is invoked. Similarly, an `exit` function
> + * may be specified which runs after every test case and can be used to for
> + * cleanup. For clarity, running tests in a test module would behave as follows:
> + *
> + * module.init(test);
> + * module.test_case[0](test);
> + * module.exit(test);
> + * module.init(test);
> + * module.test_case[1](test);
> + * module.exit(test);
> + * ...;
> + */
> +static struct kunit_module example_test_module = {
> + .name = "example",
> + .init = example_test_init,
> + .test_cases = example_test_cases,
> +};
> +
> +/*
> + * This registers the above test module telling KUnit that this is a suite of
> + * tests that need to be run.
> + */
> +module_test(example_test_module);
> diff --git a/kunit/string-stream-test.c b/kunit/string-stream-test.c
> new file mode 100644
> index 0000000000000..b2a98576797c9
> --- /dev/null
> +++ b/kunit/string-stream-test.c
> @@ -0,0 +1,61 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * KUnit test for struct string_stream.
> + *
> + * Copyright (C) 2019, Google LLC.
> + * Author: Brendan Higgins <[email protected]>
> + */
> +
> +#include <linux/slab.h>
> +#include <kunit/test.h>
> +#include <kunit/string-stream.h>
> +
> +static void string_stream_test_get_string(struct kunit *test)
> +{
> + struct string_stream *stream = new_string_stream();
> + char *output;
> +
> + string_stream_add(stream, "Foo");
> + string_stream_add(stream, " %s", "bar");
> +
> + output = string_stream_get_string(stream);
> + KUNIT_EXPECT_STREQ(test, output, "Foo bar");
> + kfree(output);
> + destroy_string_stream(stream);
> +}
> +
> +static void string_stream_test_add_and_clear(struct kunit *test)
> +{
> + struct string_stream *stream = new_string_stream();
> + char *output;
> + int i;
> +
> + for (i = 0; i < 10; i++)
> + string_stream_add(stream, "A");
> +
> + output = string_stream_get_string(stream);
> + KUNIT_EXPECT_STREQ(test, output, "AAAAAAAAAA");
> + KUNIT_EXPECT_EQ(test, stream->length, 10);
> + KUNIT_EXPECT_FALSE(test, string_stream_is_empty(stream));
> + kfree(output);
> +
> + string_stream_clear(stream);
> +
> + output = string_stream_get_string(stream);
> + KUNIT_EXPECT_STREQ(test, output, "");
> + KUNIT_EXPECT_TRUE(test, string_stream_is_empty(stream));
> + destroy_string_stream(stream);
> +}
> +
> +static struct kunit_case string_stream_test_cases[] = {
> + KUNIT_CASE(string_stream_test_get_string),
> + KUNIT_CASE(string_stream_test_add_and_clear),
> + {}
> +};
> +
> +static struct kunit_module string_stream_test_module = {
> + .name = "string-stream-test",
> + .test_cases = string_stream_test_cases
> +};
> +module_test(string_stream_test_module);
> +
>

2019-05-03 02:01:18

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH v2 12/17] kunit: tool: add Python wrappers for running KUnit tests

On 5/2/19 4:45 PM, Brendan Higgins wrote:
> On Thu, May 2, 2019 at 2:16 PM Frank Rowand <[email protected]> wrote:
>>
>> On 5/2/19 11:07 AM, Brendan Higgins wrote:
>>> On Thu, May 2, 2019 at 4:02 AM Greg KH <[email protected]> wrote:
>>>>
>>>> On Wed, May 01, 2019 at 04:01:21PM -0700, Brendan Higgins wrote:
>>>>> From: Felix Guo <[email protected]>
>>>>>
>>>>> The ultimate goal is to create minimal isolated test binaries; in the
>>>>> meantime we are using UML to provide the infrastructure to run tests, so
>>>>> define an abstract way to configure and run tests that allow us to
>>>>> change the context in which tests are built without affecting the user.
>>>>> This also makes pretty and dynamic error reporting, and a lot of other
>>>>> nice features easier.
>>>>>
>>>>> kunit_config.py:
>>>>> - parse .config and Kconfig files.
>>>>>
>>>>> kunit_kernel.py: provides helper functions to:
>>>>> - configure the kernel using kunitconfig.
>>>>> - build the kernel with the appropriate configuration.
>>>>> - provide function to invoke the kernel and stream the output back.
>>>>>
>>>>> Signed-off-by: Felix Guo <[email protected]>
>>>>> Signed-off-by: Brendan Higgins <[email protected]>
>>>>
>>>> Ah, here's probably my answer to my previous logging format question,
>>>> right? What's the chance that these wrappers output stuff in a standard
>>>> format that test-framework-tools can already parse? :)
>
> To be clear, the test-framework-tools format we are talking about is
> TAP13[1], correct?

I'm not sure what the test community prefers for a format. I'll let them
jump in and debate that question.


>
> My understanding is that is what kselftest is being converted to use.
>
>>>
>>> It should be pretty easy to do. I had some patches that pack up the
>>> results into a serialized format for a presubmit service; it should be
>>> pretty straightforward to take the same logic and just change the
>>> output format.
>>
>> When examining and trying out the previous versions of the patch I found
>> the wrappers useful to provide information about how to control and use
>> the tests, but I had no interest in using the scripts as they do not
>> fit in with my personal environment and workflow.
>>
>> In the previous versions of the patch, these helper scripts are optional,
>> which is good for my use case. If the helper scripts are required to
>
> They are still optional.
>
>> get the data into the proper format then the scripts are not quite so
>> optional, they become the expected environment. I think the proper
>> format should exist without the helper scripts.
>
> That's a good point. A couple things,
>
> First off, supporting TAP13, either in the kernel or the wrapper
> script is not hard, but I don't think that is the real issue that you
> raise.
>
> If your only concern is that you will always be able to have human
> readable KUnit results printed to the kernel log, that is a guarantee
> I feel comfortable making. Beyond that, I think it is going to take a
> long while before I would feel comfortable guaranteeing anything about
> how KUnit will work, what kind of data it will want to expose, and how
> it will be organized. I think the wrapper script provides a nice
> facade that I can maintain, can mediate between the implementation
> details and the user, and can mediate between the implementation
> details and other pieces of software that might want to consume
> results.
>
> [1] https://testanything.org/tap-version-13-specification.html

My concern is based on a focus on my little part of the world
(which in _previous_ versions of the patch series was the devicetree
unittest.c tests being converted to use the kunit infrastructure).
If I step back and think of the entire kernel globally I may end
up with a different conclusion - but I'm going to remain myopic
for this email.

I want the test results to be usable by me and my fellow
developers. I prefer that the test results be easily accessible
(current printk() implementation means that kunit messages are
just as accessible as the current unittest.c printk() output).
If the printk() output needs to be filtered through a script
to generate the actual test results then that is sub-optimal
to me. It is one more step added to my workflow. And
potentially with an embedded target a major pain to get a
data file (the kernel log file) transferred from a target
to my development host.

I want a reported test failure to be easy to trace back to the
point in the source where the failure is reported. With printk()
the search is a simple grep for the failure message. If the
failure message has been processed by a script, and then the
failure reported to me in an email, then I may have to look
at the script to reverse engineer how the original failure
message was transformed into the message that was reported
to me in the email. Then I search for the point in the
source where the failure is reported. So a basic task has
just become more difficult and time consuming.

-Frank

2019-05-03 02:09:56

by Shuah Khan

[permalink] [raw]
Subject: Re: [PATCH v2 04/17] kunit: test: add kunit_stream a std::stream like logger

On 5/1/19 5:01 PM, Brendan Higgins wrote:
> A lot of the expectation and assertion infrastructure prints out fairly
> complicated test failure messages, so add a C++ style log library
> for logging test results.
>
> Signed-off-by: Brendan Higgins <[email protected]>
> ---
> include/kunit/kunit-stream.h | 85 ++++++++++++++++++++
> include/kunit/test.h | 2 +
> kunit/Makefile | 3 +-
> kunit/kunit-stream.c | 149 +++++++++++++++++++++++++++++++++++
> kunit/test.c | 8 ++
> 5 files changed, 246 insertions(+), 1 deletion(-)
> create mode 100644 include/kunit/kunit-stream.h
> create mode 100644 kunit/kunit-stream.c
>
> diff --git a/include/kunit/kunit-stream.h b/include/kunit/kunit-stream.h
> new file mode 100644
> index 0000000000000..d457a54fe0100
> --- /dev/null
> +++ b/include/kunit/kunit-stream.h
> @@ -0,0 +1,85 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * C++ stream style string formatter and printer used in KUnit for outputting
> + * KUnit messages.
> + *
> + * Copyright (C) 2019, Google LLC.
> + * Author: Brendan Higgins <[email protected]>
> + */
> +
> +#ifndef _KUNIT_KUNIT_STREAM_H
> +#define _KUNIT_KUNIT_STREAM_H
> +
> +#include <linux/types.h>
> +#include <kunit/string-stream.h>
> +
> +struct kunit;
> +
> +/**
> + * struct kunit_stream - a std::stream style string builder.
> + *
> + * A std::stream style string builder. Allows messages to be built up and
> + * printed all at once.
> + */
> +struct kunit_stream {
> + /* private: internal use only. */
> + struct kunit *test;
> + spinlock_t lock; /* Guards level. */
> + const char *level;
> + struct string_stream *internal_stream;
> +};
> +
> +/**
> + * kunit_new_stream() - constructs a new &struct kunit_stream.
> + * @test: The test context object.
> + *
> + * Constructs a new test managed &struct kunit_stream.
> + */
> +struct kunit_stream *kunit_new_stream(struct kunit *test);
> +
> +/**
> + * kunit_stream_set_level(): sets the level that string should be printed at.
> + * @this: the stream being operated on.
> + * @level: the print level the stream is set to output to.
> + *
> + * Sets the print level at which the stream outputs.
> + */
> +void kunit_stream_set_level(struct kunit_stream *this, const char *level);
> +
> +/**
> + * kunit_stream_add(): adds the formatted input to the internal buffer.
> + * @this: the stream being operated on.
> + * @fmt: printf style format string to append to stream.
> + *
> + * Appends the formatted string, @fmt, to the internal buffer.
> + */
> +void __printf(2, 3) kunit_stream_add(struct kunit_stream *this,
> + const char *fmt, ...);
> +
> +/**
> + * kunit_stream_append(): appends the contents of @other to @this.
> + * @this: the stream to which @other is appended.
> + * @other: the stream whose contents are appended to @this.
> + *
> + * Appends the contents of @other to @this.
> + */
> +void kunit_stream_append(struct kunit_stream *this, struct kunit_stream *other);
> +
> +/**
> + * kunit_stream_commit(): prints out the internal buffer to the user.
> + * @this: the stream being operated on.
> + *
> + * Outputs the contents of the internal buffer as a kunit_printk formatted
> + * output.
> + */
> +void kunit_stream_commit(struct kunit_stream *this);
> +
> +/**
> + * kunit_stream_clear(): clears the internal buffer.
> + * @this: the stream being operated on.
> + *
> + * Clears the contents of the internal buffer.
> + */
> +void kunit_stream_clear(struct kunit_stream *this);
> +
> +#endif /* _KUNIT_KUNIT_STREAM_H */
> diff --git a/include/kunit/test.h b/include/kunit/test.h
> index 819edd8db4e81..4668e8a635954 100644
> --- a/include/kunit/test.h
> +++ b/include/kunit/test.h
> @@ -11,6 +11,7 @@
>
> #include <linux/types.h>
> #include <linux/slab.h>
> +#include <kunit/kunit-stream.h>
>
> struct kunit_resource;
>
> @@ -171,6 +172,7 @@ struct kunit {
> void (*vprintk)(const struct kunit *test,
> const char *level,
> struct va_format *vaf);
> + void (*fail)(struct kunit *test, struct kunit_stream *stream);
> };
>
> int kunit_init_test(struct kunit *test, const char *name);
> diff --git a/kunit/Makefile b/kunit/Makefile
> index 275b565a0e81f..6ddc622ee6b1c 100644
> --- a/kunit/Makefile
> +++ b/kunit/Makefile
> @@ -1,2 +1,3 @@
> obj-$(CONFIG_KUNIT) += test.o \
> - string-stream.o
> + string-stream.o \
> + kunit-stream.o
> diff --git a/kunit/kunit-stream.c b/kunit/kunit-stream.c
> new file mode 100644
> index 0000000000000..93c14eec03844
> --- /dev/null
> +++ b/kunit/kunit-stream.c
> @@ -0,0 +1,149 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * C++ stream style string formatter and printer used in KUnit for outputting
> + * KUnit messages.
> + *
> + * Copyright (C) 2019, Google LLC.
> + * Author: Brendan Higgins <[email protected]>
> + */
> +
> +#include <kunit/test.h>
> +#include <kunit/kunit-stream.h>
> +#include <kunit/string-stream.h>
> +
> +const char *kunit_stream_get_level(struct kunit_stream *this)
> +{
> + unsigned long flags;
> + const char *level;
> +
> + spin_lock_irqsave(&this->lock, flags);
> + level = this->level;
> + spin_unlock_irqrestore(&this->lock, flags);
> +
> + return level;
> +}
> +
> +void kunit_stream_set_level(struct kunit_stream *this, const char *level)
> +{
> + unsigned long flags;
> +
> + spin_lock_irqsave(&this->lock, flags);
> + this->level = level;
> + spin_unlock_irqrestore(&this->lock, flags);
> +}
> +
> +void kunit_stream_add(struct kunit_stream *this, const char *fmt, ...)
> +{
> + va_list args;
> + struct string_stream *stream = this->internal_stream;
> +
> + va_start(args, fmt);
> +
> + if (string_stream_vadd(stream, fmt, args) < 0)
> + kunit_err(this->test, "Failed to allocate fragment: %s\n", fmt);
> +
> + va_end(args);
> +}
> +
> +void kunit_stream_append(struct kunit_stream *this,
> + struct kunit_stream *other)
> +{
> + struct string_stream *other_stream = other->internal_stream;
> + const char *other_content;
> +
> + other_content = string_stream_get_string(other_stream);
> +
> + if (!other_content) {
> + kunit_err(this->test,
> + "Failed to get string from second argument for appending.\n");
> + return;
> + }
> +
> + kunit_stream_add(this, other_content);
> +}
> +
> +void kunit_stream_clear(struct kunit_stream *this)
> +{
> + string_stream_clear(this->internal_stream);
> +}
> +
> +void kunit_stream_commit(struct kunit_stream *this)
> +{
> + struct string_stream *stream = this->internal_stream;
> + struct string_stream_fragment *fragment;
> + const char *level;
> + char *buf;
> +
> + level = kunit_stream_get_level(this);
> + if (!level) {
> + kunit_err(this->test,
> + "Stream was committed without a specified log level.\n");
> + level = KERN_ERR;
> + kunit_stream_set_level(this, level);
> + }
> +
> + buf = string_stream_get_string(stream);
> + if (!buf) {
> + kunit_err(this->test,
> + "Could not allocate buffer, dumping stream:\n");
> + list_for_each_entry(fragment, &stream->fragments, node) {
> + kunit_err(this->test, fragment->fragment);
> + }
> + kunit_err(this->test, "\n");
> + goto cleanup;
> + }
> +
> + kunit_printk(level, this->test, buf);
> + kfree(buf);
> +
> +cleanup:
> + kunit_stream_clear(this);
> +}
> +
> +static int kunit_stream_init(struct kunit_resource *res, void *context)
> +{
> + struct kunit *test = context;
> + struct kunit_stream *stream;
> +
> + stream = kzalloc(sizeof(*stream), GFP_KERNEL);
> + if (!stream)
> + return -ENOMEM;
> + res->allocation = stream;
> + stream->test = test;
> + spin_lock_init(&stream->lock);
> + stream->internal_stream = new_string_stream();
> +
> + if (!stream->internal_stream)
> + return -ENOMEM;

What happens to stream? Don't you want to free that?
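
Something like this on the error path would avoid leaking it (sketch
only, assuming the resource framework does not free res->allocation
itself when the init callback fails):

        stream->internal_stream = new_string_stream();
        if (!stream->internal_stream) {
                kfree(stream);  /* don't leak the outer allocation */
                return -ENOMEM;
        }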

> +
> + return 0;
> +}
> +
> +static void kunit_stream_free(struct kunit_resource *res)
> +{
> + struct kunit_stream *stream = res->allocation;
> +
> + if (!string_stream_is_empty(stream->internal_stream)) {
> + kunit_err(stream->test,
> + "End of test case reached with uncommitted stream entries.\n");
> + kunit_stream_commit(stream);
> + }
> +
> + destroy_string_stream(stream->internal_stream);
> + kfree(stream);
> +}
> +
> +struct kunit_stream *kunit_new_stream(struct kunit *test)
> +{
> + struct kunit_resource *res;
> +
> + res = kunit_alloc_resource(test,
> + kunit_stream_init,
> + kunit_stream_free,
> + test);
> +
> + if (res)
> + return res->allocation;
> + else
> + return NULL;
> +}
> diff --git a/kunit/test.c b/kunit/test.c
> index 541f9adb1608c..f7575b127e2df 100644
> --- a/kunit/test.c
> +++ b/kunit/test.c
> @@ -63,12 +63,20 @@ static void kunit_vprintk(const struct kunit *test,
> "kunit %s: %pV", test->name, vaf);
> }
>
> +static void kunit_fail(struct kunit *test, struct kunit_stream *stream)
> +{
> + kunit_set_success(test, false);
> + kunit_stream_set_level(stream, KERN_ERR);
> + kunit_stream_commit(stream);
> +}
> +
> int kunit_init_test(struct kunit *test, const char *name)
> {
> spin_lock_init(&test->lock);
> INIT_LIST_HEAD(&test->resources);
> test->name = name;
> test->vprintk = kunit_vprintk;
> + test->fail = kunit_fail;
>
> return 0;
> }
>

thanks,
-- Shuah

2019-05-03 03:18:18

by Logan Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH v2 08/17] kunit: test: add support for test abort



On 2019-05-01 5:01 p.m., Brendan Higgins wrote:
> +/*
> + * struct kunit_try_catch - provides a generic way to run code which might fail.
> + * @context: used to pass user data to the try and catch functions.
> + *
> + * kunit_try_catch provides a generic, architecture independent way to execute
> + * an arbitrary function of type kunit_try_catch_func_t which may bail out by
> + * calling kunit_try_catch_throw(). If kunit_try_catch_throw() is called, @try
> + * is stopped at the site of invocation and @catch is called.

I found some of the C++ comparisons in this series a bit distasteful but
wasn't going to say anything until I saw the try catch.... But looking
into the implementation it's just a thread that can exit early which
seems fine to me. Just a poor choice of name I guess...

[snip]

> +static void __noreturn kunit_abort(struct kunit *test)
> +{
> + kunit_set_death_test(test, true);
> +
> + kunit_try_catch_throw(&test->try_catch);
> +
> + /*
> + * Throw could not abort from test.
> + *
> + * XXX: we should never reach this line! As kunit_try_catch_throw is
> + * marked __noreturn.
> + */
> + WARN_ONCE(true, "Throw could not abort from test!\n");
> +}
> +
> int kunit_init_test(struct kunit *test, const char *name)
> {
> spin_lock_init(&test->lock);
> @@ -77,6 +103,7 @@ int kunit_init_test(struct kunit *test, const char *name)
> test->name = name;
> test->vprintk = kunit_vprintk;
> test->fail = kunit_fail;
> + test->abort = kunit_abort;

There are a number of these function pointers which seem to be pointless
to me as you only ever set them to one function. Just call the function
directly. As it is, it is an unnecessary indirection for someone reading
the code. If and when you have multiple implementations of the function
then add the pointer. Don't assume you're going to need it later on and
add all this maintenance burden if you never use it.
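
For instance, going by the hunks above, the failure path could just be
(a sketch of the suggestion, not a drop-in patch):

        /* kunit_fail() stays a plain static helper ... */
        static void kunit_fail(struct kunit *test, struct kunit_stream *stream)
        {
                kunit_set_success(test, false);
                kunit_stream_set_level(stream, KERN_ERR);
                kunit_stream_commit(stream);
        }

        /* ... and callers invoke it directly instead of test->fail(test, stream) */
        kunit_fail(test, stream);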

[snip]

> +void kunit_generic_try_catch_init(struct kunit_try_catch *try_catch)
> +{
> + try_catch->run = kunit_generic_run_try_catch;
> + try_catch->throw = kunit_generic_throw;
> +}

Same here. There's only one implementation of try_catch and I can't
really see any sensible justification for another implementation. Even
if there is, add the indirection when the second implementation is
added. This isn't C++ and we don't need to make everything a "method".

Thanks,

Logan

2019-05-03 03:20:31

by Logan Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework



On 2019-05-01 5:01 p.m., Brendan Higgins wrote:
> ## TLDR
>
> I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
> 5.2.
>

As I said on the last posting, I like this and would like to see it move
forward. I still have the same concerns over the downsides of using UML
(ie. not being able to compile large swaths of the tree due to features
that don't exist in that arch) but these are concerns for later.

I'd prefer to see the unnecessary indirection that I pointed out in
patch 8 cleaned up but, besides that, the code looks good to me.

Reviewed-by: Logan Gunthorpe <[email protected]>

Thanks!

Logan

2019-05-03 05:29:54

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 07/17] kunit: test: add initial tests

On Thu, May 2, 2019 at 6:27 PM shuah <[email protected]> wrote:
>
> On 5/1/19 5:01 PM, Brendan Higgins wrote:
> > Add a test for string stream along with a simpler example.
> >
> > Signed-off-by: Brendan Higgins <[email protected]>
> > ---
> > kunit/Kconfig | 12 ++++++
> > kunit/Makefile | 4 ++
> > kunit/example-test.c | 88 ++++++++++++++++++++++++++++++++++++++
> > kunit/string-stream-test.c | 61 ++++++++++++++++++++++++++
> > 4 files changed, 165 insertions(+)
> > create mode 100644 kunit/example-test.c
> > create mode 100644 kunit/string-stream-test.c
> >
> > diff --git a/kunit/Kconfig b/kunit/Kconfig
> > index 64480092b2c24..5cb500355c873 100644
> > --- a/kunit/Kconfig
> > +++ b/kunit/Kconfig
> > @@ -13,4 +13,16 @@ config KUNIT
> > special hardware. For more information, please see
> > Documentation/kunit/
> >
> > +config KUNIT_TEST
> > + bool "KUnit test for KUnit"
> > + depends on KUNIT
> > + help
> > + Enables KUnit test to test KUnit.
> > +
>
> Please add a bit more information on what this config option
> does. Why should a user care to enable it?
>
> > +config KUNIT_EXAMPLE_TEST
> > + bool "Example test for KUnit"
> > + depends on KUNIT
> > + help
> > + Enables example KUnit test to demo features of KUnit.
> > +
>
> Same here.

Sounds reasonable. Will fix in the next revision.

< snip >

Thanks!

2019-05-03 05:36:52

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 03/17] kunit: test: add string_stream a std::stream like string builder

On Thu, May 2, 2019 at 6:26 PM shuah <[email protected]> wrote:
>
> On 5/1/19 5:01 PM, Brendan Higgins wrote:
< snip >
> > diff --git a/kunit/Makefile b/kunit/Makefile
> > index 5efdc4dea2c08..275b565a0e81f 100644
> > --- a/kunit/Makefile
> > +++ b/kunit/Makefile
> > @@ -1 +1,2 @@
> > -obj-$(CONFIG_KUNIT) += test.o
> > +obj-$(CONFIG_KUNIT) += test.o \
> > + string-stream.o
> > diff --git a/kunit/string-stream.c b/kunit/string-stream.c
> > new file mode 100644
> > index 0000000000000..7018194ecf2fa
> > --- /dev/null
> > +++ b/kunit/string-stream.c
> > @@ -0,0 +1,144 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * C++ stream style string builder used in KUnit for building messages.
> > + *
> > + * Copyright (C) 2019, Google LLC.
> > + * Author: Brendan Higgins <[email protected]>
> > + */
> > +
> > +#include <linux/list.h>
> > +#include <linux/slab.h>
> > +#include <kunit/string-stream.h>
> > +
> > +int string_stream_vadd(struct string_stream *this,
> > + const char *fmt,
> > + va_list args)
> > +{
> > + struct string_stream_fragment *fragment;
>
> Since there is a field with the same name, please use a different
> name. Using the same name for the struct which contains a field
> of the same name gets very confusing and will make the code hard
> to maintain.
>
> > + int len;
> > + va_list args_for_counting;
> > + unsigned long flags;
> > +
> > + /* Make a copy because `vsnprintf` could change it */
> > + va_copy(args_for_counting, args);
> > +
> > + /* Need space for null byte. */
> > + len = vsnprintf(NULL, 0, fmt, args_for_counting) + 1;
> > +
> > + va_end(args_for_counting);
> > +
> > + fragment = kmalloc(sizeof(*fragment), GFP_KERNEL);
> > + if (!fragment)
> > + return -ENOMEM;
> > +
> > + fragment->fragment = kmalloc(len, GFP_KERNEL);
>
> This is confusing. See above comment.

Good point. Will fix in the next revision.

< snip >

Thanks!

2019-05-03 05:41:42

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 12/17] kunit: tool: add Python wrappers for running KUnit tests

On Thu, May 2, 2019 at 6:45 PM Frank Rowand <[email protected]> wrote:
>
> On 5/2/19 4:45 PM, Brendan Higgins wrote:
> > On Thu, May 2, 2019 at 2:16 PM Frank Rowand <[email protected]> wrote:
> >>
> >> On 5/2/19 11:07 AM, Brendan Higgins wrote:
> >>> On Thu, May 2, 2019 at 4:02 AM Greg KH <[email protected]> wrote:
> >>>>
> >>>> On Wed, May 01, 2019 at 04:01:21PM -0700, Brendan Higgins wrote:
> >>>>> From: Felix Guo <[email protected]>
> >>>>>
> >>>>> The ultimate goal is to create minimal isolated test binaries; in the
> >>>>> meantime we are using UML to provide the infrastructure to run tests, so
> >>>>> define an abstract way to configure and run tests that allow us to
> >>>>> change the context in which tests are built without affecting the user.
> >>>>> This also makes pretty and dynamic error reporting, and a lot of other
> >>>>> nice features easier.
> >>>>>
> >>>>> kunit_config.py:
> >>>>> - parse .config and Kconfig files.
> >>>>>
> >>>>> kunit_kernel.py: provides helper functions to:
> >>>>> - configure the kernel using kunitconfig.
> >>>>> - build the kernel with the appropriate configuration.
> >>>>> - provide function to invoke the kernel and stream the output back.
> >>>>>
> >>>>> Signed-off-by: Felix Guo <[email protected]>
> >>>>> Signed-off-by: Brendan Higgins <[email protected]>
> >>>>
> >>>> Ah, here's probably my answer to my previous logging format question,
> >>>> right? What's the chance that these wrappers output stuff in a standard
> >>>> format that test-framework-tools can already parse? :)
> >
> > To be clear, the test-framework-tools format we are talking about is
> > TAP13[1], correct?
>
> I'm not sure what the test community prefers for a format. I'll let them
> jump in and debate that question.
>
>
> >
> > My understanding is that is what kselftest is being converted to use.
> >
> >>>
> >>> It should be pretty easy to do. I had some patches that pack up the
> >>> results into a serialized format for a presubmit service; it should be
> >>> pretty straightforward to take the same logic and just change the
> >>> output format.
> >>
> >> When examining and trying out the previous versions of the patch I found
> >> the wrappers useful to provide information about how to control and use
> >> the tests, but I had no interest in using the scripts as they do not
> >> fit in with my personal environment and workflow.
> >>
> >> In the previous versions of the patch, these helper scripts are optional,
> >> which is good for my use case. If the helper scripts are required to
> >
> > They are still optional.
> >
> >> get the data into the proper format then the scripts are not quite so
> >> optional, they become the expected environment. I think the proper
> >> format should exist without the helper scripts.
> >
> > That's a good point. A couple things,
> >
> > First off, supporting TAP13, either in the kernel or the wrapper
> > script is not hard, but I don't think that is the real issue that you
> > raise.
> >
> > If your only concern is that you will always be able to have human
> > readable KUnit results printed to the kernel log, that is a guarantee
> > I feel comfortable making. Beyond that, I think it is going to take a
> > long while before I would feel comfortable guaranteeing anything about
> > how KUnit will work, what kind of data it will want to expose, and how
> > it will be organized. I think the wrapper script provides a nice
> > facade that I can maintain, can mediate between the implementation
> > details and the user, and can mediate between the implementation
> > details and other pieces of software that might want to consume
> > results.
> >
> > [1] https://testanything.org/tap-version-13-specification.html
>
> My concern is based on a focus on my little part of the world
> (which in _previous_ versions of the patch series was the devicetree
> unittest.c tests being converted to use the kunit infrastructure).
> If I step back and think of the entire kernel globally I may end
> up with a different conclusion - but I'm going to remain myopic
> for this email.
>
> I want the test results to be usable by me and my fellow
> developers. I prefer that the test results be easily accessible
> (current printk() implementation means that kunit messages are
> just as accessible as the current unittest.c printk() output).
> If the printk() output needs to be filtered through a script
> to generate the actual test results then that is sub-optimal
> to me. It is one more step added to my workflow. And
> potentially with an embedded target a major pain to get a
> data file (the kernel log file) transferred from a target
> to my development host.

That's fair. If that is indeed your only concern, then I don't think
the wrapper script will ever be an issue for you. You will always be
able to execute a given test the old-fashioned/manual way, and the
wrapper script only summarizes results; it does not change the
contents.

>
> I want a reported test failure to be easy to trace back to the
> point in the source where the failure is reported. With printk()
> the search is a simple grep for the failure message. If the
> failure message has been processed by a script, and then the
> failure reported to me in an email, then I may have to look
> at the script to reverse engineer how the original failure
> message was transformed into the message that was reported
> to me in the email. Then I search for the point in the
> source where the failure is reported. So a basic task has
> just become more difficult and time consuming.

That seems to be a valid concern. I would reiterate that you shouldn't
be concerned by any processing done by the wrapper script itself, but
the reality is that depending on what happens with automated
testing/presubmit/CI, other people might end up parsing and
transforming test results - it might happen, it might not. I currently
have a CI system set up for KUnit on my public repo that I don't think
you would be offended by, but I don't know what we are going to do
when it comes time to integrate with existing upstream CI systems.

In any case, I don't think that either sticking with or doing away with
the wrapper script is going to have any long term bearing on what
happens in this regard.

Cheers

2019-05-03 05:53:03

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 04/17] kunit: test: add kunit_stream a std::stream like logger

On Thu, May 2, 2019 at 6:50 PM shuah <[email protected]> wrote:
>
> On 5/1/19 5:01 PM, Brendan Higgins wrote:

< snip >

> > diff --git a/kunit/kunit-stream.c b/kunit/kunit-stream.c
> > new file mode 100644
> > index 0000000000000..93c14eec03844
> > --- /dev/null
> > +++ b/kunit/kunit-stream.c
> > @@ -0,0 +1,149 @@

< snip >

> > +static int kunit_stream_init(struct kunit_resource *res, void *context)
> > +{
> > + struct kunit *test = context;
> > + struct kunit_stream *stream;
> > +
> > + stream = kzalloc(sizeof(*stream), GFP_KERNEL);
> > + if (!stream)
> > + return -ENOMEM;
> > + res->allocation = stream;
> > + stream->test = test;
> > + spin_lock_init(&stream->lock);
> > + stream->internal_stream = new_string_stream();
> > +
> > + if (!stream->internal_stream)
> > + return -ENOMEM;
>
> What happens to stream? Don't you want to free that?

Good catch. Will fix in next revision.

< snip >

Cheers

2019-05-03 07:00:32

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH v2 16/17] kernel/sysctl-test: Add null pointer test for sysctl.c:proc_dointvec()

On Thu, May 02, 2019 at 11:45:43AM -0700, Brendan Higgins wrote:
> On Thu, May 2, 2019 at 11:15 AM <[email protected]> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Greg KH
> > >
> > > On Wed, May 01, 2019 at 04:01:25PM -0700, Brendan Higgins wrote:
> > > > From: Iurii Zaikin <[email protected]>
> > > >
> > > > KUnit tests for initialized data behavior of proc_dointvec that is
> > > > explicitly checked in the code. Includes basic parsing tests including
> > > > int min/max overflow.
> > > >
> > > > Signed-off-by: Iurii Zaikin <[email protected]>
> > > > Signed-off-by: Brendan Higgins <[email protected]>
> > > > ---
> > > > kernel/Makefile | 2 +
> > > > kernel/sysctl-test.c | 292 +++++++++++++++++++++++++++++++++++++++++++
> > > > lib/Kconfig.debug | 6 +
> > > > 3 files changed, 300 insertions(+)
> > > > create mode 100644 kernel/sysctl-test.c
> > > >
> > > > diff --git a/kernel/Makefile b/kernel/Makefile
> > > > index 6c57e78817dad..c81a8976b6a4b 100644
> > > > --- a/kernel/Makefile
> > > > +++ b/kernel/Makefile
> > > > @@ -112,6 +112,8 @@ obj-$(CONFIG_HAS_IOMEM) += iomem.o
> > > > obj-$(CONFIG_ZONE_DEVICE) += memremap.o
> > > > obj-$(CONFIG_RSEQ) += rseq.o
> > > >
> > > > +obj-$(CONFIG_SYSCTL_KUNIT_TEST) += sysctl-test.o
> > >
> > > You are going to have to have a "standard" naming scheme for test
> > > modules, are you going to recommend "foo-test" over "test-foo"? If so,
> > > that's fine, we should just be consistant and document it somewhere.
> > >
> > > Personally, I'd prefer "test-foo", but that's just me, naming is hard...
> >
> > My preference would be "test-foo" as well. Just my 2 cents.
>
> I definitely agree we should be consistent. My personal bias
> (unsurprisingly) is "foo-test," but this is just because that is the
> convention I am used to in other projects I have worked on.
>
> On an unbiased note, we are currently almost evenly split between the
> two conventions with *slight* preference for "foo-test": I ran the two
> following grep commands on v5.1-rc7:
>
> grep -Hrn --exclude-dir="build" -e "config [a-zA-Z_0-9]\+_TEST$" | wc -l
> grep -Hrn --exclude-dir="build" -e "config TEST_[a-zA-Z_0-9]\+" | wc -l
>
> "foo-test" has 36 occurrences.
> "test-foo" has 33 occurrences.
>
> The thing I am more concerned about is how this would affect file
> naming. If we have a unit test for foo.c, I think foo_test.c is more
> consistent with our namespacing conventions. The other thing is: if we
> already have a Kconfig symbol called FOO_TEST (or TEST_FOO) what
> should we name the KUnit test in this case? FOO_UNIT_TEST?
> FOO_KUNIT_TEST, like I did above?

Ok, I can live with "foo-test", as you are right, in a directory listing
and config option, it makes more sense to add it as a suffix.

thanks,

greg k-h

2019-05-03 07:00:41

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 08/17] kunit: test: add support for test abort

On Thu, May 2, 2019 at 8:15 PM Logan Gunthorpe <[email protected]> wrote:
>
>
>
> On 2019-05-01 5:01 p.m., Brendan Higgins wrote:
> > +/*
> > + * struct kunit_try_catch - provides a generic way to run code which might fail.
> > + * @context: used to pass user data to the try and catch functions.
> > + *
> > + * kunit_try_catch provides a generic, architecture independent way to execute
> > + * an arbitrary function of type kunit_try_catch_func_t which may bail out by
> > + * calling kunit_try_catch_throw(). If kunit_try_catch_throw() is called, @try
> > + * is stopped at the site of invocation and @catch is called.
>
> I found some of the C++ comparisons in this series a bit distasteful but
> wasn't going to say anything until I saw the try catch.... But looking
> into the implementation it's just a thread that can exit early which
> seems fine to me. Just a poor choice of name I guess...

Guilty as charged (I have a long history with C++, sorry). Would you
prefer I changed the name? I just figured that try-catch is a commonly
understood pattern that describes exactly what I am doing.
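
To illustrate what I mean, the call-site shape I have in mind looks roughly
like this; only kunit_try_catch_throw() and the kunit_try_catch_func_t
signature come from the patch, the function names and bodies here are made
up for illustration:

static void example_try(void *context)
{
	/*
	 * @context carries the user data (e.g. the struct kunit). A failing
	 * assertion in here may call kunit_try_catch_throw(), which stops
	 * example_try() at the point of the failing assertion...
	 */
}

static void example_catch(void *context)
{
	/* ...and the "catch" side runs instead, e.g. to report and clean up. */
}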

>
> [snip]
>
> > +static void __noreturn kunit_abort(struct kunit *test)
> > +{
> > + kunit_set_death_test(test, true);
> > +
> > + kunit_try_catch_throw(&test->try_catch);
> > +
> > + /*
> > + * Throw could not abort from test.
> > + *
> > + * XXX: we should never reach this line! As kunit_try_catch_throw is
> > + * marked __noreturn.
> > + */
> > + WARN_ONCE(true, "Throw could not abort from test!\n");
> > +}
> > +
> > int kunit_init_test(struct kunit *test, const char *name)
> > {
> > spin_lock_init(&test->lock);
> > @@ -77,6 +103,7 @@ int kunit_init_test(struct kunit *test, const char *name)
> > test->name = name;
> > test->vprintk = kunit_vprintk;
> > test->fail = kunit_fail;
> > + test->abort = kunit_abort;
>
> There are a number of these function pointers which seem to be pointless
> to me as you only ever set them to one function. Just call the function
> directly. As it is, it is an unnecessary indirection for someone reading
> the code. If and when you have multiple implementations of the function
> then add the pointer. Don't assume you're going to need it later on and
> add all this maintenance burden if you never use it..

Ah, yes, Frank (and probably others) previously asked me to remove
unnecessary method pointers; I removed all the totally unused ones. As
for these, I don't use them in this patchset, but I use them in my
patchsets that will follow up this one. These in particular are
present so that they can be mocked out for testing.
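
(To illustrate just that last point, and not taken from any follow-up
patchset: a test could install a fake with the same shape as kunit_abort()
above and then check that it was invoked.)

/* Hypothetical fake, for illustration only. */
static bool fake_abort_called;

static void fake_abort(struct kunit *test)
{
	/* Record the call instead of actually aborting. */
	fake_abort_called = true;
}

/* ...and a test of the framework itself would then do: test->abort = fake_abort; */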

>
> [snip]
>
> > +void kunit_generic_try_catch_init(struct kunit_try_catch *try_catch)
> > +{
> > + try_catch->run = kunit_generic_run_try_catch;
> > + try_catch->throw = kunit_generic_throw;
> > +}
>
> Same here. There's only one implementation of try_catch and I can't
> really see any sensible justification for another implementation. Even
> if there is, add the indirection when the second implementation is
> added. This isn't C++ and we don't need to make everything a "method".

These methods are for a UML-specific implementation in a follow-up
patchset, which is needed for some features like crash recovery and death
tests, and which removes the dependence on kthreads.

I know this probably sounds like premature complexity. Arguably it is
in hindsight, but I wrote those features before I pulled out these
interfaces (they were actually both originally in this patchset, but I
dropped them to make this patchset easier to review). I can remove
these methods and add them back in when I actually use them in the
follow up patchsets if you prefer.

Thanks!

2019-05-03 08:12:17

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH v2 12/17] kunit: tool: add Python wrappers for running KUnit tests

On Thu, May 02, 2019 at 04:45:29PM -0700, Brendan Higgins wrote:
> On Thu, May 2, 2019 at 2:16 PM Frank Rowand <[email protected]> wrote:
> >
> > On 5/2/19 11:07 AM, Brendan Higgins wrote:
> > > On Thu, May 2, 2019 at 4:02 AM Greg KH <[email protected]> wrote:
> > >>
> > >> On Wed, May 01, 2019 at 04:01:21PM -0700, Brendan Higgins wrote:
> > >>> From: Felix Guo <[email protected]>
> > >>>
> > >>> The ultimate goal is to create minimal isolated test binaries; in the
> > >>> meantime we are using UML to provide the infrastructure to run tests, so
> > >>> define an abstract way to configure and run tests that allow us to
> > >>> change the context in which tests are built without affecting the user.
> > >>> This also makes pretty and dynamic error reporting, and a lot of other
> > >>> nice features easier.
> > >>>
> > >>> kunit_config.py:
> > >>> - parse .config and Kconfig files.
> > >>>
> > >>> kunit_kernel.py: provides helper functions to:
> > >>> - configure the kernel using kunitconfig.
> > >>> - build the kernel with the appropriate configuration.
> > >>> - provide function to invoke the kernel and stream the output back.
> > >>>
> > >>> Signed-off-by: Felix Guo <[email protected]>
> > >>> Signed-off-by: Brendan Higgins <[email protected]>
> > >>
> > >> Ah, here's probably my answer to my previous logging format question,
> > >> right? What's the chance that these wrappers output stuff in a standard
> > >> format that test-framework-tools can already parse? :)
>
> To be clear, the test-framework-tools format we are talking about is
> TAP13[1], correct?

Yes.

> My understanding is that is what kselftest is being converted to use.

Yes, and I think it's almost done. The core of kselftest provides
functions that all tests can use to log messages in the correct format.

The core of kunit should also log the messages in this format as well,
and not rely on the helper scripts as Frank points out, not everyone
will use/want them. Might as well make it easy for everyone to always
do the right thing and not force it to always be added in later.
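
As a rough illustration of what those helpers look like at a call site
(from memory of tools/testing/selftests/kselftest.h, so treat the exact
helper names as approximate; my_check() is just a placeholder):

#include "../kselftest.h"

/* Placeholder for whatever the test actually verifies. */
static int my_check(void)
{
	return 0;
}

int main(void)
{
	ksft_print_header();		/* prints the TAP header */

	if (my_check() == 0)
		ksft_test_result_pass("my_check\n");
	else
		ksft_test_result_fail("my_check\n");

	return ksft_exit_pass();	/* prints the summary and exits */
}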

thanks,

greg k-h

2019-05-03 13:26:46

by Logan Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH v2 08/17] kunit: test: add support for test abort



On 2019-05-03 12:48 a.m., Brendan Higgins wrote:
> On Thu, May 2, 2019 at 8:15 PM Logan Gunthorpe <[email protected]> wrote:
>> On 2019-05-01 5:01 p.m., Brendan Higgins wrote:
>>> +/*
>>> + * struct kunit_try_catch - provides a generic way to run code which might fail.
>>> + * @context: used to pass user data to the try and catch functions.
>>> + *
>>> + * kunit_try_catch provides a generic, architecture independent way to execute
>>> + * an arbitrary function of type kunit_try_catch_func_t which may bail out by
>>> + * calling kunit_try_catch_throw(). If kunit_try_catch_throw() is called, @try
>>> + * is stopped at the site of invocation and @catch is called.
>>
>> I found some of the C++ comparisons in this series a bit distasteful but
>> wasn't going to say anything until I saw the try catch.... But looking
>> into the implementation it's just a thread that can exit early which
>> seems fine to me. Just a poor choice of name I guess...
>
> Guilty as charged (I have a long history with C++, sorry). Would you
> prefer I changed the name? I just figured that try-catch is a commonly
> understood pattern that describes exactly what I am doing.

It is a commonly understood pattern, but I don't think it's what the
code is doing. Try-catch cleans up an entire stack and allows each level
of the stack to apply local cleanup. This implementation simply exits a
thread and has none of that complexity. To me, it seems like an odd
abstraction here as it's really just a test runner that can exit early
(though I haven't seen the follow-up UML implementation).

I would prefer to see this cleaned up such that the abstraction matches
more what's going on but I don't feel that strongly about it so I'll
leave it up to you to figure out what's best unless other reviewers have
stronger opinions.

>>
>> [snip]
>>
>>> +static void __noreturn kunit_abort(struct kunit *test)
>>> +{
>>> + kunit_set_death_test(test, true);
>>> +
>>> + kunit_try_catch_throw(&test->try_catch);
>>> +
>>> + /*
>>> + * Throw could not abort from test.
>>> + *
>>> + * XXX: we should never reach this line! As kunit_try_catch_throw is
>>> + * marked __noreturn.
>>> + */
>>> + WARN_ONCE(true, "Throw could not abort from test!\n");
>>> +}
>>> +
>>> int kunit_init_test(struct kunit *test, const char *name)
>>> {
>>> spin_lock_init(&test->lock);
>>> @@ -77,6 +103,7 @@ int kunit_init_test(struct kunit *test, const char *name)
>>> test->name = name;
>>> test->vprintk = kunit_vprintk;
>>> test->fail = kunit_fail;
>>> + test->abort = kunit_abort;
>>
>> There are a number of these function pointers which seem to be pointless
>> to me as you only ever set them to one function. Just call the function
>> directly. As it is, it is an unnecessary indirection for someone reading
>> the code. If and when you have multiple implementations of the function
>> then add the pointer. Don't assume you're going to need it later on and
>> add all this maintenance burden if you never use it..
>
> Ah, yes, Frank (and probably others) previously asked me to remove
> unnecessary method pointers; I removed all the totally unused ones. As
> for these, I don't use them in this patchset, but I use them in my
> patchsets that will follow up this one. These in particular are
> present so that they can be mocked out for testing.

Adding indirection and function pointers solely for the purpose of
mocking out while testing doesn't sit well with me and I don't think it
should be a pattern that's encouraged. Adding extra complexity like this
to a design to make it unit-testable doesn't seem like something that
makes sense in kernel code. Especially given that indirect calls are
more expensive in the age of spectre.

Also, mocking these particular functions seems like it's an artifact of
how you've designed the try/catch abstraction. If the abstraction was
more around an abort-able test runner, then it doesn't make sense to need
to mock out the abort/fail functions, as you would be testing overly
generic features that don't seem necessary to the
implementation.

>>
>> [snip]
>>
>>> +void kunit_generic_try_catch_init(struct kunit_try_catch *try_catch)
>>> +{
>>> + try_catch->run = kunit_generic_run_try_catch;
>>> + try_catch->throw = kunit_generic_throw;
>>> +}
>>
>> Same here. There's only one implementation of try_catch and I can't
>> really see any sensible justification for another implementation. Even
>> if there is, add the indirection when the second implementation is
>> added. This isn't C++ and we don't need to make everything a "method".
>
> These methods are for a UML specific implementation in a follow up
> patchset, which is needed for some features like crash recovery, death
> tests, and removes dependence on kthreads.
>
> I know this probably sounds like premature complexity. Arguably it is
> in hindsight, but I wrote those features before I pulled out these
> interfaces (they were actually both originally in this patchset, but I
> dropped them to make this patchset easier to review). I can remove
> these methods and add them back in when I actually use them in the
> follow up patchsets if you prefer.

Yes, remove them now and add them back when you use them in follow-up
patches. If reviewers find problems with the follow-up patches or have a
better suggestion on how to do what ever it is you are doing, then you
just have this unnecessary code and there's wasted developer time and
review bandwidth that will need to be spent cleaning it up.

Thanks,

Logan

2019-05-03 14:37:26

by Shuah Khan

[permalink] [raw]
Subject: Re: [PATCH v2 11/17] kunit: test: add test managed resource tests

On 5/1/19 5:01 PM, Brendan Higgins wrote:
> From: Avinash Kondareddy <[email protected]>
>
> Tests how tests interact with test managed resources in their lifetime.
>
> Signed-off-by: Avinash Kondareddy <[email protected]>
> Signed-off-by: Brendan Higgins <[email protected]>
> ---

I think this change log could use more details. It is vague on what it
does.

thanks,
-- Shuah

2019-05-03 15:47:25

by Shuah Khan

[permalink] [raw]
Subject: Re: [PATCH v2 15/17] MAINTAINERS: add entry for KUnit the unit testing framework

On 5/1/19 5:01 PM, Brendan Higgins wrote:
> Add myself as maintainer of KUnit, the Linux kernel's unit testing
> framework.
>
> Signed-off-by: Brendan Higgins <[email protected]>
> ---
> MAINTAINERS | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 5c38f21aee787..c78ae95c56b80 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -8448,6 +8448,16 @@ S: Maintained
> F: tools/testing/selftests/
> F: Documentation/dev-tools/kselftest*
>
> +KERNEL UNIT TESTING FRAMEWORK (KUnit)
> +M: Brendan Higgins <[email protected]>
> +L: [email protected]
> +W: https://google.github.io/kunit-docs/third_party/kernel/docs/
> +S: Maintained
> +F: Documentation/kunit/
> +F: include/kunit/
> +F: kunit/
> +F: tools/testing/kunit/
> +

Please add kselftest mailing list to this entry, based on our
conversation on taking these patches through kselftest tree.

thanks,
-- Shuah

2019-05-03 20:36:06

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH v2 12/17] kunit: tool: add Python wrappers for running KUnit tests

On 5/2/19 10:36 PM, Brendan Higgins wrote:
> On Thu, May 2, 2019 at 6:45 PM Frank Rowand <[email protected]> wrote:
>>
>> On 5/2/19 4:45 PM, Brendan Higgins wrote:
>>> On Thu, May 2, 2019 at 2:16 PM Frank Rowand <[email protected]> wrote:
>>>>
>>>> On 5/2/19 11:07 AM, Brendan Higgins wrote:
>>>>> On Thu, May 2, 2019 at 4:02 AM Greg KH <[email protected]> wrote:
>>>>>>
>>>>>> On Wed, May 01, 2019 at 04:01:21PM -0700, Brendan Higgins wrote:
>>>>>>> From: Felix Guo <[email protected]>
>>>>>>>
>>>>>>> The ultimate goal is to create minimal isolated test binaries; in the
>>>>>>> meantime we are using UML to provide the infrastructure to run tests, so
>>>>>>> define an abstract way to configure and run tests that allow us to
>>>>>>> change the context in which tests are built without affecting the user.
>>>>>>> This also makes pretty and dynamic error reporting, and a lot of other
>>>>>>> nice features easier.
>>>>>>>
>>>>>>> kunit_config.py:
>>>>>>> - parse .config and Kconfig files.
>>>>>>>
>>>>>>> kunit_kernel.py: provides helper functions to:
>>>>>>> - configure the kernel using kunitconfig.
>>>>>>> - build the kernel with the appropriate configuration.
>>>>>>> - provide function to invoke the kernel and stream the output back.
>>>>>>>
>>>>>>> Signed-off-by: Felix Guo <[email protected]>
>>>>>>> Signed-off-by: Brendan Higgins <[email protected]>
>>>>>>
>>>>>> Ah, here's probably my answer to my previous logging format question,
>>>>>> right? What's the chance that these wrappers output stuff in a standard
>>>>>> format that test-framework-tools can already parse? :)
>>>
>>> To be clear, the test-framework-tools format we are talking about is
>>> TAP13[1], correct?
>>
>> I'm not sure what the test community prefers for a format. I'll let them
>> jump in and debate that question.
>>
>>
>>>
>>> My understanding is that is what kselftest is being converted to use.
>>>
>>>>>
>>>>> It should be pretty easy to do. I had some patches that pack up the
>>>>> results into a serialized format for a presubmit service; it should be
>>>>> pretty straightforward to take the same logic and just change the
>>>>> output format.
>>>>
>>>> When examining and trying out the previous versions of the patch I found
>>>> the wrappers useful to provide information about how to control and use
>>>> the tests, but I had no interest in using the scripts as they do not
>>>> fit in with my personal environment and workflow.
>>>>
>>>> In the previous versions of the patch, these helper scripts are optional,
>>>> which is good for my use case. If the helper scripts are required to
>>>
>>> They are still optional.
>>>
>>>> get the data into the proper format then the scripts are not quite so
>>>> optional, they become the expected environment. I think the proper
>>>> format should exist without the helper scripts.
>>>
>>> That's a good point. A couple things,
>>>
>>> First off, supporting TAP13, either in the kernel or the wrapper
>>> script is not hard, but I don't think that is the real issue that you
>>> raise.
>>>
>>> If your only concern is that you will always be able to have human
>>> readable KUnit results printed to the kernel log, that is a guarantee
>>> I feel comfortable making. Beyond that, I think it is going to take a
>>> long while before I would feel comfortable guaranteeing anything about
>>> how KUnit will work, what kind of data it will want to expose, and how
>>> it will be organized. I think the wrapper script provides a nice
>>> facade that I can maintain, can mediate between the implementation
>>> details and the user, and can mediate between the implementation
>>> details and other pieces of software that might want to consume
>>> results.
>>>
>>> [1] https://testanything.org/tap-version-13-specification.html
>>
>> My concern is based on a focus on my little part of the world
>> (which in _previous_ versions of the patch series was the devicetree
>> unittest.c tests being converted to use the kunit infrastructure).
>> If I step back and think of the entire kernel globally I may end
>> up with a different conclusion - but I'm going to remain myopic
>> for this email.
>>
>> I want the test results to be usable by me and my fellow
>> developers. I prefer that the test results be easily accessible
>> (current printk() implementation means that kunit messages are
>> just as accessible as the current unittest.c printk() output).
>> If the printk() output needs to be filtered through a script
>> to generate the actual test results then that is sub-optimal
>> to me. It is one more step added to my workflow. And
>> potentially with an embedded target a major pain to get a
>> data file (the kernel log file) transferred from a target
>> to my development host.
>
> That's fair. If that is indeed your only concern, then I don't think
> the wrapper script will ever be an issue for you. You will always be
> able to execute a given test the old fashioned/manual way, and the
> wrapper script only summarizes results, it does not change the
> contents.
>
>>
>> I want a reported test failure to be easy to trace back to the
>> point in the source where the failure is reported. With printk()
>> the search is a simple grep for the failure message. If the
>> failure message has been processed by a script, and then the
>> failure reported to me in an email, then I may have to look
>> at the script to reverse engineer how the original failure
>> message was transformed into the message that was reported
>> to me in the email. Then I search for the point in the
>> source where the failure is reported. So a basic task has
>> just become more difficult and time consuming.
>
> That seems to be a valid concern. I would reiterate that you shouldn't
> be concerned by any processing done by the wrapper script itself, but
> the reality is that depending on what happens with automated
> testing/presubmit/CI other people might end up parsing and
> transforming test results - it might happen, it might not.

You seem to be missing my point.

Greg asked that the output be in a standard format.

You replied that the standard format could be created by the wrapper script.

Now you say that "it might happen, it might not". In other words the output
may or may not end up in the standard format.

As Greg points out in comments to patch 12:

"The core of kunit should also log the messages in this format as well,
and not rely on the helper scripts as Frank points out, not everyone
will use/want them. Might as well make it easy for everyone to always
do the right thing and not force it to always be added in later."

I am requesting that the original message be in the standard format. Of
course anyone is free to transform the messages in later processing, no
big deal.


> I currently
> have a CI system set up for KUnit on my public repo that I don't think
> you would be offended by, but I don't know what we are going to do
> when it comes time to integrate with existing upstream CI systems.
>
> In any case, I don't think that either sticking with or doing away with
> the wrapper script is going to have any long term bearing on what
> happens in this regard.
>
> Cheers
>

2019-05-03 23:19:30

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 12/17] kunit: tool: add Python wrappers for running KUnit tests

> On 5/2/19 10:36 PM, Brendan Higgins wrote:
> > On Thu, May 2, 2019 at 6:45 PM Frank Rowand <[email protected]> wrote:
> >>
> >> On 5/2/19 4:45 PM, Brendan Higgins wrote:
> >>> On Thu, May 2, 2019 at 2:16 PM Frank Rowand <[email protected]> wrote:
> >>>>
> >>>> On 5/2/19 11:07 AM, Brendan Higgins wrote:
> >>>>> On Thu, May 2, 2019 at 4:02 AM Greg KH <[email protected]> wrote:
> >>>>>>
> >>>>>> On Wed, May 01, 2019 at 04:01:21PM -0700, Brendan Higgins wrote:
> >>>>>>> From: Felix Guo <[email protected]>
> >>>>>>>
> >>>>>>> The ultimate goal is to create minimal isolated test binaries; in the
> >>>>>>> meantime we are using UML to provide the infrastructure to run tests, so
> >>>>>>> define an abstract way to configure and run tests that allow us to
> >>>>>>> change the context in which tests are built without affecting the user.
> >>>>>>> This also makes pretty and dynamic error reporting, and a lot of other
> >>>>>>> nice features easier.
> >>>>>>>
> >>>>>>> kunit_config.py:
> >>>>>>> - parse .config and Kconfig files.
> >>>>>>>
> >>>>>>> kunit_kernel.py: provides helper functions to:
> >>>>>>> - configure the kernel using kunitconfig.
> >>>>>>> - build the kernel with the appropriate configuration.
> >>>>>>> - provide function to invoke the kernel and stream the output back.
> >>>>>>>
> >>>>>>> Signed-off-by: Felix Guo <[email protected]>
> >>>>>>> Signed-off-by: Brendan Higgins <[email protected]>
> >>>>>>
> >>>>>> Ah, here's probably my answer to my previous logging format question,
> >>>>>> right? What's the chance that these wrappers output stuff in a standard
> >>>>>> format that test-framework-tools can already parse? :)
> >>>
> >>> To be clear, the test-framework-tools format we are talking about is
> >>> TAP13[1], correct?
> >>
> >> I'm not sure what the test community prefers for a format. I'll let them
> >> jump in and debate that question.
> >>
> >>
> >>>
> >>> My understanding is that is what kselftest is being converted to use.
> >>>
> >>>>>
> >>>>> It should be pretty easy to do. I had some patches that pack up the
> >>>>> results into a serialized format for a presubmit service; it should be
> >>>>> pretty straightforward to take the same logic and just change the
> >>>>> output format.
> >>>>
> >>>> When examining and trying out the previous versions of the patch I found
> >>>> the wrappers useful to provide information about how to control and use
> >>>> the tests, but I had no interest in using the scripts as they do not
> >>>> fit in with my personal environment and workflow.
> >>>>
> >>>> In the previous versions of the patch, these helper scripts are optional,
> >>>> which is good for my use case. If the helper scripts are required to
> >>>
> >>> They are still optional.
> >>>
> >>>> get the data into the proper format then the scripts are not quite so
> >>>> optional, they become the expected environment. I think the proper
> >>>> format should exist without the helper scripts.
> >>>
> >>> That's a good point. A couple things,
> >>>
> >>> First off, supporting TAP13, either in the kernel or the wrapper
> >>> script is not hard, but I don't think that is the real issue that you
> >>> raise.
> >>>
> >>> If your only concern is that you will always be able to have human
> >>> readable KUnit results printed to the kernel log, that is a guarantee
> >>> I feel comfortable making. Beyond that, I think it is going to take a
> >>> long while before I would feel comfortable guaranteeing anything about
> >>> how KUnit will work, what kind of data it will want to expose, and how
> >>> it will be organized. I think the wrapper script provides a nice
> >>> facade that I can maintain, can mediate between the implementation
> >>> details and the user, and can mediate between the implementation
> >>> details and other pieces of software that might want to consume
> >>> results.
> >>>
> >>> [1] https://testanything.org/tap-version-13-specification.html
> >>
> >> My concern is based on a focus on my little part of the world
> >> (which in _previous_ versions of the patch series was the devicetree
> >> unittest.c tests being converted to use the kunit infrastructure).
> >> If I step back and think of the entire kernel globally I may end
> >> up with a different conclusion - but I'm going to remain myopic
> >> for this email.
> >>
> >> I want the test results to be usable by me and my fellow
> >> developers. I prefer that the test results be easily accessible
> >> (current printk() implementation means that kunit messages are
> >> just as accessible as the current unittest.c printk() output).
> >> If the printk() output needs to be filtered through a script
> >> to generate the actual test results then that is sub-optimal
> >> to me. It is one more step added to my workflow. And
> >> potentially with an embedded target a major pain to get a
> >> data file (the kernel log file) transferred from a target
> >> to my development host.
> >
> > That's fair. If that is indeed your only concern, then I don't think
> > the wrapper script will ever be an issue for you. You will always be
> > able to execute a given test the old fashioned/manual way, and the
> > wrapper script only summarizes results, it does not change the
> > contents.
> >
> >>
> >> I want a reported test failure to be easy to trace back to the
> >> point in the source where the failure is reported. With printk()
> >> the search is a simple grep for the failure message. If the
> >> failure message has been processed by a script, and then the
> >> failure reported to me in an email, then I may have to look
> >> at the script to reverse engineer how the original failure
> >> message was transformed into the message that was reported
> >> to me in the email. Then I search for the point in the
> >> source where the failure is reported. So a basic task has
> >> just become more difficult and time consuming.
> >
> > That seems to be a valid concern. I would reiterate that you shouldn't
> > be concerned by any processing done by the wrapper script itself, but
> > the reality is that depending on what happens with automated
> > testing/presubmit/CI other people might end up parsing and
> > transforming test results - it might happen, it might not.
>
> You seem to be missing my point.
>
> Greg asked that the output be in a standard format.
>
> You replied that the standard format could be created by the wrapper script.

I thought Greg originally meant that that is how it could be done when
he first commented on this patch, so I was agreeing and elaborating.
Nevertheless, it seems you and Greg are now in agreement on this
point, so I won't argue it further.

>
> Now you say that "it might happen, it might not". In other words the output
> may or may not end up in the standard format.

Sorry, that was in reference to your concern about getting an email in
a different format than what the tool that you use generates. It
wasn't a statement about what I was or wasn't going to do in regards
to supporting a standard format.

>
> As Greg points out in comments to patch 12:
>
> "The core of kunit should also log the messages in this format as well,
> and not rely on the helper scripts as Frank points out, not everyone
> will use/want them. Might as well make it easy for everyone to always
> do the right thing and not force it to always be added in later."
>
> I am requesting that the original message be in the standard format. Of
> course anyone is free to transform the messages in later processing, no
> big deal.

My mistake, I thought that was a concern of yours.

In any case, it sounds like you and Greg are in agreement on the core
libraries generating the output in TAP13, so I won't argue that point
further.

## Analysis of using TAP13

One of my earlier concerns was that TAP13 is a bit over constrained
for what I would like to output from the KUnit core. It only allows
data to be output as either:
- test number
- ok/not ok with single line description
- directive
- diagnostics
- YAML block

The test number must come before a set of ok/not ok lines, and does
not contain any additional information. One annoying thing about this
is it doesn't provide any kind of nesting or grouping.

There is one ok/not ok line per test and it may have a short
description of the test immediately after 'ok' or 'not ok'; this is
problematic because it wants the first thing you say about a test to
be after you know whether it passes or not.

Directives are just a way to specify skipped tests and TODOs.

Diagnostics seem useful, it looks like you can put whatever
information in them and print them out at anytime. It looks like a lot
of kselftests emit a lot of data this way.

The YAML block seems to be the way that they prefer users to emit data
beyond number of tests run and whether a test passed or failed. I
could express most things I want to express in terms of YAML, but it
is not the nicest format for displaying a lot of data like
expectations, missed function calls, and other things which have a
natural concise representation. Nevertheless, YAML readability is
mostly a problem for those who won't be using the wrapper scripts. My biggest
problem with the YAML block is that you can only have one, and TAP
specifies that it must come after the corresponding ok/not ok line,
which again has the issue that you have to hold on to a lot of
diagnostic data longer than you ideally would. Another downside is
that I now have to write a YAML serializer for the kernel.

## Here is what I propose for this patchset:

- Print out test number range at the beginning of each test suite.
- Print out log lines as soon as they happen as diagnostics.
- Print out the lines that state whether a test passes or fails as an
ok/not ok line.

This would be technically conforming with TAP13 and is consistent with
what some kselftests have done.
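
Concretely, the kernel log for a small suite would then look something like
this (made-up test names and messages, purely for illustration):

1..2
# example_simple_test: initializing
ok 1 - example_simple_test
# example_failing_test: EXPECTATION FAILED at kunit/example-test.c:42
# Expected result == 0, but result == -12
not ok 2 - example_failing_test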

## To be done in a future patchset:

Add a YAML serializer and print out some logs containing structured
data (like expectation failures, unexpected function calls, etc) in
YAML blocks.

Does this sound reasonable? I will go ahead and start working on this,
but feel free to give me feedback on the overall idea in the meantime.

Cheers

2019-05-04 00:11:10

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 16/17] kernel/sysctl-test: Add null pointer test for sysctl.c:proc_dointvec()

> On Thu, May 02, 2019 at 11:45:43AM -0700, Brendan Higgins wrote:
> > On Thu, May 2, 2019 at 11:15 AM <[email protected]> wrote:
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Greg KH
> > > >
> > > > On Wed, May 01, 2019 at 04:01:25PM -0700, Brendan Higgins wrote:
> > > > > From: Iurii Zaikin <[email protected]>
> > > > >
> > > > > KUnit tests for initialized data behavior of proc_dointvec that is
> > > > > explicitly checked in the code. Includes basic parsing tests including
> > > > > int min/max overflow.
> > > > >
> > > > > Signed-off-by: Iurii Zaikin <[email protected]>
> > > > > Signed-off-by: Brendan Higgins <[email protected]>
> > > > > ---
> > > > > kernel/Makefile | 2 +
> > > > > kernel/sysctl-test.c | 292 +++++++++++++++++++++++++++++++++++++++++++
> > > > > lib/Kconfig.debug | 6 +
> > > > > 3 files changed, 300 insertions(+)
> > > > > create mode 100644 kernel/sysctl-test.c
> > > > >
> > > > > diff --git a/kernel/Makefile b/kernel/Makefile
> > > > > index 6c57e78817dad..c81a8976b6a4b 100644
> > > > > --- a/kernel/Makefile
> > > > > +++ b/kernel/Makefile
> > > > > @@ -112,6 +112,8 @@ obj-$(CONFIG_HAS_IOMEM) += iomem.o
> > > > > obj-$(CONFIG_ZONE_DEVICE) += memremap.o
> > > > > obj-$(CONFIG_RSEQ) += rseq.o
> > > > >
> > > > > +obj-$(CONFIG_SYSCTL_KUNIT_TEST) += sysctl-test.o
> > > >
> > > > You are going to have to have a "standard" naming scheme for test
> > > > modules, are you going to recommend "foo-test" over "test-foo"? If so,
> > > > that's fine, we should just be consistant and document it somewhere.
> > > >
> > > > Personally, I'd prefer "test-foo", but that's just me, naming is hard...
> > >
> > > My preference would be "test-foo" as well. Just my 2 cents.
> >
> > I definitely agree we should be consistent. My personal bias
> > (unsurprisingly) is "foo-test," but this is just because that is the
> > convention I am used to in other projects I have worked on.
> >
> > On an unbiased note, we are currently almost evenly split between the
> > two conventions with *slight* preference for "foo-test": I ran the two
> > following grep commands on v5.1-rc7:
> >
> > grep -Hrn --exclude-dir="build" -e "config [a-zA-Z_0-9]\+_TEST$" | wc -l
> > grep -Hrn --exclude-dir="build" -e "config TEST_[a-zA-Z_0-9]\+" | wc -l
> >
> > "foo-test" has 36 occurrences.
> > "test-foo" has 33 occurrences.
> >
> > The thing I am more concerned about is how this would affect file
> > naming. If we have a unit test for foo.c, I think foo_test.c is more
> > consistent with our namespacing conventions. The other thing is: if we
> > already have a Kconfig symbol called FOO_TEST (or TEST_FOO) what
> > should we name the KUnit test in this case? FOO_UNIT_TEST?
> > FOO_KUNIT_TEST, like I did above?
>
> Ok, I can live with "foo-test", as you are right, in a directory listing
> and config option, it makes more sense to add it as a suffix.

Cool, so just for future reference, if we already have a Kconfig
symbol called FOO_TEST (or TEST_FOO) what should we name the KUnit
test in this case? FOO_UNIT_TEST? FOO_KUNIT_TEST, like I did above?

2019-05-04 12:14:29

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH v2 12/17] kunit: tool: add Python wrappers for running KUnit tests

On Fri, May 03, 2019 at 04:14:49PM -0700, Brendan Higgins wrote:
> In any case, it sounds like you and Greg are in agreement on the core
> libraries generating the output in TAP13, so I won't argue that point
> further.

Great!

> ## Analysis of using TAP13
>
> One of my earlier concerns was that TAP13 is a bit over constrained
> for what I would like to output from the KUnit core. It only allows
> data to be output as either:
> - test number
> - ok/not ok with single line description
> - directive
> - diagnostics
> - YAML block
>
> The test number must come before a set of ok/not ok lines, and does
> not contain any additional information. One annoying thing about this
> is it doesn't provide any kind of nesting or grouping.

It should handle nesting just fine, I think we do that already today.

> There is one ok/not ok line per test and it may have a short
> description of the test immediately after 'ok' or 'not ok'; this is
> problematic because it wants the first thing you say about a test to
> be after you know whether it passes or not.

Take a look at the output of our current tests, I think you might find
it to be a bit more flexible than you think.

Also, this isn't our standard, we picked it because we needed a standard
that the tools of today already understand. It might have issues and
other problems, but we are not in the business of writing test output
parsing tools, and we don't want to force everyone out there to write
custom parsers. We want them to be able to use the tools they already
have so they can test the kernel, and to do so as easily as possible.

thanks,

greg k-h

2019-05-04 12:53:56

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH v2 16/17] kernel/sysctl-test: Add null pointer test for sysctl.c:proc_dointvec()

On Fri, May 03, 2019 at 04:41:10PM -0700, Brendan Higgins wrote:
> > On Thu, May 02, 2019 at 11:45:43AM -0700, Brendan Higgins wrote:
> > > On Thu, May 2, 2019 at 11:15 AM <[email protected]> wrote:
> > > >
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Greg KH
> > > > >
> > > > > On Wed, May 01, 2019 at 04:01:25PM -0700, Brendan Higgins wrote:
> > > > > > From: Iurii Zaikin <[email protected]>
> > > > > >
> > > > > > KUnit tests for initialized data behavior of proc_dointvec that is
> > > > > > explicitly checked in the code. Includes basic parsing tests including
> > > > > > int min/max overflow.
> > > > > >
> > > > > > Signed-off-by: Iurii Zaikin <[email protected]>
> > > > > > Signed-off-by: Brendan Higgins <[email protected]>
> > > > > > ---
> > > > > > kernel/Makefile | 2 +
> > > > > > kernel/sysctl-test.c | 292 +++++++++++++++++++++++++++++++++++++++++++
> > > > > > lib/Kconfig.debug | 6 +
> > > > > > 3 files changed, 300 insertions(+)
> > > > > > create mode 100644 kernel/sysctl-test.c
> > > > > >
> > > > > > diff --git a/kernel/Makefile b/kernel/Makefile
> > > > > > index 6c57e78817dad..c81a8976b6a4b 100644
> > > > > > --- a/kernel/Makefile
> > > > > > +++ b/kernel/Makefile
> > > > > > @@ -112,6 +112,8 @@ obj-$(CONFIG_HAS_IOMEM) += iomem.o
> > > > > > obj-$(CONFIG_ZONE_DEVICE) += memremap.o
> > > > > > obj-$(CONFIG_RSEQ) += rseq.o
> > > > > >
> > > > > > +obj-$(CONFIG_SYSCTL_KUNIT_TEST) += sysctl-test.o
> > > > >
> > > > > You are going to have to have a "standard" naming scheme for test
> > > > > modules, are you going to recommend "foo-test" over "test-foo"? If so,
> > > > > that's fine, we should just be consistant and document it somewhere.
> > > > >
> > > > > Personally, I'd prefer "test-foo", but that's just me, naming is hard...
> > > >
> > > > My preference would be "test-foo" as well. Just my 2 cents.
> > >
> > > I definitely agree we should be consistent. My personal bias
> > > (unsurprisingly) is "foo-test," but this is just because that is the
> > > convention I am used to in other projects I have worked on.
> > >
> > > On an unbiased note, we are currently almost evenly split between the
> > > two conventions with *slight* preference for "foo-test": I ran the two
> > > following grep commands on v5.1-rc7:
> > >
> > > grep -Hrn --exclude-dir="build" -e "config [a-zA-Z_0-9]\+_TEST$" | wc -l
> > > grep -Hrn --exclude-dir="build" -e "config TEST_[a-zA-Z_0-9]\+" | wc -l
> > >
> > > "foo-test" has 36 occurrences.
> > > "test-foo" has 33 occurrences.
> > >
> > > The thing I am more concerned about is how this would affect file
> > > naming. If we have a unit test for foo.c, I think foo_test.c is more
> > > consistent with our namespacing conventions. The other thing is: if we
> > > already have a Kconfig symbol called FOO_TEST (or TEST_FOO) what
> > > should we name the KUnit test in this case? FOO_UNIT_TEST?
> > > FOO_KUNIT_TEST, like I did above?
> >
> > Ok, I can live with "foo-test", as you are right, in a directory listing
> > and config option, it makes more sense to add it as a suffix.
>
> Cool, so just for future reference, if we already have a Kconfig
> symbol called FOO_TEST (or TEST_FOO) what should we name the KUnit
> test in this case? FOO_UNIT_TEST? FOO_KUNIT_TEST, like I did above?

FOO_KUNIT_TEST is fine, I doubt that's going to come up very often.

thanks,

greg k-h

2019-05-06 00:24:24

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH v2 12/17] kunit: tool: add Python wrappers for running KUnit tests

On 5/3/19 4:14 PM, Brendan Higgins wrote:
>> On 5/2/19 10:36 PM, Brendan Higgins wrote:
>>> On Thu, May 2, 2019 at 6:45 PM Frank Rowand <[email protected]> wrote:
>>>>
>>>> On 5/2/19 4:45 PM, Brendan Higgins wrote:
>>>>> On Thu, May 2, 2019 at 2:16 PM Frank Rowand <[email protected]> wrote:
>>>>>>
>>>>>> On 5/2/19 11:07 AM, Brendan Higgins wrote:
>>>>>>> On Thu, May 2, 2019 at 4:02 AM Greg KH <[email protected]> wrote:
>>>>>>>>
>>>>>>>> On Wed, May 01, 2019 at 04:01:21PM -0700, Brendan Higgins wrote:
>>>>>>>>> From: Felix Guo <[email protected]>
>>>>>>>>>
>>>>>>>>> The ultimate goal is to create minimal isolated test binaries; in the
>>>>>>>>> meantime we are using UML to provide the infrastructure to run tests, so
>>>>>>>>> define an abstract way to configure and run tests that allow us to
>>>>>>>>> change the context in which tests are built without affecting the user.
>>>>>>>>> This also makes pretty and dynamic error reporting, and a lot of other
>>>>>>>>> nice features easier.
>>>>>>>>>
>>>>>>>>> kunit_config.py:
>>>>>>>>> - parse .config and Kconfig files.
>>>>>>>>>
>>>>>>>>> kunit_kernel.py: provides helper functions to:
>>>>>>>>> - configure the kernel using kunitconfig.
>>>>>>>>> - build the kernel with the appropriate configuration.
>>>>>>>>> - provide function to invoke the kernel and stream the output back.
>>>>>>>>>
>>>>>>>>> Signed-off-by: Felix Guo <[email protected]>
>>>>>>>>> Signed-off-by: Brendan Higgins <[email protected]>
>>>>>>>>
>>>>>>>> Ah, here's probably my answer to my previous logging format question,
>>>>>>>> right? What's the chance that these wrappers output stuff in a standard
>>>>>>>> format that test-framework-tools can already parse? :)
>>>>>
>>>>> To be clear, the test-framework-tools format we are talking about is
>>>>> TAP13[1], correct?
>>>>
>>>> I'm not sure what the test community prefers for a format. I'll let them
>>>> jump in and debate that question.
>>>>
>>>>
>>>>>
>>>>> My understanding is that is what kselftest is being converted to use.
>>>>>
>>>>>>>
>>>>>>> It should be pretty easy to do. I had some patches that pack up the
>>>>>>> results into a serialized format for a presubmit service; it should be
>>>>>>> pretty straightforward to take the same logic and just change the
>>>>>>> output format.
>>>>>>
>>>>>> When examining and trying out the previous versions of the patch I found
>>>>>> the wrappers useful to provide information about how to control and use
>>>>>> the tests, but I had no interest in using the scripts as they do not
>>>>>> fit in with my personal environment and workflow.
>>>>>>
>>>>>> In the previous versions of the patch, these helper scripts are optional,
>>>>>> which is good for my use case. If the helper scripts are required to
>>>>>
>>>>> They are still optional.
>>>>>
>>>>>> get the data into the proper format then the scripts are not quite so
>>>>>> optional, they become the expected environment. I think the proper
>>>>>> format should exist without the helper scripts.
>>>>>
>>>>> That's a good point. A couple things,
>>>>>
>>>>> First off, supporting TAP13, either in the kernel or the wrapper
>>>>> script is not hard, but I don't think that is the real issue that you
>>>>> raise.
>>>>>
>>>>> If your only concern is that you will always be able to have human
>>>>> readable KUnit results printed to the kernel log, that is a guarantee
>>>>> I feel comfortable making. Beyond that, I think it is going to take a
>>>>> long while before I would feel comfortable guaranteeing anything about
>>>>> how KUnit will work, what kind of data it will want to expose, and how
>>>>> it will be organized. I think the wrapper script provides a nice
>>>>> facade that I can maintain, can mediate between the implementation
>>>>> details and the user, and can mediate between the implementation
>>>>> details and other pieces of software that might want to consume
>>>>> results.
>>>>>
>>>>> [1] https://testanything.org/tap-version-13-specification.html
>>>>
>>>> My concern is based on a focus on my little part of the world
>>>> (which in _previous_ versions of the patch series was the devicetree
>>>> unittest.c tests being converted to use the kunit infrastructure).
>>>> If I step back and think of the entire kernel globally I may end
>>>> up with a different conclusion - but I'm going to remain myopic
>>>> for this email.
>>>>
>>>> I want the test results to be usable by me and my fellow
>>>> developers. I prefer that the test results be easily accessible
>>>> (current printk() implementation means that kunit messages are
>>>> just as accessible as the current unittest.c printk() output).
>>>> If the printk() output needs to be filtered through a script
>>>> to generate the actual test results then that is sub-optimal
>>>> to me. It is one more step added to my workflow. And
>>>> potentially with an embedded target a major pain to get a
>>>> data file (the kernel log file) transferred from a target
>>>> to my development host.
>>>
>>> That's fair. If that is indeed your only concern, then I don't think
>>> the wrapper script will ever be an issue for you. You will always be
>>> able to execute a given test the old fashioned/manual way, and the
>>> wrapper script only summarizes results, it does not change the
>>> contents.
>>>
>>>>
>>>> I want a reported test failure to be easy to trace back to the
>>>> point in the source where the failure is reported. With printk()
>>>> the search is a simple grep for the failure message. If the
>>>> failure message has been processed by a script, and then the
>>>> failure reported to me in an email, then I may have to look
>>>> at the script to reverse engineer how the original failure
>>>> message was transformed into the message that was reported
>>>> to me in the email. Then I search for the point in the
>>>> source where the failure is reported. So a basic task has
>>>> just become more difficult and time consuming.
>>>
>>> That seems to be a valid concern. I would reiterate that you shouldn't
>>> be concerned by any processing done by the wrapper script itself, but
>>> the reality is that depending on what happens with automated
>>> testing/presubmit/CI other people might end up parsing and
>>> transforming test results - it might happen, it might not.
>>
>> You seem to be missing my point.
>>
>> Greg asked that the output be in a standard format.
>>
>> You replied that the standard format could be created by the wrapper script.
>
> I thought Greg originally meant that that is how it could be done when
> he first commented on this patch, so I was agreeing and elaborating.
> Nevertheless, it seems you and Greg are now in agreement on this
> point, so I won't argue it further.
>
>>
>> Now you say that "it might happen, it might not". In other words the output
>> may or may not end up in the standard format.
>
> Sorry, that was in reference to your concern about getting an email in
> a different format than what the tool that you use generates. It
> wasn't a statement about what I was or wasn't going to do in regards
> to supporting a standard format.
>
>>
>> As Greg points out in comments to patch 12:
>>
>> "The core of kunit should also log the messages in this format as well,
>> and not rely on the helper scripts as Frank points out, not everyone
>> will use/want them. Might as well make it easy for everyone to always
>> do the right thing and not force it to always be added in later."
>>
>> I am requesting that the original message be in the standard format. Of
>> course anyone is free to transform the messages in later processing, no
>> big deal.
>
> My mistake, I thought that was a concern of yours.
>
> In any case, it sounds like you and Greg are in agreement on the core
> libraries generating the output in TAP13, so I won't argue that point
> further.
>
> ## Analysis of using TAP13

I have never looked at TAP version 13 in any depth at all, so do not consider
me to be any sort of expert.

My entire TAP knowledge is based on:

https://testanything.org/tap-version-13-specification.html

and the pull request to create the TAP version 14 specification:

https://github.com/TestAnything/testanything.github.io/pull/36/files

You can see the full version 14 document in the submitter's repo:

$ git clone https://github.com/isaacs/testanything.github.io.git
$ cd testanything.github.io
$ git checkout tap14
$ ls tap-version-14-specification.md

My understanding is that the version 14 specification is not trying to
add new features, but instead capture what is already implemented in
the wild.


> One of my earlier concerns was that TAP13 is a bit over constrained
> for what I would like to output from the KUnit core. It only allows
> data to be output as either:
> - test number
> - ok/not ok with single line description
> - directive
> - diagnostics
> - YAML block
>
> The test number must come before a set of ok/not ok lines, and does
> not contain any additional information. One annoying thing about this
> is it doesn't provide any kind of nesting or grouping.

Greg's response mentions ktest (?) already does nesting.

Version 14 allows nesting through subtests. I have not looked at what
ktest does, so I do not know if it uses subtest, or something else.
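
For reference, the version 14 draft expresses nesting as indented subtests
with their own plan, roughly like this (my reading of the draft, not ktest
output):

TAP version 14
1..1
    # Subtest: example_suite
    1..2
    ok 1 - case_a
    ok 2 - case_b
ok 1 - example_suite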


> There is one ok/not ok line per test and it may have a short
> description of the test immediately after 'ok' or 'not ok'; this is
> problematic because it wants the first thing you say about a test to
> be after you know whether it passes or not.

I think you could output a diagnostic line that says a test is starting.
This is important to me because printk() errors and warnings that are
related to a test can be output by a subsystem other than the subsystem
that I am testing. If there is no marker at the start of the test
then there is no way to attribute the printk()s to the test.
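
Something as simple as a leading diagnostic line would be enough for that,
for example (made-up names):

# example_test: starting
  ... unrelated printk() output from other subsystems lands here ...
ok 1 - example_test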


> Directives are just a way to specify skipped tests and TODOs.
>
> Diagnostics seem useful, it looks like you can put whatever
> information in them and print them out at anytime. It looks like a lot
> of kselftests emit a lot of data this way.
>
> The YAML block seems to be the way that they prefer users to emit data
> beyond number of tests run and whether a test passed or failed. I
> could express most things I want to express in terms of YAML, but it
> is not the nicest format for displaying a lot of data like
> expectations, missed function calls, and other things which have a
> natural concise representation. Nevertheless, YAML readability is
> mostly a problem for those who won't be using the wrapper scripts.

The examples in specification V13 and V14 look very simple and very
readable to me. (And I am not a fan of YAML.)


> My biggest
> problem with the YAML block is that you can only have one, and TAP
> specifies that it must come after the corresponding ok/not ok line,
> which again has the issue that you have to hold on to a lot of
> diagnostic data longer than you ideally would. Another downside is
> that I now have to write a YAML serializer for the kernel.

If a test generates diagnostic data, then I would expect that to be
the direct result of a test failure. So the test can output the
"not ok" line, then immediately output the YAML block. I do not
see a need for stashing YAML output ahead of time.

If diagnostic data is generated before the test can determine
success or failure, then it can be output as diagnostic data
instead of stashing it for later.
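
In other words, something like this would be fine (made-up content, just to
show the placement the spec allows):

not ok 2 - example_overflow_test
  ---
  message: "expected value <= INT_MAX"
  severity: fail
  ...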


> ## Here is what I propose for this patchset:
>
> - Print out test number range at the beginning of each test suite.
> - Print out log lines as soon as they happen as diagnostics.
> - Print out the lines that state whether a test passes or fails as an
> ok/not ok line.
>
> This would be technically conforming with TAP13 and is consistent with
> what some kselftests have done.
>
> ## To be done in a future patchset:
>
> Add a YAML serializer and print out some logs containing structured
> data (like expectation failures, unexpected function calls, etc) in
> YAML blocks.

A YAML serializer sounds like unneeded complexity.

>
> Does this sound reasonable? I will go ahead and start working on this,
> but feel free to give me feedback on the overall idea in the meantime.
>
> Cheers
>

2019-05-06 08:49:35

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 08/17] kunit: test: add support for test abort

On Fri, May 3, 2019 at 5:33 AM Logan Gunthorpe <[email protected]> wrote:
>
>
>
> On 2019-05-03 12:48 a.m., Brendan Higgins wrote:
> > On Thu, May 2, 2019 at 8:15 PM Logan Gunthorpe <[email protected]> wrote:
> >> On 2019-05-01 5:01 p.m., Brendan Higgins wrote:
> >>> +/*
> >>> + * struct kunit_try_catch - provides a generic way to run code which might fail.
> >>> + * @context: used to pass user data to the try and catch functions.
> >>> + *
> >>> + * kunit_try_catch provides a generic, architecture independent way to execute
> >>> + * an arbitrary function of type kunit_try_catch_func_t which may bail out by
> >>> + * calling kunit_try_catch_throw(). If kunit_try_catch_throw() is called, @try
> >>> + * is stopped at the site of invocation and @catch is called.
> >>
> >> I found some of the C++ comparisons in this series a bit distasteful but
> >> wasn't going to say anything until I saw the try catch.... But looking
> >> into the implementation it's just a thread that can exit early which
> >> seems fine to me. Just a poor choice of name I guess...
> >
> > Guilty as charged (I have a long history with C++, sorry). Would you
> > prefer I changed the name? I just figured that try-catch is a commonly
> > understood pattern that describes exactly what I am doing.
>
> It is a commonly understood pattern, but I don't think it's what the
> code is doing. Try-catch cleans up an entire stack and allows each level
> of the stack to apply local cleanup. This implementation simply exits a
> thread and has none of that complexity. To me, it seems like an odd
> abstraction here as it's really just a test runner that can exit early
> (though I haven't seen the follow-up UML implementation).

Yeah, that is closer to what the UML specific version does, but that's
a conversation for another time.

>
> I would prefer to see this cleaned up such that the abstraction matches
> more what's going on but I don't feel that strongly about it so I'll
> leave it up to you to figure out what's best unless other reviewers have
> stronger opinions.

Cool. Let's revisit this with the follow-up patchset.

>
> >>
> >> [snip]
> >>
> >>> +static void __noreturn kunit_abort(struct kunit *test)
> >>> +{
> >>> + kunit_set_death_test(test, true);
> >>> +
> >>> + kunit_try_catch_throw(&test->try_catch);
> >>> +
> >>> + /*
> >>> + * Throw could not abort from test.
> >>> + *
> >>> + * XXX: we should never reach this line! As kunit_try_catch_throw is
> >>> + * marked __noreturn.
> >>> + */
> >>> + WARN_ONCE(true, "Throw could not abort from test!\n");
> >>> +}
> >>> +
> >>> int kunit_init_test(struct kunit *test, const char *name)
> >>> {
> >>> spin_lock_init(&test->lock);
> >>> @@ -77,6 +103,7 @@ int kunit_init_test(struct kunit *test, const char *name)
> >>> test->name = name;
> >>> test->vprintk = kunit_vprintk;
> >>> test->fail = kunit_fail;
> >>> + test->abort = kunit_abort;
> >>
> >> There are a number of these function pointers which seem to be pointless
> >> to me as you only ever set them to one function. Just call the function
> >> directly. As it is, it is an unnecessary indirection for someone reading
> >> the code. If and when you have multiple implementations of the function
> >> then add the pointer. Don't assume you're going to need it later on and
> >> add all this maintenance burden if you never use it.
> >
> > Ah, yes, Frank (and probably others) previously asked me to remove
> > unnecessary method pointers; I removed all the totally unused ones. As
> > for these, I don't use them in this patchset, but I use them in my
> > patchsets that will follow up this one. These in particular are
> > present so that they can be mocked out for testing.
>
> Adding indirection and function pointers solely for the purpose of
> mocking out while testing doesn't sit well with me and I don't think it
> should be a pattern that's encouraged. Adding extra complexity like this
> to a design to make it unit-testable doesn't seem like something that
> makes sense in kernel code. Especially given that indirect calls are
> more expensive in the age of Spectre.

Indirection is a pretty common method to make something mockable or
fakeable. Nevertheless, probably an easier discussion to have once we
have some examples to discuss.
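
To give the discussion something concrete, here is the pattern in its
simplest form (purely illustrative; none of these names come from the
KUnit patches): production code calls through a function pointer, and a
test installs a fake behind the same pointer.

#include <linux/errno.h>

/* Illustrative only: the function pointer is the seam a test can replace. */
struct widget {
	int (*read_reg)(struct widget *w, unsigned int reg);
	int last_value;
};

static int widget_hw_read_reg(struct widget *w, unsigned int reg)
{
	return -EIO;	/* the real implementation would touch hardware */
}

static void widget_init(struct widget *w)
{
	w->read_reg = widget_hw_read_reg;	/* default: real implementation */
}

static int widget_refresh(struct widget *w)
{
	int val = w->read_reg(w, 0x10);	/* code under test calls via pointer */

	if (val < 0)
		return val;
	w->last_value = val;
	return 0;
}

/* A test swaps in a fake without touching widget_refresh(): */
static int fake_read_reg(struct widget *w, unsigned int reg)
{
	return 42;	/* canned value, no hardware needed */
}

/* ... widget_init(&w); w.read_reg = fake_read_reg; widget_refresh(&w); ... */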

>
> Also, mocking these particular functions seems like it's an artifact of
> how you've designed the try/catch abstraction. If the abstraction was
> more around an abort-able test runner then it doesn't make sense to need
> to mock out the abort/fail functions as you will be testing overly
> generic features of something that don't seem necessary to the
> implementation.
>
> >>
> >> [snip]
> >>
> >>> +void kunit_generic_try_catch_init(struct kunit_try_catch *try_catch)
> >>> +{
> >>> + try_catch->run = kunit_generic_run_try_catch;
> >>> + try_catch->throw = kunit_generic_throw;
> >>> +}
> >>
> >> Same here. There's only one implementation of try_catch and I can't
> >> really see any sensible justification for another implementation. Even
> >> if there is, add the indirection when the second implementation is
> >> added. This isn't C++ and we don't need to make everything a "method".
> >
> > These methods are for a UML specific implementation in a follow up
> > patchset, which is needed for some features like crash recovery, death
> > tests, and removes dependence on kthreads.
> >
> > I know this probably sounds like premature complexity. Arguably it is
> > in hindsight, but I wrote those features before I pulled out these
> > interfaces (they were actually both originally in this patchset, but I
> > dropped them to make this patchset easier to review). I can remove
> > these methods and add them back in when I actually use them in the
> > follow up patchsets if you prefer.
>
> Yes, remove them now and add them back when you use them in follow-up
> patches. If reviewers find problems with the follow-up patches or have a
> better suggestion on how to do what ever it is you are doing, then you
> just have this unnecessary code and there's wasted developer time and
> review bandwidth that will need to be spent cleaning it up.

Fair enough. Next patchset will have the remaining unused indirection
you pointed out removed.

Thanks!

2019-05-06 09:05:52

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 11/17] kunit: test: add test managed resource tests

On Fri, May 3, 2019 at 7:34 AM shuah <[email protected]> wrote:
>
> On 5/1/19 5:01 PM, Brendan Higgins wrote:
> > From: Avinash Kondareddy <[email protected]>
> >
> > Tests how tests interact with test managed resources in their lifetime.
> >
> > Signed-off-by: Avinash Kondareddy <[email protected]>
> > Signed-off-by: Brendan Higgins <[email protected]>
> > ---
>
> I think this change log could use more details. It is vague on what it
> does.

Agreed. Will fix in next revision.

2019-05-06 09:19:29

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 15/17] MAINTAINERS: add entry for KUnit the unit testing framework

On Fri, May 3, 2019 at 7:38 AM shuah <[email protected]> wrote:
>
> On 5/1/19 5:01 PM, Brendan Higgins wrote:
> > Add myself as maintainer of KUnit, the Linux kernel's unit testing
> > framework.
> >
> > Signed-off-by: Brendan Higgins <[email protected]>
> > ---
> > MAINTAINERS | 10 ++++++++++
> > 1 file changed, 10 insertions(+)
> >
> > diff --git a/MAINTAINERS b/MAINTAINERS
> > index 5c38f21aee787..c78ae95c56b80 100644
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -8448,6 +8448,16 @@ S: Maintained
> > F: tools/testing/selftests/
> > F: Documentation/dev-tools/kselftest*
> >
> > +KERNEL UNIT TESTING FRAMEWORK (KUnit)
> > +M: Brendan Higgins <[email protected]>
> > +L: [email protected]
> > +W: https://google.github.io/kunit-docs/third_party/kernel/docs/
> > +S: Maintained
> > +F: Documentation/kunit/
> > +F: include/kunit/
> > +F: kunit/
> > +F: tools/testing/kunit/
> > +
>
> Please add kselftest mailing list to this entry, based on our
> conversation on taking these patches through kselftest tree.

Will do.

Thanks!

2019-05-06 17:46:38

by Kees Cook

[permalink] [raw]
Subject: Re: [PATCH v2 12/17] kunit: tool: add Python wrappers for running KUnit tests

On Sun, May 5, 2019 at 5:19 PM Frank Rowand <[email protected]> wrote:
> You can see the full version 14 document in the submitter's repo:
>
> $ git clone https://github.com/isaacs/testanything.github.io.git
> $ cd testanything.github.io
> $ git checkout tap14
> $ ls tap-version-14-specification.md
>
> My understanding is the the version 14 specification is not trying to
> add new features, but instead capture what is already implemented in
> the wild.

Oh! I didn't know about the work on TAP 14. I'll go read through this.

> > ## Here is what I propose for this patchset:
> >
> > - Print out test number range at the beginning of each test suite.
> > - Print out log lines as soon as they happen as diagnostics.
> > - Print out the lines that state whether a test passes or fails as a
> > ok/not ok line.
> >
> > This would be technically conforming with TAP13 and is consistent with
> > what some kselftests have done.

This is what I fixed kselftest to actually do (it wasn't doing correct
TAP13), and Shuah is testing the series now:
https://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest.git/log/?h=ksft-tap-refactor

I'll go read TAP 14 now...

--
Kees Cook

2019-05-06 21:40:33

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 12/17] kunit: tool: add Python wrappers for running KUnit tests

> On 5/3/19 4:14 PM, Brendan Higgins wrote:
> >> On 5/2/19 10:36 PM, Brendan Higgins wrote:
> >>> On Thu, May 2, 2019 at 6:45 PM Frank Rowand <[email protected]> wrote:
> >>>>
> >>>> On 5/2/19 4:45 PM, Brendan Higgins wrote:
> >>>>> On Thu, May 2, 2019 at 2:16 PM Frank Rowand <[email protected]> wrote:
> >>>>>>
> >>>>>> On 5/2/19 11:07 AM, Brendan Higgins wrote:
> >>>>>>> On Thu, May 2, 2019 at 4:02 AM Greg KH <[email protected]> wrote:
> >>>>>>>>
> >>>>>>>> On Wed, May 01, 2019 at 04:01:21PM -0700, Brendan Higgins wrote:
> >>>>>>>>> From: Felix Guo <[email protected]>
> >>>>>>>>>
> >>>>>>>>> The ultimate goal is to create minimal isolated test binaries; in the
> >>>>>>>>> meantime we are using UML to provide the infrastructure to run tests, so
> >>>>>>>>> define an abstract way to configure and run tests that allow us to
> >>>>>>>>> change the context in which tests are built without affecting the user.
> >>>>>>>>> This also makes pretty and dynamic error reporting, and a lot of other
> >>>>>>>>> nice features easier.
> >>>>>>>>>
> >>>>>>>>> kunit_config.py:
> >>>>>>>>> - parse .config and Kconfig files.
> >>>>>>>>>
> >>>>>>>>> kunit_kernel.py: provides helper functions to:
> >>>>>>>>> - configure the kernel using kunitconfig.
> >>>>>>>>> - build the kernel with the appropriate configuration.
> >>>>>>>>> - provide function to invoke the kernel and stream the output back.
> >>>>>>>>>
> >>>>>>>>> Signed-off-by: Felix Guo <[email protected]>
> >>>>>>>>> Signed-off-by: Brendan Higgins <[email protected]>
> >>>>>>>>
> >>>>>>>> Ah, here's probably my answer to my previous logging format question,
> >>>>>>>> right? What's the chance that these wrappers output stuff in a standard
> >>>>>>>> format that test-framework-tools can already parse? :)
> >>>>>
> >>>>> To be clear, the test-framework-tools format we are talking about is
> >>>>> TAP13[1], correct?
> >>>>
> >>>> I'm not sure what the test community prefers for a format. I'll let them
> >>>> jump in and debate that question.
> >>>>
> >>>>
> >>>>>
> >>>>> My understanding is that is what kselftest is being converted to use.
> >>>>>
> >>>>>>>
> >>>>>>> It should be pretty easy to do. I had some patches that pack up the
> >>>>>>> results into a serialized format for a presubmit service; it should be
> >>>>>>> pretty straightforward to take the same logic and just change the
> >>>>>>> output format.
> >>>>>>
> >>>>>> When examining and trying out the previous versions of the patch I found
> >>>>>> the wrappers useful to provide information about how to control and use
> >>>>>> the tests, but I had no interest in using the scripts as they do not
> >>>>>> fit in with my personal environment and workflow.
> >>>>>>
> >>>>>> In the previous versions of the patch, these helper scripts are optional,
> >>>>>> which is good for my use case. If the helper scripts are required to
> >>>>>
> >>>>> They are still optional.
> >>>>>
> >>>>>> get the data into the proper format then the scripts are not quite so
> >>>>>> optional, they become the expected environment. I think the proper
> >>>>>> format should exist without the helper scripts.
> >>>>>
> >>>>> That's a good point. A couple things,
> >>>>>
> >>>>> First off, supporting TAP13, either in the kernel or the wrapper
> >>>>> script is not hard, but I don't think that is the real issue that you
> >>>>> raise.
> >>>>>
> >>>>> If your only concern is that you will always be able to have human
> >>>>> readable KUnit results printed to the kernel log, that is a guarantee
> >>>>> I feel comfortable making. Beyond that, I think it is going to take a
> >>>>> long while before I would feel comfortable guaranteeing anything about
> >>>>> how will KUnit work, what kind of data it will want to expose, and how
> >>>>> it will be organized. I think the wrapper script provides a nice
> >>>>> facade that I can maintain, can mediate between the implementation
> >>>>> details and the user, and can mediate between the implementation
> >>>>> details and other pieces of software that might want to consume
> >>>>> results.
> >>>>>
> >>>>> [1] https://testanything.org/tap-version-13-specification.html
> >>>>
> >>>> My concern is based on a focus on my little part of the world
> >>>> (which in _previous_ versions of the patch series was the devicetree
> >>>> unittest.c tests being converted to use the kunit infrastructure).
> >>>> If I step back and think of the entire kernel globally I may end
> >>>> up with a different conclusion - but I'm going to remain myopic
> >>>> for this email.
> >>>>
> >>>> I want the test results to be usable by me and my fellow
> >>>> developers. I prefer that the test results be easily accessible
> >>>> (current printk() implementation means that kunit messages are
> >>>> just as accessible as the current unittest.c printk() output).
> >>>> If the printk() output needs to be filtered through a script
> >>>> to generate the actual test results then that is sub-optimal
> >>>> to me. It is one more step added to my workflow. And
> >>>> potentially with an embedded target a major pain to get a
> >>>> data file (the kernel log file) transferred from a target
> >>>> to my development host.
> >>>
> >>> That's fair. If that is indeed your only concern, then I don't think
> >>> the wrapper script will ever be an issue for you. You will always be
> >>> able to execute a given test the old fashioned/manual way, and the
> >>> wrapper script only summarizes results, it does not change the
> >>> contents.
> >>>
> >>>>
> >>>> I want a reported test failure to be easy to trace back to the
> >>>> point in the source where the failure is reported. With printk()
> >>>> the search is a simple grep for the failure message. If the
> >>>> failure message has been processed by a script, and then the
> >>>> failure reported to me in an email, then I may have to look
> >>>> at the script to reverse engineer how the original failure
> >>>> message was transformed into the message that was reported
> >>>> to me in the email. Then I search for the point in the
> >>>> source where the failure is reported. So a basic task has
> >>>> just become more difficult and time consuming.
> >>>
> >>> That seems to be a valid concern. I would reiterate that you shouldn't
> >>> be concerned by any processing done by the wrapper script itself, but
> >>> the reality is that depending on what happens with automated
> >>> testing/presubmit/CI other people might end up parsing and
> >>> transforming test results - it might happen, it might not.
> >>
> >> You seem to be missing my point.
> >>
> >> Greg asked that the output be in a standard format.
> >>
> >> You replied that the standard format could be created by the wrapper script.
> >
> > I thought Greg originally meant that that is how it could be done when
> > he first commented on this patch, so I was agreeing and elaborating.
> > Nevertheless, it seems you and Greg are now in agreement on this
> > point, so I won't argue it further.
> >
> >>
> >> Now you say that "it might happen, it might not". In other words the output
> >> may or may not end up in the standard format.
> >
> > Sorry, that was in reference to your concern about getting an email in
> > a different format than what the tool that you use generates. It
> > wasn't a statement about what I was or wasn't going to do in regards
> > to supporting a standard format.
> >
> >>
> >> As Greg points out in comments to patch 12:
> >>
> >> "The core of kunit should also log the messages in this format as well,
> >> and not rely on the helper scripts as Frank points out, not everyone
> >> will use/want them. Might as well make it easy for everyone to always
> >> do the right thing and not force it to always be added in later."
> >>
> >> I am requesting that the original message be in the standard format. Of
> >> course anyone is free to transform the messages in later processing, no
> >> big deal.
> >
> > My mistake, I thought that was a concern of yours.
> >
> > In any case, it sounds like you and Greg are in agreement on the core
> > libraries generating the output in TAP13, so I won't argue that point
> > further.
> >
> > ## Analysis of using TAP13
>
> I have never looked at TAP version 13 in any depth at all, so do not consider
> me to be any sort of expert.
>
> My entire TAP knowledge is based on:
>
> https://testanything.org/tap-version-13-specification.html
>
> and the pull request to create the TAP version 14 specification:
>
> https://github.com/TestAnything/testanything.github.io/pull/36/files
>
> You can see the full version 14 document in the submitter's repo:
>
> $ git clone https://github.com/isaacs/testanything.github.io.git
> $ cd testanything.github.io
> $ git checkout tap14
> $ ls tap-version-14-specification.md
>
> My understanding is that the version 14 specification is not trying to
> add new features, but instead capture what is already implemented in
> the wild.
>
>
> > One of my earlier concerns was that TAP13 is a bit over constrained
> > for what I would like to output from the KUnit core. It only allows
> > data to be output as either:
> > - test number
> > - ok/not ok with single line description
> > - directive
> > - diagnostics
> > - YAML block
> >
> > The test number must come before a set of ok/not ok lines, and does
> > not contain any additional information. One annoying thing about this
> > is it doesn't provide any kind of nesting or grouping.
>
> Greg's response mentions ktest (?) already does nesting.

I think we are talking about kselftest.

> Version 14 allows nesting through subtests. I have not looked at what
> ktest does, so I do not know if it uses subtest, or something else.

Oh nice! That is new in version 14. I can use that.

> > There is one ok/not ok line per test and it may have a short
> > description of the test immediately after 'ok' or 'not ok'; this is
> > problematic because it wants the first thing you say about a test to
> > be after you know whether it passes or not.
>
> I think you could output a diagnostic line that says a test is starting.
> This is important to me because printk() errors and warnings that are
> related to a test can be output by a subsystem other than the subsystem
> that I am testing. If there is no marker at the start of the test
> then there is no way to attribute the printk()s to the test.

I agree.

Printing a diagnostic line when a test starts technically conforms with
the spec, and kselftest does that, but it is not itself part of the
spec. Well, it *is* specified if you use subtests. I think the right
approach is to make each "kunit_module/test suite" a test, and all the
test cases will be subtests.
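
Purely as an illustration (this is not output from the current patches),
a suite-as-test / cases-as-subtests layout in the style of the TAP 14
subtest draft might look roughly like:

TAP version 14
1..1
# Subtest: example-test-suite
    1..2
    ok 1 - example_case_a
    # example_case_b: expectation failed at example-test.c:42
    not ok 2 - example_case_b
not ok 1 - example-test-suite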

> > Directives are just a way to specify skipped tests and TODOs.
> >
> > Diagnostics seem useful, it looks like you can put whatever
> > information in them and print them out at anytime. It looks like a lot
> > of kselftests emit a lot of data this way.
> >
> > The YAML block seems to be the way that they prefer users to emit data
> > beyond number of tests run and whether a test passed or failed. I
> > could express most things I want to express in terms of YAML, but it
> > is not the nicest format for displaying a lot of data like
> > expectations, missed function calls, and other things which have a
> > natural concise representation. Nevertheless, YAML readability is
> > mostly a problem for those who won't be using the wrapper scripts.
>
> The examples in specification V13 and V14 look very simple and very
> readable to me. (And I am not a fan of YAML.)
>
>
> > My biggest
> > problem with the YAML block is that you can only have one, and TAP
> > specifies that it must come after the corresponding ok/not ok line,
> > which again has the issue that you have to hold on to a lot of
> > diagnostic data longer than you ideally would. Another downside is
> > that I now have to write a YAML serializer for the kernel.
>
> If a test generates diagnostic data, then I would expect that to be
> the direct result of a test failure. So the test can output the
> "not ok" line, then immediately output the YAML block. I do not
> see a need for stashing YAML output ahead of time.
>
> If diagnostic data is generated before the test can determine
> success or failure, then it can be output as diagnostic data
> instead of stashing it for later.

Cool, that's what I am thinking I am going to do - I just wanted to
make sure people were okay with this approach. I mean, I think that is
what kselftest does.

We can hold off on the YAML stuff for now then.

> > ## Here is what I propose for this patchset:
> >
> > - Print out test number range at the beginning of each test suite.
> > - Print out log lines as soon as they happen as diagnostics.
> > - Print out the lines that state whether a test passes or fails as a
> > ok/not ok line.
> >
> > This would be technically conforming with TAP13 and is consistent with
> > what some kselftests have done.
> >
> > ## To be done in a future patchset:
> >
> > Add a YAML serializer and print out some logs containing structured
> > data (like expectation failures, unexpected function calls, etc) in
> > YAML blocks.
>
> A YAML serializer sounds like unneeded complexity.
>
> >
> > Does this sound reasonable? I will go ahead and start working on this,
> > but feel free to give me feedback on the overall idea in the meantime.
> >
> > Cheers

Thanks!

2019-05-06 21:44:38

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 12/17] kunit: tool: add Python wrappers for running KUnit tests

> On Sun, May 5, 2019 at 5:19 PM Frank Rowand <[email protected]> wrote:
> > You can see the full version 14 document in the submitter's repo:
> >
> > $ git clone https://github.com/isaacs/testanything.github.io.git
> > $ cd testanything.github.io
> > $ git checkout tap14
> > $ ls tap-version-14-specification.md
> >
> > My understanding is that the version 14 specification is not trying to
> > add new features, but instead capture what is already implemented in
> > the wild.
>
> Oh! I didn't know about the work on TAP 14. I'll go read through this.
>
> > > ## Here is what I propose for this patchset:
> > >
> > > - Print out test number range at the beginning of each test suite.
> > > - Print out log lines as soon as they happen as diagnostics.
> > > - Print out the lines that state whether a test passes or fails as a
> > > ok/not ok line.
> > >
> > > This would be technically conforming with TAP13 and is consistent with
> > > what some kselftests have done.
>
> This is what I fixed kselftest to actually do (it wasn't doing correct
> TAP13), and Shuah is testing the series now:
> https://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest.git/log/?h=ksft-tap-refactor

Oh, cool! I guess this is an okay approach then.

Thanks!

2019-05-07 03:16:43

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On 5/1/19 4:01 PM, Brendan Higgins wrote:
> ## TLDR
>
> I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
> 5.2.
>
> Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
> we would merge through your tree when the time came? Am I remembering
> correctly?
>
> ## Background
>
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.
>
> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> it does not require installing the kernel on a test machine or in a VM
> and does not require tests to be written in userspace running on a host
> kernel. Additionally, KUnit is fast: From invocation to completion KUnit
> can run several dozen tests in under a second. Currently, the entire
> KUnit test suite for KUnit runs in under a second from the initial
> invocation (build time excluded).
>
> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> Googletest/Googlemock for C++. KUnit provides facilities for defining
> unit test cases, grouping related test cases into test suites, providing
> common infrastructure for running tests, mocking, spying, and much more.

As a result of the emails replying to this patch thread, I am now
starting to look at kselftest. My level of understanding is based
on some slide presentations, an LWN article, https://kselftest.wiki.kernel.org/
and a _tiny_ bit of looking at kselftest code.

tl;dr; I don't really understand kselftest yet.


(1) why KUnit exists

> ## What's so special about unit testing?
>
> A unit test is supposed to test a single unit of code in isolation,
> hence the name. There should be no dependencies outside the control of
> the test; this means no external dependencies, which makes tests orders
> of magnitudes faster. Likewise, since there are no external dependencies,
> there are no hoops to jump through to run the tests. Additionally, this
> makes unit tests deterministic: a failing unit test always indicates a
> problem. Finally, because unit tests necessarily have finer granularity,
> they are able to test all code paths easily solving the classic problem
> of difficulty in exercising error handling code.

(2) KUnit is not meant to replace kselftest

> ## Is KUnit trying to replace other testing frameworks for the kernel?
>
> No. Most existing tests for the Linux kernel are end-to-end tests, which
> have their place. A well tested system has lots of unit tests, a
> reasonable number of integration tests, and some end-to-end tests. KUnit
> is just trying to address the unit test space which is currently not
> being addressed.

My understanding is that the intent of KUnit is to avoid booting a kernel on
real hardware or in a virtual machine. That seems to be a matter of semantics
to me because isn't invoking a UML Linux just running the Linux kernel in
a different form of virtualization?

So I do not understand why KUnit is an improvement over kselftest.

It seems to me that KUnit is just another piece of infrastructure that I
am going to have to be familiar with as a kernel developer. More overhead,
more information to stuff into my tiny little brain.

I would guess that some developers will focus on just one of the two test
environments (and some will focus on both), splitting the development
resources instead of pooling them on a common infrastructure.

What am I missing?

-Frank


>
> ## More information on KUnit
>
> There is a bunch of documentation near the end of this patch set that
> describes how to use KUnit and best practices for writing unit tests.
> For convenience I am hosting the compiled docs here:
> https://google.github.io/kunit-docs/third_party/kernel/docs/
> Additionally for convenience, I have applied these patches to a branch:
> https://kunit.googlesource.com/linux/+/kunit/rfc/v5.1-rc7/v1
> The repo may be cloned with:
> git clone https://kunit.googlesource.com/linux
> This patchset is on the kunit/rfc/v5.1-rc7/v1 branch.
>
> ## Changes Since Last Version
>
> None. I just rebased the last patchset on v5.1-rc7.
>

2019-05-07 08:02:40

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Mon, May 06, 2019 at 08:14:12PM -0700, Frank Rowand wrote:
> On 5/1/19 4:01 PM, Brendan Higgins wrote:
> > ## TLDR
> >
> > I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
> > 5.2.
> >
> > Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
> > we would merge through your tree when the time came? Am I remembering
> > correctly?
> >
> > ## Background
> >
> > This patch set proposes KUnit, a lightweight unit testing and mocking
> > framework for the Linux kernel.
> >
> > Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> > it does not require installing the kernel on a test machine or in a VM
> > and does not require tests to be written in userspace running on a host
> > kernel. Additionally, KUnit is fast: From invocation to completion KUnit
> > can run several dozen tests in under a second. Currently, the entire
> > KUnit test suite for KUnit runs in under a second from the initial
> > invocation (build time excluded).
> >
> > KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> > Googletest/Googlemock for C++. KUnit provides facilities for defining
> > unit test cases, grouping related test cases into test suites, providing
> > common infrastructure for running tests, mocking, spying, and much more.
>
> As a result of the emails replying to this patch thread, I am now
> starting to look at kselftest. My level of understanding is based
> on some slide presentations, an LWN article, https://kselftest.wiki.kernel.org/
> and a _tiny_ bit of looking at kselftest code.
>
> tl;dr; I don't really understand kselftest yet.
>
>
> (1) why KUnit exists
>
> > ## What's so special about unit testing?
> >
> > A unit test is supposed to test a single unit of code in isolation,
> > hence the name. There should be no dependencies outside the control of
> > the test; this means no external dependencies, which makes tests orders
> > of magnitudes faster. Likewise, since there are no external dependencies,
> > there are no hoops to jump through to run the tests. Additionally, this
> > makes unit tests deterministic: a failing unit test always indicates a
> > problem. Finally, because unit tests necessarily have finer granularity,
> > they are able to test all code paths easily solving the classic problem
> > of difficulty in exercising error handling code.
>
> (2) KUnit is not meant to replace kselftest
>
> > ## Is KUnit trying to replace other testing frameworks for the kernel?
> >
> > No. Most existing tests for the Linux kernel are end-to-end tests, which
> > have their place. A well tested system has lots of unit tests, a
> > reasonable number of integration tests, and some end-to-end tests. KUnit
> > is just trying to address the unit test space which is currently not
> > being addressed.
>
> My understanding is that the intent of KUnit is to avoid booting a kernel on
> real hardware or in a virtual machine. That seems to be a matter of semantics
> to me because isn't invoking a UML Linux just running the Linux kernel in
> a different form of virtualization?
>
> So I do not understand why KUnit is an improvement over kselftest.
>
> It seems to me that KUnit is just another piece of infrastructure that I
> am going to have to be familiar with as a kernel developer. More overhead,
> more information to stuff into my tiny little brain.
>
> I would guess that some developers will focus on just one of the two test
> environments (and some will focus on both), splitting the development
> resources instead of pooling them on a common infrastructure.
>
> What am I missing?

kselftest provides no in-kernel framework for testing kernel code
specifically. That should be what kunit provides, an "easy" way to
write in-kernel tests for things.

Brendan, did I get it right?

thanks,

greg k-h

2019-05-07 15:25:09

by Shuah Khan

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On 5/7/19 2:01 AM, Greg KH wrote:
> On Mon, May 06, 2019 at 08:14:12PM -0700, Frank Rowand wrote:
>> On 5/1/19 4:01 PM, Brendan Higgins wrote:
>>> ## TLDR
>>>
>>> I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
>>> 5.2.
>>>
>>> Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
>>> we would merge through your tree when the time came? Am I remembering
>>> correctly?
>>>
>>> ## Background
>>>
>>> This patch set proposes KUnit, a lightweight unit testing and mocking
>>> framework for the Linux kernel.
>>>
>>> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
>>> it does not require installing the kernel on a test machine or in a VM
>>> and does not require tests to be written in userspace running on a host
>>> kernel. Additionally, KUnit is fast: From invocation to completion KUnit
>>> can run several dozen tests in under a second. Currently, the entire
>>> KUnit test suite for KUnit runs in under a second from the initial
>>> invocation (build time excluded).
>>>
>>> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
>>> Googletest/Googlemock for C++. KUnit provides facilities for defining
>>> unit test cases, grouping related test cases into test suites, providing
>>> common infrastructure for running tests, mocking, spying, and much more.
>>
>> As a result of the emails replying to this patch thread, I am now
>> starting to look at kselftest. My level of understanding is based
>> on some slide presentations, an LWN article, https://kselftest.wiki.kernel.org/
>> and a _tiny_ bit of looking at kselftest code.
>>
>> tl;dr; I don't really understand kselftest yet.
>>
>>
>> (1) why KUnit exists
>>
>>> ## What's so special about unit testing?
>>>
>>> A unit test is supposed to test a single unit of code in isolation,
>>> hence the name. There should be no dependencies outside the control of
>>> the test; this means no external dependencies, which makes tests orders
>>> of magnitudes faster. Likewise, since there are no external dependencies,
>>> there are no hoops to jump through to run the tests. Additionally, this
>>> makes unit tests deterministic: a failing unit test always indicates a
>>> problem. Finally, because unit tests necessarily have finer granularity,
>>> they are able to test all code paths easily solving the classic problem
>>> of difficulty in exercising error handling code.
>>
>> (2) KUnit is not meant to replace kselftest
>>
>>> ## Is KUnit trying to replace other testing frameworks for the kernel?
>>>
>>> No. Most existing tests for the Linux kernel are end-to-end tests, which
>>> have their place. A well tested system has lots of unit tests, a
>>> reasonable number of integration tests, and some end-to-end tests. KUnit
>>> is just trying to address the unit test space which is currently not
>>> being addressed.
>>
>> My understanding is that the intent of KUnit is to avoid booting a kernel on
>> real hardware or in a virtual machine. That seems to be a matter of semantics
>> to me because isn't invoking a UML Linux just running the Linux kernel in
>> a different form of virtualization?
>>
>> So I do not understand why KUnit is an improvement over kselftest.

They are in two different categories. Kselftest falls into the black box
regression test suite category: a collection of user-space tests, with a
few kernel test modules backing the tests in some cases.

Kselftest can be used by both kernel developers and users and provides
a good way to regression test releases in test rings.

KUnit is in the white box category and is a better fit as a unit test
framework for development, providing in-kernel testing. I wouldn't view
one as replacing the other. They just provide coverage for different areas
of testing.

I wouldn't view KUnit as something that would be easily run in test
rings for example.

Brendan, does that sound about right?

>>
>> It seems to me that KUnit is just another piece of infrastructure that I
>> am going to have to be familiar with as a kernel developer. More overhead,
>> more information to stuff into my tiny little brain.
>>
>> I would guess that some developers will focus on just one of the two test
>> environments (and some will focus on both), splitting the development
>> resources instead of pooling them on a common infrastructure.

>> What am I missing?
>
> kselftest provides no in-kernel framework for testing kernel code
> specifically. That should be what kunit provides, an "easy" way to
> write in-kernel tests for things.
>
> Brendan, did I get it right?
thanks,
-- Shuah

2019-05-07 17:25:55

by Theodore Ts'o

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Tue, May 07, 2019 at 10:01:19AM +0200, Greg KH wrote:
> > My understanding is that the intent of KUnit is to avoid booting a kernel on
> > real hardware or in a virtual machine. That seems to be a matter of semantics
> > to me because isn't invoking a UML Linux just running the Linux kernel in
> > a different form of virtualization?
> >
> > So I do not understand why KUnit is an improvement over kselftest.
> >
> > It seems to me that KUnit is just another piece of infrastructure that I
> > am going to have to be familiar with as a kernel developer. More overhead,
> > more information to stuff into my tiny little brain.
> >
> > I would guess that some developers will focus on just one of the two test
> > environments (and some will focus on both), splitting the development
> > resources instead of pooling them on a common infrastructure.
> >
> > What am I missing?
>
> kselftest provides no in-kernel framework for testing kernel code
> specifically. That should be what kunit provides, an "easy" way to
> write in-kernel tests for things.
>
> Brendan, did I get it right?

Yes, that's basically right. You don't *have* to use KUnit. It's
supposed to be a simple way to run a large number of small tests for
specific small components in a system.

For example, I currently use xfstests using KVM and GCE to test all of
ext4. These tests require using multiple 5 GB and 20GB virtual disks,
and it works by mounting ext4 file systems and exercising ext4 through
the system call interfaces, using userspace tools such as fsstress,
fsx, fio, etc. It requires time overhead to start the VM, create and
allocate virtual disks, etc. For example, to run a single 3 seconds
xfstest (generic/001), it requires full 10 seconds to run it via
kvm-xfstests.

KUnit is something else; it's specifically intended to allow you to
create lightweight tests quickly and easily, and by reducing the
effort needed to write and run unit tests, hopefully we'll have a lot
more of them and thus improve kernel quality.

As an example, I have a volunteer working on developing KUnit tests
for ext4. We're going to start by testing the ext4 extent status
tree. The source code is at fs/ext4/extent_status.c; it's
approximately 1800 LOC. The KUnit tests for the extent status tree
will exercise all of the corner cases for the various extent status
tree functions --- e.g., ext4_es_insert_delayed_block(),
ext4_es_remove_extent(), ext4_es_cache_extent(), etc. And it will do
this in isolation without our needing to create a test file system or
using a test block device.

Next we'll test the ext4 block allocator, again in isolation. To test
the block allocator we will have to write "mock functions" which
simulate reading allocation bitmaps from disk. Again, this will allow
the test writer to explicitly construct corner cases and validate that
the block allocator works as expected without having to reverse
engineer file system data structures which will force a particular
code path to be executed.
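
To make the shape of such a test concrete, here is a rough sketch. Only
the KUNIT_EXPECT_* / kunit_kzalloc() style of API is taken from the
proposed framework; the two "fake_es_*" helpers are hypothetical
stand-ins for the real fs/ext4/extent_status.c entry points, not actual
interfaces:

#include <kunit/test.h>
#include <linux/fs.h>
#include <linux/slab.h>

/* Hypothetical stand-in for ext4_es_insert_extent() and friends. */
static int fake_es_insert_extent(struct inode *inode, unsigned int lblk,
				 unsigned int len)
{
	return 0;
}

/* Hypothetical stand-in for ext4_es_remove_extent(). */
static int fake_es_remove_extent(struct inode *inode, unsigned int lblk,
				 unsigned int len)
{
	return 0;
}

static void es_insert_then_remove_test(struct kunit *test)
{
	/* Test-managed allocation: cleaned up automatically after the test. */
	struct inode *inode = kunit_kzalloc(test, sizeof(*inode), GFP_KERNEL);

	KUNIT_EXPECT_TRUE(test, inode != NULL);
	KUNIT_EXPECT_EQ(test, 0, fake_es_insert_extent(inode, 0, 16));
	KUNIT_EXPECT_EQ(test, 0, fake_es_remove_extent(inode, 0, 16));
}

The point is that everything here runs in-process against in-memory
state; no test file system or block device is involved.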

So this is why it's largely irrelevant to me that KUnit uses UML. In
fact, it's a feature. We're not testing device drivers, or the
scheduler, or anything else architecture-specific. UML is not about
virtualization. What it's about in this context is allowing us to
start running test code as quickly as possible. Booting KVM takes
about 3-4 seconds, and this includes initializing virtio_scsi and
other device drivers. If by using UML we can hold the amount of
unnecessary kernel subsystem initialization down to the absolute
minimum, and if it means that we can communicate with the test
framework via a userspace "printf" from UML/KUnit code, as opposed to
via a virtual serial port to KVM's virtual console, it all makes for
lighter weight testing.

Why did I go looking for a volunteer to write KUnit tests for ext4?
Well, I have a plan to make some changes in restructuring how ext4's
write path works, in order to support things like copy-on-write, a
more efficient delayed allocation system, etc. This will require
making changes to the extent status tree, and by having unit tests for
the extent status tree, we'll be able to detect any bugs that we might
accidentally introduce in the es tree far more quickly than if we
didn't have those tests available. Google has long found that having
these sorts of unit tests is a real win for developer velocity for any
non-trivial code module (or C++ class), even when you take into
account the time it takes to create the unit tests.

- Ted

P.S. Many thanks to Brendan for finding such a volunteer for me; the
person in question is a SRE from Switzerland who is interested in
getting involved with kernel testing, and this is going to be their
20% project. :-)

2019-05-07 19:15:01

by Bird, Tim

[permalink] [raw]
Subject: RE: [PATCH v2 12/17] kunit: tool: add Python wrappers for running KUnit tests

Here is a bit of inline commentary on the TAP13/TAP14 discussion.

> -----Original Message-----
> From: Brendan Higgins
>
> > On 5/3/19 4:14 PM, Brendan Higgins wrote:
> > >> On 5/2/19 10:36 PM, Brendan Higgins wrote:
> > > In any case, it sounds like you and Greg are in agreement on the core
> > > libraries generating the output in TAP13, so I won't argue that point
> > > further.
> > >
> > > ## Analysis of using TAP13
> >
> > I have never looked at TAP version 13 in any depth at all, so do not consider
> > me to be any sort of expert.
> >
> > My entire TAP knowledge is based on:
> >
> > https://testanything.org/tap-version-13-specification.html
> >
> > and the pull request to create the TAP version 14 specification:
> >
> > https://github.com/TestAnything/testanything.github.io/pull/36/files
> >
> > You can see the full version 14 document in the submitter's repo:
> >
> > $ git clone https://github.com/isaacs/testanything.github.io.git
> > $ cd testanything.github.io
> > $ git checkout tap14
> > $ ls tap-version-14-specification.md
> >
> > My understanding is that the version 14 specification is not trying to
> > add new features, but instead capture what is already implemented in
> > the wild.
> >
> >
> > > One of my earlier concerns was that TAP13 is a bit over constrained
> > > for what I would like to output from the KUnit core. It only allows
> > > data to be output as either:
> > > - test number
> > > - ok/not ok with single line description
> > > - directive
> > > - diagnostics
> > > - YAML block
> > >
> > > The test number must come before a set of ok/not ok lines, and does
> > > not contain any additional information. One annoying thing about this
> > > is it doesn't provide any kind of nesting or grouping.
> >
> > Greg's response mentions ktest (?) already does nesting.
>
> I think we are talking about kselftest.
>
> > Version 14 allows nesting through subtests. I have not looked at what
> > ktest does, so I do not know if it uses subtest, or something else.
>
> Oh nice! That is new in version 14. I can use that.

We have run into the problem of subtests (or nested tests, both using
TAP13) in Fuego. I recall that this issue came up in kselftest, and I believe
we discussed a solution, but I don't recall what it was.

Can someone remind me what kselftest does to handle nested tests
(in terms of TAP13 output)?

>
> > > There is one ok/not ok line per test and it may have a short
> > > description of the test immediately after 'ok' or 'not ok'; this is
> > > problematic because it wants the first thing you say about a test to
> > > be after you know whether it passes or not.
> >
> > I think you could output a diagnostic line that says a test is starting.
> > This is important to me because printk() errors and warnings that are
> > related to a test can be output by a subsystem other than the subsystem
> > that I am testing. If there is no marker at the start of the test
> > then there is no way to attribute the printk()s to the test.
>
> I agree.

This is a significant problem. In Fuego we output each line with a test id prefix,
which goes against the spec, but helps solve this. Test output should be
kept separate from system output, but if I understand correctly, there are no
channels in printk to use to keep different data streams separate.

How does kselftest deal with this now?

>
> Printing a diagnostic line when a test starts technically conforms with
> the spec, and kselftest does that, but it is not itself part of the
> spec. Well, it *is* specified if you use subtests. I think the right
> approach is to make each "kunit_module/test suite" a test, and all the
> test cases will be subtests.
>
> > > Directives are just a way to specify skipped tests and TODOs.
> > >
> > > Diagnostics seem useful, it looks like you can put whatever
> > > information in them and print them out at anytime. It looks like a lot
> > > of kselftests emit a lot of data this way.
> > >
> > > The YAML block seems to be the way that they prefer users to emit data
> > > beyond number of tests run and whether a test passed or failed. I
> > > could express most things I want to express in terms of YAML, but it
> > > is not the nicest format for displaying a lot of data like
> > > expectations, missed function calls, and other things which have a
> > > natural concise representation. Nevertheless, YAML readability is
> > > mostly a problem for those who won't be using the wrapper scripts.
> >
> > The examples in specification V13 and V14 look very simple and very
> > readable to me. (And I am not a fan of YAML.)
> >
> >
> > > My biggest
> > > problem with the YAML block is that you can only have one, and TAP
> > > specifies that it must come after the corresponding ok/not ok line,
> > > which again has the issue that you have to hold on to a lot of
> > > diagnostic data longer than you ideally would. Another downside is
> > > that I now have to write a YAML serializer for the kernel.
> >
> > If a test generates diagnostic data, then I would expect that to be
> > the direct result of a test failure. So the test can output the
> > "not ok" line, then immediately output the YAML block. I do not
> > see a need for stashing YAML output ahead of time.
> >
> > If diagnostic data is generated before the test can determine
> > success or failure, then it can be output as diagnostic data
> > instead of stashing it for later.
>
> Cool, that's what I am thinking I am going to do - I just wanted to
> make sure people were okay with this approach. I mean, I think that is
> what kselftest does.

IMHO the diagnostic data does not have to be in YAML. That's only
if there's a well-known schema for the diagnostic data, to make the
data machine-readable. TAP13 specifically avoided defining such a
schema. I need to look at TAP14 and see if they have defined something.
(Thanks for bringing that to my attention.)

The important part, since there are no start and end delimiters for each
testcase, is to structure output (including from unrelated sub-systems
affected by the test) to either occur all before or all after the test line.
Otherwise it's impossible to sensibly parse the diagnostic data and associate it
with a test. (That is, the TAP lines become the delimiters between each testcase's
output and data). This is a pretty big weakness of TAP13. Since the TAP line
has the test result, it usually means that the subsystem output for the test
is emitted *before* the TAP line. It's preferable, in order to keep the
data together, that the diagnostic data also be emitted before the TAP
line.

>
> We can hold off on the YAML stuff for now then.
>
> > > ## Here is what I propose for this patchset:
> > >
> > > - Print out test number range at the beginning of each test suite.
> > > - Print out log lines as soon as they happen as diagnostics.
> > > - Print out the lines that state whether a test passes or fails as a
> > > ok/not ok line.
> > >
> > > This would be technically conforming with TAP13 and is consistent with
> > > what some kselftests have done.
> > >
> > > ## To be done in a future patchset:
> > >
> > > Add a YAML serializer and print out some logs containing structured
> > > data (like expectation failures, unexpected function calls, etc) in
> > > YAML blocks.
> >
> > A YAML serializer sounds like unneeded complexity.
I agree, for now.

I think if we start to see some patterns for some data that many tests
output, we might want (as a kernel community) to define a YAML
schema for the kselftest output. But I think that's biting off too much
right now. IMHO we would want any YAML schema we define to
cover more than just unit tests, so the job of defining that would be
pretty big.

This would be a good discussion to have at a testing micro-conference
or summit. :-)

> >
> > >
> > > Does this sound reasonable? I will go ahead and start working on this,
> > > but feel free to give me feedback on the overall idea in the meantime.

Sounds good. Thanks for working on this.
-- Tim

2019-05-08 19:23:16

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

> On Tue, May 07, 2019 at 10:01:19AM +0200, Greg KH wrote:
> > > My understanding is that the intent of KUnit is to avoid booting a kernel on
> > > real hardware or in a virtual machine. That seems to be a matter of semantics
> > > to me because isn't invoking a UML Linux just running the Linux kernel in
> > > a different form of virtualization?
> > >
> > > So I do not understand why KUnit is an improvement over kselftest.
> > >
> > > It seems to me that KUnit is just another piece of infrastructure that I
> > > am going to have to be familiar with as a kernel developer. More overhead,
> > > more information to stuff into my tiny little brain.
> > >
> > > I would guess that some developers will focus on just one of the two test
> > > environments (and some will focus on both), splitting the development
> > > resources instead of pooling them on a common infrastructure.
> > >
> > > What am I missing?
> >
> > kselftest provides no in-kernel framework for testing kernel code
> > specifically. That should be what kunit provides, an "easy" way to
> > write in-kernel tests for things.
> >
> > Brendan, did I get it right?
>
> Yes, that's basically right. You don't *have* to use KUnit. It's
> supposed to be a simple way to run a large number of small tests for
> specific small components in a system.
>
> For example, I currently use xfstests using KVM and GCE to test all of
> ext4. These tests require using multiple 5 GB and 20GB virtual disks,
> and it works by mounting ext4 file systems and exercising ext4 through
> the system call interfaces, using userspace tools such as fsstress,
> fsx, fio, etc. It requires time overhead to start the VM, create and
> allocate virtual disks, etc. For example, to run a single 3 seconds
> xfstest (generic/001), it requires full 10 seconds to run it via
> kvm-xfstests.
>
> KUnit is something else; it's specifically intended to allow you to
> create lightweight tests quickly and easily, and by reducing the
> effort needed to write and run unit tests, hopefully we'll have a lot
> more of them and thus improve kernel quality.
>
> As an example, I have a volunteer working on developing KUnit tests
> for ext4. We're going to start by testing the ext4 extent status
> tree. The source code is at fs/ext4/extent_status.c; it's
> approximately 1800 LOC. The KUnit tests for the extent status tree
> will exercise all of the corner cases for the various extent status
> tree functions --- e.g., ext4_es_insert_delayed_block(),
> ext4_es_remove_extent(), ext4_es_cache_extent(), etc. And it will do
> this in isolation without our needing to create a test file system or
> using a test block device.
>
> Next we'll test the ext4 block allocator, again in isolation. To test
> the block allocator we will have to write "mock functions" which
> simulate reading allocation bitmaps from disk. Again, this will allow
> the test writer to explicitly construct corner cases and validate that
> the block allocator works as expected without having to reverse
> engineer file system data structures which will force a particular
> code path to be executed.
>
> So this is why it's largely irrelevant to me that KUnit uses UML. In
> fact, it's a feature. We're not testing device drivers, or the
> scheduler, or anything else architecture-specific. UML is not about
> virtualization. What it's about in this context is allowing us to
> start running test code as quickly as possible. Booting KVM takes
> about 3-4 seconds, and this includes initializing virtio_scsi and
> other device drivers. If by using UML we can hold the amount of
> unnecessary kernel subsystem initialization down to the absolute
> minimum, and if it means that we can communicate with the test
> framework via a userspace "printf" from UML/KUnit code, as opposed to
> via a virtual serial port to KVM's virtual console, it all makes for
> lighter weight testing.
>
> Why did I go looking for a volunteer to write KUnit tests for ext4?
> Well, I have a plan to make some changes in restructuring how ext4's
> write path works, in order to support things like copy-on-write, a
> more efficient delayed allocation system, etc. This will require
> making changes to the extent status tree, and by having unit tests for
> the extent status tree, we'll be able to detect any bugs that we might
> accidentally introduce in the es tree far more quickly than if we
> didn't have those tests available. Google has long found that having
> these sorts of unit tests is a real win for developer velocity for any
> non-trivial code module (or C++ class), even when you take into
> account the time it takes to create the unit tests.
>
> - Ted
>
> P.S. Many thanks to Brendan for finding such a volunteer for me; the
> person in question is a SRE from Switzerland who is interested in
> getting involved with kernel testing, and this is going to be their
> 20% project. :-)

Thanks Ted, I really appreciate it!

Since Ted provided such an awesome detailed response, I don't think I
really need to go into any detail; nevertheless, I think that Greg and
Shuah have the right idea; in particular, Shuah provides a good
summary.

Thanks everyone!

2019-05-09 00:59:59

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

Hi Ted,

On 5/7/19 10:22 AM, Theodore Ts'o wrote:
> On Tue, May 07, 2019 at 10:01:19AM +0200, Greg KH wrote:
Not very helpful to cut the text here, plus not explicitly indicating that
text was cut (yes, I know the ">>>" will be a clue for the careful reader),
losing the set up for my question.


>>> My understanding is that the intent of KUnit is to avoid booting a kernel on
>>> real hardware or in a virtual machine. That seems to be a matter of semantics
>>> to me because isn't invoking a UML Linux just running the Linux kernel in
>>> a different form of virtualization?
>>>
>>> So I do not understand why KUnit is an improvement over kselftest.
>>>
>>> It seems to me that KUnit is just another piece of infrastructure that I
>>> am going to have to be familiar with as a kernel developer. More overhead,
>>> more information to stuff into my tiny little brain.
>>>
>>> I would guess that some developers will focus on just one of the two test
>>> environments (and some will focus on both), splitting the development
>>> resources instead of pooling them on a common infrastructure.
>>>
>>> What am I missing?
>>
>> kselftest provides no in-kernel framework for testing kernel code
>> specifically. That should be what kunit provides, an "easy" way to
>> write in-kernel tests for things.
>>
>> Brendan, did I get it right?
>
> Yes, that's basically right. You don't *have* to use KUnit. It's

If KUnit is added to the kernel, and a subsystem that I am submitting
code for has chosen to use KUnit instead of kselftest, then yes, I do
*have* to use KUnit if my submission needs to contain a test for the
code unless I want to convince the maintainer that somehow my case
is special and I prefer to use kselftest instead of KUnit.


> supposed to be a simple way to run a large number of small tests for
> specific small components in a system.

kselftest also supports running a subset of tests. That subset of tests
can also be a large number of small tests. There is nothing inherent
in KUnit vs kselftest in this regard, as far as I am aware.


> For example, I currently use xfstests using KVM and GCE to test all of
> ext4. These tests require using multiple 5 GB and 20GB virtual disks,
> and it works by mounting ext4 file systems and exercising ext4 through
> the system call interfaces, using userspace tools such as fsstress,
> fsx, fio, etc. It requires time overhead to start the VM, create and
> allocate virtual disks, etc. For example, to run a single 3 seconds
> xfstest (generic/001), it requires full 10 seconds to run it via
> kvm-xfstests.
>


> KUnit is something else; it's specifically intended to allow you to
> create lightweight tests quickly and easily, and by reducing the
> effort needed to write and run unit tests, hopefully we'll have a lot
> more of them and thus improve kernel quality.

The same is true of kselftest. You can create lightweight tests in
kselftest.


> As an example, I have a volunteer working on developing KUnit tests
> for ext4. We're going to start by testing the ext4 extent status
> tree. The source code is at fs/ext4/extent_status.c; it's
> approximately 1800 LOC. The KUnit tests for the extent status tree
> will exercise all of the corner cases for the various extent status
> tree functions --- e.g., ext4_es_insert_delayed_block(),
> ext4_es_remove_extent(), ext4_es_cache_extent(), etc. And it will do
> this in isolation without our needing to create a test file system or
> using a test block device.
>

> Next we'll test the ext4 block allocator, again in isolation. To test
> the block allocator we will have to write "mock functions" which
> simulate reading allocation bitmaps from disk. Again, this will allow
> the test writer to explicitly construct corner cases and validate that
> the block allocator works as expected without having to reverse
> engineer file system data structures which will force a particular
> code path to be executed.

This would be a difference, but mock functions do not exist in KUnit.
The KUnit test will call the real kernel function in the UML kernel.

I think Brendan has indicated a desire to have mock functions in the
future.

Brendan, do I understand that correctly?

-Frank

> So this is why it's largely irrelevant to me that KUnit uses UML. In
> fact, it's a feature. We're not testing device drivers, or the
> scheduler, or anything else architecture-specific. UML is not about
> virtualization. What it's about in this context is allowing us to
> start running test code as quickly as possible. Booting KVM takes
> about 3-4 seconds, and this includes initializing virtio_scsi and
> other device drivers. If by using UML we can hold the amount of
> unnecessary kernel subsystem initialization down to the absolute
> minimum, and if it means that we can communicate to the test
> framework via a userspace "printf" from UML/KUnit code, as opposed to
> via a virtual serial port to KVM's virtual console, it all makes for
> lighter weight testing.
>
> Why did I go looking for a volunteer to write KUnit tests for ext4?
> Well, I have a plan to make some changes in restructuring how ext4's
> write path works, in order to support things like copy-on-write, a
> more efficient delayed allocation system, etc. This will require
> making changes to the extent status tree, and by having unit tests for
> the extent status tree, we'll be able to detect any bugs that we might
> accidentally introduce in the es tree far more quickly than if we
> didn't have those tests available. Google has long found that having
> these sorts of unit tests is a real win for developer velocity for any
> non-trivial code module (or C++ class), even when you take into
> account the time it takes to create the unit tests.
>
> - Ted
>
> P.S. Many thanks to Brendan for finding such a volunteer for me; the
> person in question is a SRE from Switzerland who is interested in
> getting involved with kernel testing, and this is going to be their
> 20% project. :-)
>
>

2019-05-09 01:02:48

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On 5/7/19 8:23 AM, shuah wrote:
> On 5/7/19 2:01 AM, Greg KH wrote:
>> On Mon, May 06, 2019 at 08:14:12PM -0700, Frank Rowand wrote:
>>> On 5/1/19 4:01 PM, Brendan Higgins wrote:
>>>> ## TLDR
>>>>
>>>> I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
>>>> 5.2.
>>>>
>>>> Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
>>>> we would merge through your tree when the time came? Am I remembering
>>>> correctly?
>>>>
>>>> ## Background
>>>>
>>>> This patch set proposes KUnit, a lightweight unit testing and mocking
>>>> framework for the Linux kernel.
>>>>
>>>> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
>>>> it does not require installing the kernel on a test machine or in a VM
>>>> and does not require tests to be written in userspace running on a host
>>>> kernel. Additionally, KUnit is fast: From invocation to completion KUnit
>>>> can run several dozen tests in under a second. Currently, the entire
>>>> KUnit test suite for KUnit runs in under a second from the initial
>>>> invocation (build time excluded).
>>>>
>>>> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
>>>> Googletest/Googlemock for C++. KUnit provides facilities for defining
>>>> unit test cases, grouping related test cases into test suites, providing
>>>> common infrastructure for running tests, mocking, spying, and much more.
>>>
>>> As a result of the emails replying to this patch thread, I am now
>>> starting to look at kselftest.  My level of understanding is based
>>> on some slide presentations, an LWN article, https://kselftest.wiki.kernel.org/
>>> and a _tiny_ bit of looking at kselftest code.
>>>
>>> tl;dr; I don't really understand kselftest yet.
>>>
>>>
>>> (1) why KUnit exists
>>>
>>>> ## What's so special about unit testing?
>>>>
>>>> A unit test is supposed to test a single unit of code in isolation,
>>>> hence the name. There should be no dependencies outside the control of
>>>> the test; this means no external dependencies, which makes tests orders
>>>> of magnitudes faster. Likewise, since there are no external dependencies,
>>>> there are no hoops to jump through to run the tests. Additionally, this
>>>> makes unit tests deterministic: a failing unit test always indicates a
>>>> problem. Finally, because unit tests necessarily have finer granularity,
>>>> they are able to test all code paths easily solving the classic problem
>>>> of difficulty in exercising error handling code.
>>>
>>> (2) KUnit is not meant to replace kselftest
>>>
>>>> ## Is KUnit trying to replace other testing frameworks for the kernel?
>>>>
>>>> No. Most existing tests for the Linux kernel are end-to-end tests, which
>>>> have their place. A well tested system has lots of unit tests, a
>>>> reasonable number of integration tests, and some end-to-end tests. KUnit
>>>> is just trying to address the unit test space which is currently not
>>>> being addressed.
>>>
>>> My understanding is that the intent of KUnit is to avoid booting a kernel on
>>> real hardware or in a virtual machine.  That seems to be a matter of semantics
>>> to me because isn't invoking a UML Linux just running the Linux kernel in
>>> a different form of virtualization?
>>>
>>> So I do not understand why KUnit is an improvement over kselftest.
>
> They are in two different categories. Kselftest falls into the black box
> regression test suite category, which is a collection of user-space tests
> with a few kernel test modules back-ending the tests in some cases.
>
> Kselftest can be used by both kernel developers and users and provides
> a good way to regression test releases in test rings.
>
> KUnit is in the white box category and is a better fit as a unit test
> framework for development, providing in-kernel testing. I wouldn't view
> them as one replacing the other. They just provide coverage for different areas
> of testing.

I don't see what about kselftest or KUnit is inherently black box or
white box. I would expect both frameworks to be used for either type
of testing.


> I wouldn't view KUnit as something that would be easily run in test rings for example.

I don't see why not.

-Frank

>
> Brendan, does that sound about right?
>
>>>
>>> It seems to me that KUnit is just another piece of infrastructure that I
>>> am going to have to be familiar with as a kernel developer.  More overhead,
>>> more information to stuff into my tiny little brain.
>>>
>>> I would guess that some developers will focus on just one of the two test
>>> environments (and some will focus on both), splitting the development
>>> resources instead of pooling them on a common infrastructure.
>
>>> What am I missing?
>>
>> kselftest provides no in-kernel framework for testing kernel code
>> specifically.  That should be what kunit provides, an "easy" way to
>> write in-kernel tests for things.
>>
>> Brendan, did I get it right?
> thanks,
> -- Shuah
> .
>

2019-05-09 01:16:28

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On 5/7/19 1:01 AM, Greg KH wrote:
> On Mon, May 06, 2019 at 08:14:12PM -0700, Frank Rowand wrote:
>> On 5/1/19 4:01 PM, Brendan Higgins wrote:
>>> ## TLDR
>>>
>>> I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
>>> 5.2.
>>>
>>> Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
>>> we would merge through your tree when the time came? Am I remembering
>>> correctly?
>>>
>>> ## Background
>>>
>>> This patch set proposes KUnit, a lightweight unit testing and mocking
>>> framework for the Linux kernel.
>>>
>>> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
>>> it does not require installing the kernel on a test machine or in a VM
>>> and does not require tests to be written in userspace running on a host
>>> kernel. Additionally, KUnit is fast: From invocation to completion KUnit
>>> can run several dozen tests in under a second. Currently, the entire
>>> KUnit test suite for KUnit runs in under a second from the initial
>>> invocation (build time excluded).
>>>
>>> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
>>> Googletest/Googlemock for C++. KUnit provides facilities for defining
>>> unit test cases, grouping related test cases into test suites, providing
>>> common infrastructure for running tests, mocking, spying, and much more.
>>
>> As a result of the emails replying to this patch thread, I am now
>> starting to look at kselftest. My level of understanding is based
>> on some slide presentations, an LWN article, https://kselftest.wiki.kernel.org/
>> and a _tiny_ bit of looking at kselftest code.
>>
>> tl;dr; I don't really understand kselftest yet.
>>
>>
>> (1) why KUnit exists
>>
>>> ## What's so special about unit testing?
>>>
>>> A unit test is supposed to test a single unit of code in isolation,
>>> hence the name. There should be no dependencies outside the control of
>>> the test; this means no external dependencies, which makes tests orders
>>> of magnitudes faster. Likewise, since there are no external dependencies,
>>> there are no hoops to jump through to run the tests. Additionally, this
>>> makes unit tests deterministic: a failing unit test always indicates a
>>> problem. Finally, because unit tests necessarily have finer granularity,
>>> they are able to test all code paths easily solving the classic problem
>>> of difficulty in exercising error handling code.
>>
>> (2) KUnit is not meant to replace kselftest
>>
>>> ## Is KUnit trying to replace other testing frameworks for the kernel?
>>>
>>> No. Most existing tests for the Linux kernel are end-to-end tests, which
>>> have their place. A well tested system has lots of unit tests, a
>>> reasonable number of integration tests, and some end-to-end tests. KUnit
>>> is just trying to address the unit test space which is currently not
>>> being addressed.
>>
>> My understanding is that the intent of KUnit is to avoid booting a kernel on
>> real hardware or in a virtual machine. That seems to be a matter of semantics
>> to me because isn't invoking a UML Linux just running the Linux kernel in
>> a different form of virtualization?
>>
>> So I do not understand why KUnit is an improvement over kselftest.
>>
>> It seems to me that KUnit is just another piece of infrastructure that I
>> am going to have to be familiar with as a kernel developer. More overhead,
>> more information to stuff into my tiny little brain.
>>
>> I would guess that some developers will focus on just one of the two test
>> environments (and some will focus on both), splitting the development
>> resources instead of pooling them on a common infrastructure.
>>
>> What am I missing?
>
> kselftest provides no in-kernel framework for testing kernel code
> specifically. That should be what kunit provides, an "easy" way to
> write in-kernel tests for things.

kselftest provides a mechanism for in-kernel tests via modules. For
example, see:

tools/testing/selftests/vm/run_vmtests invokes:
tools/testing/selftests/vm/test_vmalloc.sh
loads module:
test_vmalloc
(which is built from lib/test_vmalloc.c if CONFIG_TEST_VMALLOC)

A very quick and dirty search (likely to miss some tests) finds modules:

test_bitmap
test_bpf
test_firmware
test_printf
test_static_key_base
test_static_keys
test_user_copy
test_vmalloc
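
For illustration, the in-module half of that pattern looks roughly like the
sketch below (hypothetical code in the style of the lib/test_*.c modules listed
above, not taken from any of them); a kselftest shell script then loads the
module and checks the reported result:

#include <linux/module.h>
#include <linux/init.h>
#include <linux/printk.h>
#include <linux/errno.h>

static int __init example_selftest_init(void)
{
        int failures = 0;

        /* Real modules exercise kernel internals here (vmalloc, bitmaps, ...). */
        if (1 + 1 != 2)
                failures++;

        pr_info("example_selftest: %s (%d failures)\n",
                failures ? "FAIL" : "PASS", failures);
        return failures ? -EINVAL : 0;
}

static void __exit example_selftest_exit(void)
{
}

module_init(example_selftest_init);
module_exit(example_selftest_exit);
MODULE_LICENSE("GPL");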

-Frank

>
> Brendan, did I get it right?
>
> thanks,
>
> greg k-h
> .
>

2019-05-09 01:47:57

by Theodore Ts'o

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Wed, May 08, 2019 at 05:58:49PM -0700, Frank Rowand wrote:
>
> If KUnit is added to the kernel, and a subsystem that I am submitting
> code for has chosen to use KUnit instead of kselftest, then yes, I do
> *have* to use KUnit if my submission needs to contain a test for the
> code unless I want to convince the maintainer that somehow my case
> is special and I prefer to use kselftest instead of KUnit.

That's going to be between you and the maintainer. Today, if you want
to submit a substantive change to xfs or ext4, you're going to be
asked to create a test for that new feature using xfstests. It doesn't
matter that xfstests isn't in the kernel --- if that's what is
required by the maintainer.

> > supposed to be a simple way to run a large number of small tests
> > for specific small components in a system.
>
> kselftest also supports running a subset of tests. That subset of tests
> can also be a large number of small tests. There is nothing inherent
> in KUnit vs kselftest in this regard, as far as I am aware.

The big difference is that kselftests are driven by a C program that
runs in userspace. Take a look at tools/testing/selftests/filesystem/dnotify_test.c;
it has a main(int argc, char *argv) function.

In contrast, KUnit tests are fragments of C code which run in the kernel,
not in userspace. This allows us to test internal functions inside a
complex file system (such as the block allocator in ext4) directly.
This makes it *fundamentally* different from kselftest.
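
As a rough sketch of that shape (illustrative only, not code from the selftests
tree), a userspace kselftest is a standalone program whose main() pokes the
kernel through the system call interface and prints TAP-style results:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
        /* Exercise the kernel purely through syscalls, from userspace. */
        int fd = open("/tmp/kselftest-demo", O_CREAT | O_RDWR, 0600);

        printf("TAP version 13\n1..1\n");
        if (fd < 0) {
                printf("not ok 1 open() failed\n");
                return 1;
        }
        printf("ok 1 exercised open() through the syscall interface\n");
        close(fd);
        unlink("/tmp/kselftest-demo");
        return 0;
}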

Cheers,

- Ted

2019-05-09 02:01:27

by Theodore Ts'o

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Wed, May 08, 2019 at 05:43:35PM -0700, Frank Rowand wrote:
> kselftest provides a mechanism for in-kernel tests via modules. For
> example, see:
>
> tools/testing/selftests/vm/run_vmtests invokes:
> tools/testing/selftests/vm/test_vmalloc.sh
> loads module:
> test_vmalloc
> (which is built from lib/test_vmalloc.c if CONFIG_TEST_VMALLOC)

The majority of the kselftests are implemented as userspace programs.

You *can* run in-kernel tests using modules; but there is no framework
for the in-kernel code found in the test modules, which means each of
those test modules has to create its own in-kernel test
infrastructure.

That's much like saying you can use vice grips to turn a nut or
bolt-head. You *can*, but it might be that using a monkey wrench
would be a much better tool that is much easier.

What would you say to a wood worker objecting that a toolbox should
contain a monkey wrench because he already knows how to use vise
grips, and his tiny brain shouldn't be forced to learn how to use a
wrench when he knows how to use a vise grip, which is a perfectly good
tool?

If you want to use vice grips as a hammer, screwdriver, monkey wrench,
etc. there's nothing stopping you from doing that. But it's not fair
to object to other people who might want to use better tools.

The reality is that we have a lot of testing tools. It's not just
kselftests. There is xfstests for file system code, blktests for
block layer tests, etc. We use the right tool for the right job.

- Ted

2019-05-09 02:15:09

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On 5/8/19 6:58 PM, Theodore Ts'o wrote:
> On Wed, May 08, 2019 at 05:43:35PM -0700, Frank Rowand wrote:
>> kselftest provides a mechanism for in-kernel tests via modules. For
>> example, see:
>>
>> tools/testing/selftests/vm/run_vmtests invokes:
>> tools/testing/selftests/vm/test_vmalloc.sh
>> loads module:
>> test_vmalloc
>> (which is built from lib/test_vmalloc.c if CONFIG_TEST_VMALLOC)
>
> The majority of the kselftests are implemented as userspace programs.

Non-argument.


> You *can* run in-kernel test using modules; but there is no framework
> for the in-kernel code found in the test modules, which means each of
> the in-kernel code has to create their own in-kernel test
> infrastructure.

Why create an entirely new subsystem (KUnit) when you can add a header
file (and .c code as appropriate) that outputs properly TAP-formatted
results from kselftest kernel test modules?

There is already a multitude of in-kernel test modules used by
kselftest. It would be good if they all used a common TAP-compliant
mechanism to report results.
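
Concretely, the shared helper being suggested here could be as small as
something like the following sketch (hypothetical header name and macros;
nothing like this exists in the tree today):

/* include/linux/ktap.h (hypothetical name and location) */
#ifndef _LINUX_KTAP_H
#define _LINUX_KTAP_H

#include <linux/printk.h>

/* Emit a TAP plan and per-test results from an in-kernel test module. */
#define ktap_plan(n)            pr_info("1..%d\n", (n))
#define ktap_ok(n, name)        pr_info("ok %d %s\n", (n), (name))
#define ktap_not_ok(n, name)    pr_info("not ok %d %s\n", (n), (name))

#endif /* _LINUX_KTAP_H */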


> That's much like saying you can use vice grips to turn a nut or
> bolt-head. You *can*, but it might be that using a monkey wrench
> would be a much better tool that is much easier.
>
> What would you say to a wood worker objecting that a toolbox should
> contain a monkey wrench because he already knows how to use vise
> grips, and his tiny brain shouldn't be forced to learn how to use a
> wrench when he knows how to use a vise grip, which is a perfectly good
> tool?
>
> If you want to use vice grips as a hammer, screwdriver, monkey wrench,
> etc. there's nothing stopping you from doing that. But it's not fair
> to object to other people who might want to use better tools.
>
> The reality is that we have a lot of testing tools. It's not just
> kselftests. There is xfstests for file system code, blktests for
> block layer tests, etc. We use the right tool for the right job.

More specious arguments.

-Frank

>
> - Ted
>

2019-05-09 02:19:49

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On 5/8/19 6:44 PM, Theodore Ts'o wrote:
> On Wed, May 08, 2019 at 05:58:49PM -0700, Frank Rowand wrote:
>>
>> If KUnit is added to the kernel, and a subsystem that I am submitting
>> code for has chosen to use KUnit instead of kselftest, then yes, I do
>> *have* to use KUnit if my submission needs to contain a test for the
>> code unless I want to convince the maintainer that somehow my case
>> is special and I prefer to use kselftest instead of KUnit.
>
> That's going to be between you and the maintainer. Today, if you want
> to submit a substantive change to xfs or ext4, you're going to be
> asked to create a test for that new feature using xfstests. It doesn't
> matter that xfstests isn't in the kernel --- if that's what is
> required by the maintainer.

Yes, that is exactly what I was saying.

Please do not cut the pertinent parts of context that I am replying to.


>>> supposed to be a simple way to run a large number of small tests
>>> for specific small components in a system.
>>
>> kselftest also supports running a subset of tests. That subset of tests
>> can also be a large number of small tests. There is nothing inherent
>> in KUnit vs kselftest in this regard, as far as I am aware.


> The big difference is that kselftests are driven by a C program that
> runs in userspace. Take a look at tools/testing/selftests/filesystem/dnotify_test.c;
> it has a main(int argc, char *argv) function.
>
> In contrast, KUnit tests are fragments of C code which run in the kernel,
> not in userspace. This allows us to test internal functions inside a
> complex file system (such as the block allocator in ext4) directly.
> This makes it *fundamentally* different from kselftest.

No, totally incorrect. kselftest also supports in-kernel modules, as
I mentioned in another reply to this patch.

This is talking past each other a little bit, because your next reply
is a reply to my email about modules.

-Frank

>
> Cheers,
>
> - Ted
>

2019-05-09 03:24:14

by Theodore Ts'o

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Wed, May 08, 2019 at 07:13:59PM -0700, Frank Rowand wrote:
> > If you want to use vice grips as a hammer, screwdriver, monkey wrench,
> > etc. there's nothing stopping you from doing that. But it's not fair
> > to object to other people who might want to use better tools.
> >
> > The reality is that we have a lot of testing tools. It's not just
> > kselftests. There is xfstests for file system code, blktests for
> > block layer tests, etc. We use the right tool for the right job.
>
> More specious arguments.

Well, *I* don't think they are specious; so I think we're going to
have to agree to disagree.

Cheers,

- Ted

2019-05-09 05:34:46

by Randy Dunlap

[permalink] [raw]
Subject: Re: [PATCH v2 14/17] Documentation: kunit: add documentation for KUnit

Hi,

On 5/1/19 4:01 PM, Brendan Higgins wrote:
> Add documentation for KUnit, the Linux kernel unit testing framework.
> - Add intro and usage guide for KUnit
> - Add API reference
>
> Signed-off-by: Felix Guo <[email protected]>
> Signed-off-by: Brendan Higgins <[email protected]>
> ---
> Documentation/index.rst | 1 +
> Documentation/kunit/api/index.rst | 16 ++
> Documentation/kunit/api/test.rst | 15 +
> Documentation/kunit/faq.rst | 46 +++
> Documentation/kunit/index.rst | 80 ++++++
> Documentation/kunit/start.rst | 180 ++++++++++++
> Documentation/kunit/usage.rst | 447 ++++++++++++++++++++++++++++++
> 7 files changed, 785 insertions(+)
> create mode 100644 Documentation/kunit/api/index.rst
> create mode 100644 Documentation/kunit/api/test.rst
> create mode 100644 Documentation/kunit/faq.rst
> create mode 100644 Documentation/kunit/index.rst
> create mode 100644 Documentation/kunit/start.rst
> create mode 100644 Documentation/kunit/usage.rst
>

> diff --git a/Documentation/kunit/api/index.rst b/Documentation/kunit/api/index.rst
> new file mode 100644
> index 0000000000000..c31c530088153
> --- /dev/null
> +++ b/Documentation/kunit/api/index.rst
> @@ -0,0 +1,16 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +=============
> +API Reference
> +=============
> +.. toctree::
> +
> + test
> +
> +This section documents the KUnit kernel testing API. It is divided into 3
> +sections:
> +
> +================================= ==============================================
> +:doc:`test` documents all of the standard testing API
> + excluding mocking or mocking related features.
> +================================= ==============================================

What 3 sections does the above refer to? The other sections seem to be missing.



> diff --git a/Documentation/kunit/start.rst b/Documentation/kunit/start.rst
> new file mode 100644
> index 0000000000000..5cdba5091905e
> --- /dev/null
> +++ b/Documentation/kunit/start.rst
> @@ -0,0 +1,180 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +===============
> +Getting Started
> +===============
> +
> +Installing dependencies
> +=======================
> +KUnit has the same dependencies as the Linux kernel. As long as you can build
> +the kernel, you can run KUnit.
> +
> +KUnit Wrapper
> +=============
> +Included with KUnit is a simple Python wrapper that helps format the output to
> +easily use and read KUnit output. It handles building and running the kernel, as
> +well as formatting the output.
> +
> +The wrapper can be run with:
> +
> +.. code-block:: bash
> +
> + ./tools/testing/kunit/kunit.py
> +
> +Creating a kunitconfig
> +======================
> +The Python script is a thin wrapper around Kbuild as such, it needs to be

around Kbuild. As such,

> +configured with a ``kunitconfig`` file. This file essentially contains the
> +regular Kernel config, with the specific test targets as well.
> +
> +.. code-block:: bash
> +
> + git clone -b master https://kunit.googlesource.com/kunitconfig $PATH_TO_KUNITCONFIG_REPO
> + cd $PATH_TO_LINUX_REPO
> + ln -s $PATH_TO_KUNIT_CONFIG_REPO/kunitconfig kunitconfig
> +
> +You may want to add kunitconfig to your local gitignore.
> +
> +Verifying KUnit Works
> +-------------------------

I would expect Sphinx to complain about the underline length not being the
same as the header/title above it.

> +
> +To make sure that everything is set up correctly, simply invoke the Python
> +wrapper from your kernel repo:
> +
> +.. code-block:: bash
> +
> + ./tools/testing/kunit/kunit.py
> +
> +.. note::
> + You may want to run ``make mrproper`` first.
> +
> +If everything worked correctly, you should see the following:
> +
> +.. code-block:: bash
> +
> + Generating .config ...
> + Building KUnit Kernel ...
> + Starting KUnit Kernel ...
> +
> +followed by a list of tests that are run. All of them should be passing.
> +
> +.. note::
> + Because it is building a lot of sources for the first time, the ``Building
> + kunit kernel`` step may take a while.
> +
> +Writing your first test
> +==========================

underline length warning?

> +
> +In your kernel repo let's add some code that we can test. Create a file
> +``drivers/misc/example.h`` with the contents:
> +
> +.. code-block:: c
> +
> + int misc_example_add(int left, int right);
> +
> +create a file ``drivers/misc/example.c``:
> +
> +.. code-block:: c
> +
> + #include <linux/errno.h>
> +
> + #include "example.h"
> +
> + int misc_example_add(int left, int right)
> + {
> + return left + right;
> + }
> +
> +Now add the following lines to ``drivers/misc/Kconfig``:
> +
> +.. code-block:: kconfig
> +
> + config MISC_EXAMPLE
> + bool "My example"
> +
> +and the following lines to ``drivers/misc/Makefile``:
> +
> +.. code-block:: make
> +
> + obj-$(CONFIG_MISC_EXAMPLE) += example.o
> +
> +Now we are ready to write the test. The test will be in
> +``drivers/misc/example-test.c``:
> +
> +.. code-block:: c
> +
> + #include <kunit/test.h>
> + #include "example.h"
> +
> + /* Define the test cases. */
> +
> + static void misc_example_add_test_basic(struct kunit *test)
> + {
> + KUNIT_EXPECT_EQ(test, 1, misc_example_add(1, 0));
> + KUNIT_EXPECT_EQ(test, 2, misc_example_add(1, 1));
> + KUNIT_EXPECT_EQ(test, 0, misc_example_add(-1, 1));
> + KUNIT_EXPECT_EQ(test, INT_MAX, misc_example_add(0, INT_MAX));
> + KUNIT_EXPECT_EQ(test, -1, misc_example_add(INT_MAX, INT_MIN));
> + }
> +
> + static void misc_example_test_failure(struct kunit *test)
> + {
> + KUNIT_FAIL(test, "This test never passes.");
> + }
> +
> + static struct kunit_case misc_example_test_cases[] = {
> + KUNIT_CASE(misc_example_add_test_basic),
> + KUNIT_CASE(misc_example_test_failure),
> + {},
> + };
> +
> + static struct kunit_module misc_example_test_module = {
> + .name = "misc-example",
> + .test_cases = misc_example_test_cases,
> + };
> + module_test(misc_example_test_module);
> +
> +Now add the following to ``drivers/misc/Kconfig``:
> +
> +.. code-block:: kconfig
> +
> + config MISC_EXAMPLE_TEST
> + bool "Test for my example"
> + depends on MISC_EXAMPLE && KUNIT
> +
> +and the following to ``drivers/misc/Makefile``:
> +
> +.. code-block:: make
> +
> + obj-$(CONFIG_MISC_EXAMPLE_TEST) += example-test.o
> +
> +Now add it to your ``kunitconfig``:
> +
> +.. code-block:: none
> +
> + CONFIG_MISC_EXAMPLE=y
> + CONFIG_MISC_EXAMPLE_TEST=y
> +
> +Now you can run the test:
> +
> +.. code-block:: bash
> +
> + ./tools/testing/kunit/kunit.py
> +
> +You should see the following failure:
> +
> +.. code-block:: none
> +
> + ...
> + [16:08:57] [PASSED] misc-example:misc_example_add_test_basic
> + [16:08:57] [FAILED] misc-example:misc_example_test_failure
> + [16:08:57] EXPECTATION FAILED at drivers/misc/example-test.c:17
> + [16:08:57] This test never passes.
> + ...
> +
> +Congrats! You just wrote your first KUnit test!
> +
> +Next Steps
> +=============

underline length warning. (?)

> +* Check out the :doc:`usage` page for a more
> + in-depth explanation of KUnit.
> diff --git a/Documentation/kunit/usage.rst b/Documentation/kunit/usage.rst
> new file mode 100644
> index 0000000000000..5c83ea9e21bc5
> --- /dev/null
> +++ b/Documentation/kunit/usage.rst
> @@ -0,0 +1,447 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +=============
> +Using KUnit
> +=============

over/underline length warnings?

> +
> +The purpose of this document is to describe what KUnit is, how it works, how it
> +is intended to be used, and all the concepts and terminology that are needed to
> +understand it. This guide assumes a working knowledge of the Linux kernel and
> +some basic knowledge of testing.
> +
> +For a high level introduction to KUnit, including setting up KUnit for your
> +project, see :doc:`start`.
> +
> +Organization of this document
> +=================================

underline length? (and more below, but not being marked)

> +
> +This document is organized into two main sections: Testing and Isolating
> +Behavior. The first covers what a unit test is and how to use KUnit to write
> +them. The second covers how to use KUnit to isolate code and make it possible
> +to unit test code that was otherwise un-unit-testable.
> +
> +Testing
> +==========
> +
> +What is KUnit?
> +------------------
> +
> +"K" is short for "kernel" so "KUnit" is the "(Linux) Kernel Unit Testing
> +Framework." KUnit is intended first and foremost for writing unit tests; it is
> +general enough that it can be used to write integration tests; however, this is
> +a secondary goal. KUnit has no ambition of being the only testing framework for
> +the kernel; for example, it does not intend to be an end-to-end testing
> +framework.
> +
> +What is Unit Testing?
> +-------------------------


thanks.
--
~Randy

2019-05-09 11:57:09

by Knut Omang

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Wed, 2019-05-08 at 23:20 -0400, Theodore Ts'o wrote:
> On Wed, May 08, 2019 at 07:13:59PM -0700, Frank Rowand wrote:
> > > If you want to use vice grips as a hammer, screwdriver, monkey wrench,
> > > etc. there's nothing stopping you from doing that. But it's not fair
> > > to object to other people who might want to use better tools.
> > >
> > > The reality is that we have a lot of testing tools. It's not just
> > > kselftests. There is xfstests for file system code, blktests for
> > > block layer tests, etc. We use the right tool for the right job.
> >
> > More specious arguments.
>
> Well, *I* don't think they are specious; so I think we're going to
> have to agree to disagree.

Looking at both Frank's and Ted's arguments here, I don't think you
really disagree; I just think you have different classes of tests in mind.

In my view it's useful to think in terms of two main categories of
interesting unit tests for kernel code (using the term "unit test" pragmatically):

1) Tests that exercise typically algorithmic or intricate, complex
code with relatively few outside dependencies, or where the dependencies
are considered worth mocking, such as the basics of container data
structures or page table code. If I get you right, Ted, the tests
you refer to in this thread are such tests. I believe covering this space
is the goal Brendan has in mind for KUnit.

2) Tests that exercise interaction between a module under test and other
parts of the kernel, such as testing intricacies of the interaction of
a driver or file system with the rest of the kernel, and with hardware,
whether that is real hardware or a model/emulation.
Using your testing needs as an example again, Ted, from my shallow understanding,
you have such needs within the context of xfstests (https://github.com/tytso/xfstests)

To 1) I agree with Frank in that the problem with using UML is that you still have to
relate to the complexity of a kernel run time system, while what you really want for these
types of tests is just to compile a couple of kernel source files in a normal user land
context, to allow the use of Valgrind and other user space tools on the code. The
challenge is to get the code compiled in such an environment as it usually relies on
subtle kernel macros and definitions, which is why UML seems like such an attractive
solution. Like Frank I really see no big difference from a testing and debugging
perspective of UML versus running inside a Qemu/KVM process, and I think I have an idea
for a better solution:

In the early phases of the SIF project, which I mention below, I did a lot of
experimentation around this. My biggest challenge then was to test the driver
implementation of the page table handling for an Intel page table compatible
on-device MMU, using a mix of page sizes, but with a few subtle limitations in
the hardware. With some effort in code generation and heavy automated use of
compiler feedback, I was able to do that to great satisfaction, as it probably
saved the project a lot of time in debugging, and myself a lot of pain :)

To 2) most of the current xfstests (if not all?) are user space tests that do
not use extra test-specific kernel code, or test-specific changes to the
modules under test (am I right, Ted?), and I believe that's just as it should
be: if something can be exercised well enough from user space, then that's the
easier approach.

However, sometimes the test cannot be written easily without interacting
directly with internal kernel interfaces, or having such interaction would
greatly simplify or increase the precision of the test. This need was the
initial motivation for us to make KTF (https://github.com/oracle/ktf,
http://heim.ifi.uio.no/~knuto/ktf/index.html), which we are working on adapting
so that it fits naturally, and in the right way, as a kernel patch set.

We developed the SIF InfiniBand HCA driver
(https://github.com/oracle/linux-uek/tree/uek4/qu7/drivers/infiniband/hw/sif)
and associated user level libraries in what I like to call a "pragmatically
test driven" way. At the end of the project we had quite a few unit tests, but
only a small fraction of them were KTF tests; most of the testing needs were
covered by user land unit tests and higher level application testing.

To you Frank, and your concern about having to learn yet another tool with its
own syntax, I completely agree with you. We definitely would want to minimize
the need to learn new ways, which is why I think it is important to see the
whole complex of unit testing together, and at least make sure it works in a
unified and efficient way, both syntactically and operationally.

With KTF we focus on trying to make kernel testing as similar to, and as
integrated with, user space tests as possible, using similar test macros, and
also on not reinventing more wheels than necessary by basing reporting and test
execution on existing user land tools. KTF integrates with Googletest for this
functionality. This also makes the reporting format discussion here irrelevant
for KTF, as KTF supports whatever reporting format the user land tool supports -
Googletest, for instance, naturally supports pluggable reporting
implementations, and there already seems to be a TAP reporting extension out
there (I haven't tried it yet though).

Using and relating to an existing user land framework allows us to have a set of
tests that works the same way from a user/developer perspective,
but some of them are kernel only tests, some are ordinary user land
tests, exercising system call boundaries and other kernel
interfaces, and some are what we call "hybrid", where parts of
the test run in user mode and parts in kernel mode.

I hope we can discuss this complex of issues in more detail, for instance at
the testing and fuzzing workshop at LPC later this year, where I have proposed
a topic for it.

Thanks,
Knut




2019-05-09 13:38:32

by Theodore Ts'o

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Thu, May 09, 2019 at 01:52:15PM +0200, Knut Omang wrote:
> > 1) Tests that exercise typically algorithmic or intricate, complex
> code with relatively few outside dependencies, or where the dependencies
> are considered worth mocking, such as the basics of container data
> structures or page table code. If I get you right, Ted, the tests
> you refer to in this thread are such tests. I believe covering this space
> is the goal Brendan has in mind for KUnit.

Yes, that's correct. I'd also add that one of the key differences is
that it sounds like Frank and you are coming from the perspective of
testing *device drivers* where in general there aren't a lot of
complex code which is hardware independent. After all, the vast
majority of device drivers are primarily interface code to hardware,
with as much as possible abstracted away to common code. (Take, for
example, the model of the SCSI layer; or all of the kobject code.)

> 2) Tests that exercise interaction between a module under test and other
> parts of the kernel, such as testing intricacies of the interaction of
> a driver or file system with the rest of the kernel, and with hardware,
> whether that is real hardware or a model/emulation.
> Using your testing needs as example again, Ted, from my shallow understanding,
> you have such needs within the context of xfstests (https://github.com/tytso/xfstests)

Well, upstream for xfstests is git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git

The test framework where I can run 20 hours worth of xfstests
(multiple file system features enabled, multiple mount options, etc.)
in 3 hours of wall clock time using multiple cloud VMs is something
called gce-xfstests.

I also have kvm-xfstests, which optimizes for low test latency, where I
want to run one or a small number of tests with a minimum of
overhead --- gce startup and shutdown is around 2 minutes, whereas
kvm startup and shutdown is about 7 seconds. As far as I'm concerned,
7 seconds is still too slow, but that's the best I've been able to do
given all of the other things I want a test framework to do, including
archiving test results, parsing the test results so it's easy to
interpret, etc. Both kvm-xfstests and gce-xfstests are located at:

git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git

So if Frank's primary argument is "too many frameworks", it's already
too late. The block layer has a separate framework,
called blktests --- and yeah, it's a bit painful to launch or learn
how to set things up.

That's why I added support to run blktests using gce-xfstests and
kvm-xfstests, so that "gce-xfstests --blktests" or "kvm-xfstests
--xfstests" will pluck a kernel from your build tree, and launch at
test appliance VM using that kernel and run the block layer tests.

The point is we *already* have multiple test frameworks, which are
optimized for testing different parts of the kernel. And if you plan
to do a lot of work in these parts of the kernel, you're going to have
to learn how to use some other test framework other than kselftest.
Sorry, that's just the way it goes.

Of course, I'll accept trivial patches that haven't been tested using
xfstests --- but that's because I can trivially run the smoke test for
you. Of course, if I get a lot of patches from a contributor which
cause test regressions, I'll treat them much like someone who
contribute patches which fail to build. I'll apply pressure to the
contributor to actually build test, or run a ten minute kvm-xfstests
smoke test. Part of the reason why I feel comfortable to do this is
it's really easy to run the smoke test. There are pre-compiled test
appliances, and a lot of documentation:

https://github.com/tytso/xfstests-bld/blob/master/Documentation/kvm-quickstart.md

This is why I have close to zero sympathy to Frank's complaint that
extra test frameworks are a bad thing. To me, that's whining. I've
done a huge amount of work to meet contributors more than half-way.
The insistence that "There Must Be One", ala the Highlander movie, is
IMHO so wrong that it's not even close. Is it really that hard to do
a "git pull", download a test appliance, set up a config file to tell
kvm-xfstests where to find your build tree, and then run "kvm-xfstests
--smoke" or "gce-xfstests --smoke"? Cry me a river.

There are already multiple test frameworks, and if you expect to do a
lot of work in a particular subsystem, you'll be expected to use the
Maintainer's choice of tests. Deal with it. We do this so we can
scale to the number of contributors we have in our subsystem.

> To 1) I agree with Frank in that the problem with using UML is that you still have to
> relate to the complexity of a kernel run time system, while what you really want for these
> types of tests is just to compile a couple of kernel source files in a normal user land
> context, to allow the use of Valgrind and other user space tools on the code.

"Just compiling a couple of kernel source files in a normal user land"
is much harder than you think. It requires writing vast numbers of
mocking functions --- for a file system I would have to simulate the
block device layer, large portions of the VFS layer, the scheduler and
the locking layer if I want to test locking bugs, etc., etc. In
practice, UML itself is serving as mocking layer, by its mere
existence. So when Frank says that KUnit doesn't provide any mocking
functions, I don't at all agree. Using KUnit and UML makes testing
internal interfaces *far* simpler, especially if the comparison is
"just compile some kernel source files as part of a userspace test
program".

Perhaps your and Frank's experience is different --- perhaps that can
be explained by your past experience and interest in testing device
drivers as opposed to file systems.

The other thing I'd add is that at least for me, a really important
consideration is how quickly we can run tests. I consider
minimization of developer friction (e.g., all you need to do is
running "make ; kvm-xfstests --smoke" to run tests), and maximizing
developer velocity to be high priority goals. Developer velocity is
how quickly you can run the tests; ideally, less than 5-10 seconds.

And that's the other reason why I consider unit tests to be a
complement to integration tests. "gce-xfstests --smoke" takes 10-15
minutes. If I can have unit tests which take 5-15 seconds for a
smoke test of the specific part of ext4 that I am modifying (and often
with much better coverage than integration tests from userspace),
that's a really big deal. I can do this for e2fsprogs; but if I have
to launch a VM, the VM overhead pretty much eats all or most of that
time budget right there.

From looking at your documentation of KTF, you are targeting the use
case of continuous testing. That's a different testing scenario than
what I'm describing; with continuous testing, overhead measured in
minutes or even tens of minutes is not a big deal. But if you are
trying to do real-time testing as part of your development process ---
*real* Test Driven Development, then test latency is a really big
deal.

I'll grant that for people who are working on device drivers where
architecture dependencies are a big deal, building for an architecture
where you can run in a virtual environment or using test hardware is
going to be a better way to go. And Brendan has said he's willing to
look at adapting KUnit so it can be built for use in a virtual
environment to accommodate your requirements.

As far as I'm concerned, however, I would *not* be interested in KTF
unless you could demonstrate to me that launching a test VM, somehow
getting the kernel modules copied into the VM, and running the tests
as kernel modules, has zero overhead compared to using UML.

Ultimately, I'm a pragmatist. If KTF serves your needs best, good for
you. If other approaches are better for other parts of the kernel,
let's not try to impose a strict "There Must Be Only One" religion.
That's already not true today, and for good reason. There are many
different kinds of kernel code, and many different types of test
philosophies. Trying to force all kernel testing into a single
Procrustean Bed is simply not productive.

Regards,

- Ted

2019-05-09 14:51:59

by Knut Omang

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Thu, 2019-05-09 at 09:35 -0400, Theodore Ts'o wrote:
> On Thu, May 09, 2019 at 01:52:15PM +0200, Knut Omang wrote:
> > 1) Tests that exercise typically algorithmic or intricate, complex
> > code with relatively few outside dependencies, or where the dependencies
> > are considered worth mocking, such as the basics of container data
> > structures or page table code. If I get you right, Ted, the tests
> > you refer to in this thread are such tests. I believe covering this space
> > is the goal Brendan has in mind for KUnit.
>
> Yes, that's correct. I'd also add that one of the key differences is
> that it sounds like Frank and you are coming from the perspective of
> testing *device drivers* where in general there aren't a lot of
> complex code which is hardware independent. After all, the vast
> majority of device drivers are primarily interface code to hardware,
> with as much as possible abstracted away to common code. (Take, for
> example, the model of the SCSI layer; or all of the kobject code.)
>
> > 2) Tests that exercise interaction between a module under test and other
> > parts of the kernel, such as testing intricacies of the interaction of
> > a driver or file system with the rest of the kernel, and with hardware,
> > whether that is real hardware or a model/emulation.
> > Using your testing needs as example again, Ted, from my shallow understanding,
> > you have such needs within the context of xfstests (
> https://github.com/tytso/xfstests)
>
> Well, upstream for xfstests is git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git

Thanks for the correction!

> The test framework where I can run 20 hours worth of xfstests
> (multiple file system features enabled, multiple mount options, etc.)
> in 3 hours of wall clock time using multiple cloud VM is something
> called gce-xfstests.
>
> I also have kvm-xfstests, which optimizes low test latency, where I
> want to run a one or a small number of tests with a minimum of
> overhead --- gce startup and shutdown is around 2 minutes, where as
> kvm startup and shutdown is about 7 seconds. As far as I'm concerned,
> 7 seconds is still too slow, but that's the best I've been able to do
> given all of the other things I want a test framework to do, including
> archiving test results, parsing the test results so it's easy to
> interpret, etc. Both kvm-xfstests and gce-xfstests are located at:
>
> git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
>
> So if Frank's primary argument is "too many frameworks", it's already
> too late. The block layer has a separate framework,
> called blktests --- and yeah, it's a bit painful to launch or learn
> how to set things up.

I agree at that level - and the good thing is that there is a lot to learn
from looking at other people's ways - but working towards unification, rather
than even more similar but subtly different ways, is a good thing anyway!

> That's why I added support to run blktests using gce-xfstests and
> kvm-xfstests, so that "gce-xfstests --blktests" or "kvm-xfstests
> --xfstests" will pluck a kernel from your build tree, and launch at
> test appliance VM using that kernel and run the block layer tests.
>
> The point is we *already* have multiple test frameworks, which are
> optimized for testing different parts of the kernel. And if you plan
> to do a lot of work in these parts of the kernel, you're going to have
> to learn how to use some other test framework other than kselftest.
> Sorry, that's just the way it goes.
>
> Of course, I'll accept trivial patches that haven't been tested using
> xfstests --- but that's because I can trivially run the smoke test for
> you. Of course, if I get a lot of patches from a contributor which
> cause test regressions, I'll treat them much like someone who
> contribute patches which fail to build. I'll apply pressure to the
> contributor to actually build test, or run a ten minute kvm-xfstests
> smoke test. Part of the reason why I feel comfortable to do this is
> it's really easy to run the smoke test. There are pre-compiled test
> appliances, and a lot of documentation:
>
> https://github.com/tytso/xfstests-bld/blob/master/Documentation/kvm-quickstart.md
>
> This is why I have close to zero sympathy to Frank's complaint that
> extra test frameworks are a bad thing. To me, that's whining. I've
> done a huge amount of work to meet contributors more than half-way.
> The insistence that "There Must Be One", ala the Highlander movie, is
> IMHO so wrong that it's not even close. Is it really that hard to do
> a "git pull", download a test appliance, set up a config file to tell
> kvm-xfstests where to find your build tree, and then run "kvm-xfstests
> --smoke" or "gce-xfstests --smoke"? Cry me a river.
>
> There are already multiple test frameworks, and if you expect to do a
> lot of work in a particular subsystem, you'll be expected to use the
> Maintainer's choice of tests. Deal with it. We do this so we can
> scale to the number of contributors we have in our subsystem.
>
> > To 1) I agree with Frank in that the problem with using UML is that you still have to
> > relate to the complexity of a kernel run time system, while what you really want for
> these
> > types of tests is just to compile a couple of kernel source files in a normal user
> land
> > context, to allow the use of Valgrind and other user space tools on the code.
>
> "Just compiling a couple of kernel source files in a normal user land"
> is much harder than you think. It requires writing vast numbers of
> mocking functions --- for a file system I would have to simulate the
> block device layer, large portions of the VFS layer, the scheduler and
> the locking layer if I want to test locking bugs, etc., etc. In
> practice, UML itself is serving as mocking layer, by its mere
> existence.

I might not see the full picture here wrt file system testing,
but I do know exactly how difficult it is to do it for the ~29,000
lines of code InfiniBand driver I was working on, since I actually did it
with several versions of the kernel. Anyway, that's probably more of a
topic for a talk than an email thread :-)

> So when Frank says that KUnit doesn't provide any mocking
> functions, I don't at all agree. Using KUnit and UML makes testing
> internal interfaces *far* simpler, especially if the comparison is
> "just compile some kernel source files as part of a userspace test
> program".
>
> Perhaps your and Frank's experience is different --- perhaps that can
> be explained by your past experience and interest in testing device
> drivers as opposed to file systems.
>
> The other thing I'd add is that at least for me, a really important
> consideration is how quickly we can run tests. I consider
> minimization of developer friction (e.g., all you need to do is
> running "make ; kvm-xfstests --smoke" to run tests), and maximizing
> developer velocity to be high priority goals. Developer velocity is
> how quickly can you run the tests; ideally, less than 5-10 seconds.

I completely agree on that one. I think a fundamental feature of any
framework at this level is that it should be usable as developer tests as part
of an efficient work cycle.

> And that's the other reason why I consider unit tests to be a
> complement to integration tests. "gce-xfstests --smoke" takes 10-15
> minutes. If I can have unit tests which takes 5-15 seconds for a
> smoke test of the specific part of ext4 that I am modifying (and often
> with much better coverage than integration tests from userspace),
> that's at really big deal. I can do this for e2fsprogs; but if I have
> to launch a VM, the VM overhead pretty much eats all or most of that
> time budget right there.

This is exactly the way we work with KTF as well:
Change the kernel module under test, and/or the test code,
compile, unload/load modules and run the tests again, all within seconds.
The overhead of rebooting one or more VMs (network tests sometimes
require more than one node) or even a physical system,
if the issue cannot be reproduced without running non-virtualized,
would only be necessary if the test causes an oops or other crash
that prevents the unload/load path.

> From looking at your documentation of KTF, you are targetting the use
> case of continuous testing. That's a different testing scenario than
> what I'm describing; with continuous testing, overhead measured in
> minutes or even tens of minutes is not a big deal. But if you are
> trying to do real-time testing as part of your development process ---
> *real* Test Driven Development, then test latency is a really big
> deal.

My experience is that unless one can enforce tests to be run on
everyone else's changes as well, one often ends up having to pursue the bugs
introduced by others but caught by the tests, so I believe automation and
unit/low level testing really should go hand in hand. This is the reason for
the emphasis on automation in the KTF docs, but the primary driver for me has
always been as a developer toolkit.

> I'll grant that for people who are working on device drivers where
> architecture dependencies are a big deal, building for an architecture
> where you can run in a virtual environment or using test hardware is
> going to be a better way to go. And Brendan has said he's willing to
> look at adapting KUnit so it can be built for use in a virtual
> environment to accommodate your requirements.
>
> As far as I'm concerned, however, I would *not* be interested in KTF
> unless you could demonstrate to me that launching a test VM, somehow
> getting the kernel modules copied into the VM, and running the tests
> as kernel modules, has zero overhead compared to using UML.

As you alluded to above, the development cycle time is really the crucial
thing here - if what you get has more value to you, then you'd be willing
to wait just a few more seconds for it. And IMHO the really interesting notion
of time is the time from saving the changes until you can verify that the test
passes or, even more importantly, see why it failed.

> Ultimately, I'm a pragmatist. If KTF serves your needs best, good for
> you. If other approaches are better for other parts of the kernel,
> let's not try to impose a strict "There Must Be Only One" religion.
> That's already not true today, and for good reason. There are many
> different kinds of kernel code, and many different types of test
> philosophies. Trying to force all kernel testing into a single
> Procrustean Bed is simply not productive.

I definitely pragmatically prefer evolution over revolution myself,
no doubt about that, and I definitely appreciate this detailed view seen from the
filesystem side,

Thanks!
Knut

>
> Regards,
>
> - Ted

2019-05-09 15:21:17

by Masahiro Yamada

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Thu, May 2, 2019 at 8:02 AM Brendan Higgins
<[email protected]> wrote:
> [ cover letter quoted in full; snipped ]

The following is the log of 'git am' of this series.
I see several 'new blank line at EOF' warnings.



masahiro@pug:~/workspace/bsp/linux$ git am ~/Downloads/*.patch
Applying: kunit: test: add KUnit test runner core
Applying: kunit: test: add test resource management API
Applying: kunit: test: add string_stream a std::stream like string builder
.git/rebase-apply/patch:223: new blank line at EOF.
+
warning: 1 line adds whitespace errors.
Applying: kunit: test: add kunit_stream a std::stream like logger
Applying: kunit: test: add the concept of expectations
.git/rebase-apply/patch:475: new blank line at EOF.
+
warning: 1 line adds whitespace errors.
Applying: kbuild: enable building KUnit
Applying: kunit: test: add initial tests
.git/rebase-apply/patch:203: new blank line at EOF.
+
warning: 1 line adds whitespace errors.
Applying: kunit: test: add support for test abort
.git/rebase-apply/patch:453: new blank line at EOF.
+
warning: 1 line adds whitespace errors.
Applying: kunit: test: add tests for kunit test abort
Applying: kunit: test: add the concept of assertions
.git/rebase-apply/patch:518: new blank line at EOF.
+
warning: 1 line adds whitespace errors.
Applying: kunit: test: add test managed resource tests
Applying: kunit: tool: add Python wrappers for running KUnit tests
.git/rebase-apply/patch:457: new blank line at EOF.
+
warning: 1 line adds whitespace errors.
Applying: kunit: defconfig: add defconfigs for building KUnit tests
Applying: Documentation: kunit: add documentation for KUnit
.git/rebase-apply/patch:71: new blank line at EOF.
+
.git/rebase-apply/patch:209: new blank line at EOF.
+
.git/rebase-apply/patch:848: new blank line at EOF.
+
warning: 3 lines add whitespace errors.
Applying: MAINTAINERS: add entry for KUnit the unit testing framework
Applying: kernel/sysctl-test: Add null pointer test for sysctl.c:proc_dointvec()
Applying: MAINTAINERS: add proc sysctl KUnit test to PROC SYSCTL section






--
Best Regards
Masahiro Yamada

2019-05-09 17:01:56

by Bird, Tim

[permalink] [raw]
Subject: RE: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

> -----Original Message-----
> From: Theodore Ts'o
>
> On Thu, May 09, 2019 at 01:52:15PM +0200, Knut Omang wrote:
> > 1) Tests that exercises typically algorithmic or intricate, complex
> > code with relatively few outside dependencies, or where the
> dependencies
> > are considered worth mocking, such as the basics of container data
> > structures or page table code. If I get you right, Ted, the tests
> > you refer to in this thread are such tests. I believe covering this space
> > is the goal Brendan has in mind for KUnit.
>
> Yes, that's correct. I'd also add that one of the key differences is
> that it sounds like Frank and you are coming from the perspective of
> testing *device drivers* where in general there aren't a lot of
> complex code which is hardware independent.

Ummm. Not to speak for Frank, but he's representing the device tree
layer, which I'd argue sits exactly at the intersection of testing device drivers
AND lots of complex code which is hardware independent. So maybe his
case is special.

> After all, the vast
> majority of device drivers are primarily interface code to hardware,
> with as much as possible abstracted away to common code. (Take, for
> example, the model of the SCSI layer; or all of the kobject code.)
>
> > 2) Tests that exercises interaction between a module under test and other
> > parts of the kernel, such as testing intricacies of the interaction of
> > a driver or file system with the rest of the kernel, and with hardware,
> > whether that is real hardware or a model/emulation.
> > Using your testing needs as example again, Ted, from my shallow
> understanding,
> > you have such needs within the context of xfstests
> (https://github.com/tytso/xfstests)
>
> Well, upstream for xfstests is git://git.kernel.org/pub/scm/fs/xfs/xfstests-
> dev.git
>
> The test framework where I can run 20 hours worth of xfstests
> (multiple file system features enabled, multiple mount options, etc.)
> in 3 hours of wall clock time using multiple cloud VM is something
> called gce-xfstests.
>
> I also have kvm-xfstests, which optimizes low test latency, where I
> want to run a one or a small number of tests with a minimum of
> overhead --- gce startup and shutdown is around 2 minutes, whereas
> kvm startup and shutdown is about 7 seconds. As far as I'm concerned,
> 7 seconds is still too slow, but that's the best I've been able to do
> given all of the other things I want a test framework to do, including
> archiving test results, parsing the test results so it's easy to
> interpret, etc. Both kvm-xfstests and gce-xfstests are located at:
>
> git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
>
> So if Frank's primary argument is "too many frameworks", it's already
> too late. The block layer has a separate framework, called blktests
> --- and yeah, it's a bit painful to launch or learn
> how to set things up.
>
> That's why I added support to run blktests using gce-xfstests and
> kvm-xfstests, so that "gce-xfstests --blktests" or "kvm-xfstests
> --xfstests" will pluck a kernel from your build tree, and launch at
> test appliance VM using that kernel and run the block layer tests.
>
> The point is we *already* have multiple test frameworks, which are
> optimized for testing different parts of the kernel. And if you plan
> to do a lot of work in these parts of the kernel, you're going to have
> to learn how to use some other test framework other than kselftest.
> Sorry, that's just the way it goes.
>
> Of course, I'll accept trivial patches that haven't been tested using
> xfstests --- but that's because I can trivially run the smoke test for
> you. Of course, if I get a lot of patches from a contributor which
> cause test regressions, I'll treat them much like someone who
> contribute patches which fail to build. I'll apply pressure to the
> contributor to actually build test, or run a ten minute kvm-xfstests
> smoke test. Part of the reason why I feel comfortable to do this is
> it's really easy to run the smoke test. There are pre-compiled test
> appliances, and a lot of documentation:
>
> https://github.com/tytso/xfstests-bld/blob/master/Documentation/kvm-
> quickstart.md
>
> This is why I have close to zero sympathy to Frank's complaint that
> extra test frameworks are a bad thing. To me, that's whining. I've
> done a huge amount of work to meet contributors more than half-way.
> The insistence that "There Must Be One", ala the Highlander movie, is
> IMHO so wrong that it's not even close. Is it really that hard to do
> a "git pull", download a test appliance, set up a config file to tell
> kvm-xfstests where to find your build tree, and then run "kvm-xfstests
> --smoke" or "gce-xfstests --smoke"? Cry me a river.

Handling these types of things that are not "really that hard to do" is
exactly what meta-frameworks like KCI, Fuego, and LKFT are for.
For a core developer in a subsystem, learning a particular
specialized framework is OK. However, for someone doing integration
testing of the kernel (not a core developer
in a particular subsystem), having lots of different frameworks turns
into death by a thousand cuts. But we're working to fix that.
(Which reminds me that I have an outstanding action item to add an xfstest
test definition to Fuego. :-) )

>
> There are already multiple test frameworks, and if you expect to do a
> lot of work in a particular subsystem, you'll be expected to use the
> Maintainer's choice of tests. Deal with it. We do this so we can
> scale to the number of contributors we have in our subsystem.

This seems to me to be exactly backwards. You scale your contributors
by making it easier for them, which means adopting something already
well-known or established - not by being different.

I understand your vise grip metaphor, and agree with you. In my opinion
kselftest and kunit are optimized for different things, and are different tools
in the Linux kernel testing toolbox. But if you start having too many tools, or
the tools are too specialized, there are fewer people familiar with them and
ready to use them to help contribute.

>
> > To 1) I agree with Frank in that the problem with using UML is that you still
> have to
> > relate to the complexity of a kernel run time system, while what you really
> want for these
> > types of tests is just to compile a couple of kernel source files in a normal
> user land
> > context, to allow the use of Valgrind and other user space tools on the
> code.
>
> "Just compiling a couple of kernel source files in a normal user land"
> is much harder than you think. It requires writing vast numbers of
> mocking functions --- for a file system I would have to simulate the
> block device layer, large portions of the VFS layer, the scheduler and
> the locking layer if I want to test locking bugs, etc., etc. In
> practice, UML itself is serving as mocking layer, by its mere
> existence. So when Frank says that KUnit doesn't provide any mocking
> functions, I don't at all agree. Using KUnit and UML makes testing
> internal interfaces *far* simpler, especially if the comparison is
> "just compile some kernel source files as part of a userspace test
> program".

I had one thing I wanted to ask about here. You said previously that
you plan to use KUnit to test a complicated but hardware independent
part of the filesystem code. If you test only via UML, will that give you
coverage for non-x86 platforms? More specifically, will you get coverage
for 32-bit, for big-endian as well as little-endian, for weird architectures?
It seems like the software for these complicated sections of code is
subject to regressions due to toolchain issues as much as from coding errors.
That's why I was initially turned off when I heard that KUnit only planned
to support UML and not cross-compilation.

I'm not sure what the status is of UML for all the weird embedded processors
that get only cross-compiled and not natively-compiled, but there are multiple
reasons why UML is less commonly used in the embedded space.
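
To make that concern concrete with a purely hypothetical sketch (not taken
from any real driver): plain "char" is signed on x86 but unsigned on arm,
powerpc and s390, so something like the following would pass a unit test
run under UML on x86 and quietly return 0 on those other targets:

/* Hypothetical example: count bytes with the high bit set. */
static int count_high_bytes(const char *buf, int len)
{
        int i, n = 0;

        for (i = 0; i < len; i++)
                if (buf[i] < 0) /* signed on x86, unsigned on arm/ppc/s390 */
                        n++;
        return n;
}

The fix is obviously to use u8 or "unsigned char", but only a build and run
on (or at least a cross-compile for) one of those architectures would tell
you that.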

> Perhaps your and Frank's experience is different --- perhaps that can
> be explained by your past experience and interest in testing device
> drivers as opposed to file systems.
>
> The other thing I'd add is that at least for me, a really important
> consideration is how quickly we can run tests. I consider
> minimization of developer friction (e.g., all you need to do is
> running "make ; kvm-xfstests --smoke" to run tests), and maximizing
> developer velocity to be high priority goals. Developer velocity is
> how quickly can you run the tests; ideally, less than 5-10 seconds.
>
> And that's the other reason why I consider unit tests to be a
> complement to integration tests. "gce-xfstests --smoke" takes 10-15
> minutes. If I can have unit tests which takes 5-15 seconds for a
> smoke test of the specific part of ext4 that I am modifying (and often
> with much better coverage than integration tests from userspace),
> that's at really big deal. I can do this for e2fsprogs; but if I have
> to launch a VM, the VM overhead pretty much eats all or most of that
> time budget right there.
>
> From looking at your documentation of KTF, you are targetting the use
> case of continuous testing. That's a different testing scenario than
> what I'm describing; with continuous testing, overhead measured in
> minutes or even tens of minutes is not a big deal. But if you are
> trying to do real-time testing as part of your development process ---
> *real* Test Driven Development, then test latency is a really big
> deal.
>
> I'll grant that for people who are working on device drivers where
> architecture dependencies are a big deal, building for an architecture
> where you can run in a virtual environment or using test hardware is
> going to be a better way to go. And Brendan has said he's willing to
> look at adapting KUnit so it can be built for use in a virtual
> environment to accommodate your requirements.

This might solve my cross-compile needs, so that's good.

>
> As far as I'm concerned, however, I would *not* be interested in KTF
> unless you could demonstrate to me that launching at test VM, somehow
> getting the kernel modules copied into the VM, and running the tests
> as kernel modules, has zero overhead compared to using UML.
>
> Ultimately, I'm a pragmatist. If KTF serves your needs best, good for
> you. If other approaches are better for other parts of the kernel,
> let's not try to impose a strict "There Must Be Only One" religion.
> That's already not true today, and for good reason. There are many
> different kinds of kernel code, and many different types of test
> philosophies. Trying to force all kernel testing into a single
> Procrustean Bed is simply not productive.

Had to look up "Procrustean Bed" - great phrase. :-)

I'm not of the opinion that there must only be one test framework
in the kernel. But we should avoid unnecessary multiplication. Every
person is going to have a different idea for where the line of necessity
is drawn. My own opinion is that what KUnit is adding is different enough
from kselftest, that it's a valuable addition.

-- Tim


2019-05-09 17:41:33

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 14/17] Documentation: kunit: add documentation for KUnit

> Hi,
>
> On 5/1/19 4:01 PM, Brendan Higgins wrote:
> > Add documentation for KUnit, the Linux kernel unit testing framework.
> > - Add intro and usage guide for KUnit
> > - Add API reference
> >
> > Signed-off-by: Felix Guo <[email protected]>
> > Signed-off-by: Brendan Higgins <[email protected]>
> > ---
> > Documentation/index.rst | 1 +
> > Documentation/kunit/api/index.rst | 16 ++
> > Documentation/kunit/api/test.rst | 15 +
> > Documentation/kunit/faq.rst | 46 +++
> > Documentation/kunit/index.rst | 80 ++++++
> > Documentation/kunit/start.rst | 180 ++++++++++++
> > Documentation/kunit/usage.rst | 447 ++++++++++++++++++++++++++++++
> > 7 files changed, 785 insertions(+)
> > create mode 100644 Documentation/kunit/api/index.rst
> > create mode 100644 Documentation/kunit/api/test.rst
> > create mode 100644 Documentation/kunit/faq.rst
> > create mode 100644 Documentation/kunit/index.rst
> > create mode 100644 Documentation/kunit/start.rst
> > create mode 100644 Documentation/kunit/usage.rst
> >
>
> > diff --git a/Documentation/kunit/api/index.rst b/Documentation/kunit/api/index.rst
> > new file mode 100644
> > index 0000000000000..c31c530088153
> > --- /dev/null
> > +++ b/Documentation/kunit/api/index.rst
> > @@ -0,0 +1,16 @@
> > +.. SPDX-License-Identifier: GPL-2.0
> > +
> > +=============
> > +API Reference
> > +=============
> > +.. toctree::
> > +
> > + test
> > +
> > +This section documents the KUnit kernel testing API. It is divided into 3
> > +sections:
> > +
> > +================================= ==============================================
> > +:doc:`test` documents all of the standard testing API
> > + excluding mocking or mocking related features.
> > +================================= ==============================================
>
> What 3 sections does the above refer to? seems to be missing.

Whoops, that references documentation added in a later patch (not
included in this patchset). Thanks for pointing this out, will fix in
next revision.

> > diff --git a/Documentation/kunit/start.rst b/Documentation/kunit/start.rst
> > new file mode 100644
> > index 0000000000000..5cdba5091905e
> > --- /dev/null
> > +++ b/Documentation/kunit/start.rst
> > @@ -0,0 +1,180 @@
> > +.. SPDX-License-Identifier: GPL-2.0
> > +
> > +===============
> > +Getting Started
> > +===============
> > +
> > +Installing dependencies
> > +=======================
> > +KUnit has the same dependencies as the Linux kernel. As long as you can build
> > +the kernel, you can run KUnit.
> > +
> > +KUnit Wrapper
> > +=============
> > +Included with KUnit is a simple Python wrapper that helps format the output to
> > +easily use and read KUnit output. It handles building and running the kernel, as
> > +well as formatting the output.
> > +
> > +The wrapper can be run with:
> > +
> > +.. code-block:: bash
> > +
> > + ./tools/testing/kunit/kunit.py
> > +
> > +Creating a kunitconfig
> > +======================
> > +The Python script is a thin wrapper around Kbuild as such, it needs to be
>
> around Kbuild. As such,
>
> > +configured with a ``kunitconfig`` file. This file essentially contains the
> > +regular Kernel config, with the specific test targets as well.
> > +
> > +.. code-block:: bash
> > +
> > + git clone -b master https://kunit.googlesource.com/kunitconfig $PATH_TO_KUNITCONFIG_REPO
> > + cd $PATH_TO_LINUX_REPO
> > + ln -s $PATH_TO_KUNIT_CONFIG_REPO/kunitconfig kunitconfig
> > +
> > +You may want to add kunitconfig to your local gitignore.
> > +
> > +Verifying KUnit Works
> > +-------------------------
>
> I would expect Sphinx to complain about the underline length not being the
> same as the header/title above it.

Hmmm...I am pretty sure it wasn't complaining to me, but that might
just be because I didn't build with the right verbosity options or
something.

My experience mostly comes from Markdown which doesn't care about this.

In any case, after some random spot checks it looks like everyone else
uniformly keeps the underline beneath a section title the same length as
the title itself. So it looks like I need to fix this regardless.

Will fix in the next revision. Thanks for pointing this out!

> > +
> > +To make sure that everything is set up correctly, simply invoke the Python
> > +wrapper from your kernel repo:
> > +
> > +.. code-block:: bash
> > +
> > + ./tools/testing/kunit/kunit.py
> > +
> > +.. note::
> > + You may want to run ``make mrproper`` first.
> > +
> > +If everything worked correctly, you should see the following:
> > +
> > +.. code-block:: bash
> > +
> > + Generating .config ...
> > + Building KUnit Kernel ...
> > + Starting KUnit Kernel ...
> > +
> > +followed by a list of tests that are run. All of them should be passing.
> > +
> > +.. note::
> > + Because it is building a lot of sources for the first time, the ``Building
> > + kunit kernel`` step may take a while.
> > +
> > +Writing your first test
> > +==========================
>
> underline length warning?
>
> > +
> > +In your kernel repo let's add some code that we can test. Create a file
> > +``drivers/misc/example.h`` with the contents:
> > +
> > +.. code-block:: c
> > +
> > + int misc_example_add(int left, int right);
> > +
> > +create a file ``drivers/misc/example.c``:
> > +
> > +.. code-block:: c
> > +
> > + #include <linux/errno.h>
> > +
> > + #include "example.h"
> > +
> > + int misc_example_add(int left, int right)
> > + {
> > + return left + right;
> > + }
> > +
> > +Now add the following lines to ``drivers/misc/Kconfig``:
> > +
> > +.. code-block:: kconfig
> > +
> > + config MISC_EXAMPLE
> > + bool "My example"
> > +
> > +and the following lines to ``drivers/misc/Makefile``:
> > +
> > +.. code-block:: make
> > +
> > + obj-$(CONFIG_MISC_EXAMPLE) += example.o
> > +
> > +Now we are ready to write the test. The test will be in
> > +``drivers/misc/example-test.c``:
> > +
> > +.. code-block:: c
> > +
> > + #include <kunit/test.h>
> > + #include "example.h"
> > +
> > + /* Define the test cases. */
> > +
> > + static void misc_example_add_test_basic(struct kunit *test)
> > + {
> > + KUNIT_EXPECT_EQ(test, 1, misc_example_add(1, 0));
> > + KUNIT_EXPECT_EQ(test, 2, misc_example_add(1, 1));
> > + KUNIT_EXPECT_EQ(test, 0, misc_example_add(-1, 1));
> > + KUNIT_EXPECT_EQ(test, INT_MAX, misc_example_add(0, INT_MAX));
> > + KUNIT_EXPECT_EQ(test, -1, misc_example_add(INT_MAX, INT_MIN));
> > + }
> > +
> > + static void misc_example_test_failure(struct kunit *test)
> > + {
> > + KUNIT_FAIL(test, "This test never passes.");
> > + }
> > +
> > + static struct kunit_case misc_example_test_cases[] = {
> > + KUNIT_CASE(misc_example_add_test_basic),
> > + KUNIT_CASE(misc_example_test_failure),
> > + {},
> > + };
> > +
> > + static struct kunit_module misc_example_test_module = {
> > + .name = "misc-example",
> > + .test_cases = misc_example_test_cases,
> > + };
> > + module_test(misc_example_test_module);
> > +
> > +Now add the following to ``drivers/misc/Kconfig``:
> > +
> > +.. code-block:: kconfig
> > +
> > + config MISC_EXAMPLE_TEST
> > + bool "Test for my example"
> > + depends on MISC_EXAMPLE && KUNIT
> > +
> > +and the following to ``drivers/misc/Makefile``:
> > +
> > +.. code-block:: make
> > +
> > + obj-$(CONFIG_MISC_EXAMPLE_TEST) += example-test.o
> > +
> > +Now add it to your ``kunitconfig``:
> > +
> > +.. code-block:: none
> > +
> > + CONFIG_MISC_EXAMPLE=y
> > + CONFIG_MISC_EXAMPLE_TEST=y
> > +
> > +Now you can run the test:
> > +
> > +.. code-block:: bash
> > +
> > + ./tools/testing/kunit/kunit.py
> > +
> > +You should see the following failure:
> > +
> > +.. code-block:: none
> > +
> > + ...
> > + [16:08:57] [PASSED] misc-example:misc_example_add_test_basic
> > + [16:08:57] [FAILED] misc-example:misc_example_test_failure
> > + [16:08:57] EXPECTATION FAILED at drivers/misc/example-test.c:17
> > + [16:08:57] This test never passes.
> > + ...
> > +
> > +Congrats! You just wrote your first KUnit test!
> > +
> > +Next Steps
> > +=============
>
> underline length warning. (?)
>
> > +* Check out the :doc:`usage` page for a more
> > + in-depth explanation of KUnit.
> > diff --git a/Documentation/kunit/usage.rst b/Documentation/kunit/usage.rst
> > new file mode 100644
> > index 0000000000000..5c83ea9e21bc5
> > --- /dev/null
> > +++ b/Documentation/kunit/usage.rst
> > @@ -0,0 +1,447 @@
> > +.. SPDX-License-Identifier: GPL-2.0
> > +
> > +=============
> > +Using KUnit
> > +=============
>
> over/underline length warnings?
>
> > +
> > +The purpose of this document is to describe what KUnit is, how it works, how it
> > +is intended to be used, and all the concepts and terminology that are needed to
> > +understand it. This guide assumes a working knowledge of the Linux kernel and
> > +some basic knowledge of testing.
> > +
> > +For a high level introduction to KUnit, including setting up KUnit for your
> > +project, see :doc:`start`.
> > +
> > +Organization of this document
> > +=================================
>
> underline length? (and more below, but not being marked)
>
> > +
> > +This document is organized into two main sections: Testing and Isolating
> > +Behavior. The first covers what a unit test is and how to use KUnit to write
> > +them. The second covers how to use KUnit to isolate code and make it possible
> > +to unit test code that was otherwise un-unit-testable.
> > +
> > +Testing
> > +==========
> > +
> > +What is KUnit?
> > +------------------
> > +
> > +"K" is short for "kernel" so "KUnit" is the "(Linux) Kernel Unit Testing
> > +Framework." KUnit is intended first and foremost for writing unit tests; it is
> > +general enough that it can be used to write integration tests; however, this is
> > +a secondary goal. KUnit has no ambition of being the only testing framework for
> > +the kernel; for example, it does not intend to be an end-to-end testing
> > +framework.
> > +
> > +What is Unit Testing?
> > +-------------------------

Thanks!

2019-05-09 17:44:23

by Daniel Vetter

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Thu, May 9, 2019 at 7:00 PM <[email protected]> wrote:
> > -----Original Message-----
> > From: Theodore Ts'o
> >
> > On Thu, May 09, 2019 at 01:52:15PM +0200, Knut Omang wrote:
> > > 1) Tests that exercises typically algorithmic or intricate, complex
> > > code with relatively few outside dependencies, or where the
> > dependencies
> > > are considered worth mocking, such as the basics of container data
> > > structures or page table code. If I get you right, Ted, the tests
> > > you refer to in this thread are such tests. I believe covering this space
> > > is the goal Brendan has in mind for KUnit.
> >
> > Yes, that's correct. I'd also add that one of the key differences is
> > that it sounds like Frank and you are coming from the perspective of
> > testing *device drivers* where in general there aren't a lot of
> > complex code which is hardware independent.
>
> Ummm. Not to speak for Frank, but he's representing the device tree
> layer, which I'd argue sits exactly at the intersection of testing device drivers
> AND lots of complex code which is hardware independent. So maybe his
> case is special.

Jumping in with a pure device driver hat: We already have ad-hoc unit
tests in drivers/gpu, which somewhat shoddily integrate into
kselftests and our own gpu test suite from userspace. We'd like to do
a lot more in this area (there's enormous amounts of code in a gpu
driver that's worth testing on its own, or against a mocked model of a
part of the real hw), and I think a unit test framework for the entire
kernel would be great. Plus gpu/drm isn't the only subsystem by far
that already has a home-grown solution. So it's actually worse than
what Ted said: We don't just have a multitude of test frameworks
already, we have a multitude of ad-hoc unit test frameworks, each with
their own way to run tests, write tests and mock parts of the system.
Kunit hopefully helps us to standardize more in this area. I do plan
to look into converting all the drm selftest we have already as soon
as this lands (and as soon as I find some time ...).
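
Just to sketch what I mean (purely illustrative; fake_clip_width() below is
a stand-in helper, not real drm code), a converted check could end up
looking roughly like this with the proposed API:

#include <kunit/test.h>

/* Stand-in for a small, hardware-independent helper worth unit testing. */
static int fake_clip_width(int x1, int x2, int max)
{
        if (x1 < 0)
                x1 = 0;
        if (x2 > max)
                x2 = max;
        return x2 > x1 ? x2 - x1 : 0;
}

static void drm_clip_width_test(struct kunit *test)
{
        KUNIT_EXPECT_EQ(test, 10, fake_clip_width(0, 10, 100));
        KUNIT_EXPECT_EQ(test, 100, fake_clip_width(-5, 200, 100));
        KUNIT_EXPECT_EQ(test, 0, fake_clip_width(50, 40, 100));
}

plus the usual kunit_case/kunit_module boilerplate from the documentation
patch. The interesting part for us is hiding the hardware-facing pieces
behind small helpers like that and driving them from one common runner
instead of N home-grown ones.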

Cheers, Daniel

> < snip rest of quoted message >


--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

2019-05-09 18:14:49

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On 5/9/19 10:00 AM, [email protected] wrote:
>> -----Original Message-----
>> From: Theodore Ts'o
>>

< massive snip >

I'll reply in more detail to some other earlier messages in this thread
later.

This reply is an attempt to return to the intent of my original reply to
patch 0 of this series.


>> Ultimately, I'm a pragmatist. If KTF serves your needs best, good for
>> you. If other approaches are better for other parts of the kernel,
>> let's not try to impose a strict "There Must Be Only One" religion.
>> That's already not true today, and for good reason. There are many
>> different kinds of kernel code, and many different types of test
>> philosophies. Trying to force all kernel testing into a single
>> Procrustean Bed is simply not productive.
>
> Had to look up "Procrustean Bed" - great phrase. :-)
>
> I'm not of the opinion that there must only be one test framework
> in the kernel. But we should avoid unnecessary multiplication. Every
> person is going to have a different idea for where the line of necessity
> is drawn. My own opinion is that what KUnit is adding is different enough
> from kselftest, that it's a valuable addition.
>
> -- Tim

My first reply to patch 0 was in the context of knowing next to nothing
about kselftest. My level of understanding was essentially from slideware.
(As the thread progressed, I dug a little deeper into kselftest, so now have
a slightly better, though still fairly surface understanding of kselftest).

Maybe I did not explain myself clearly enough in that reply. I will try
again here.

Patch 0 provided a one paragraph explanation of why KUnit exists, titled

"## What's so special about unit testing?"

Patch 0 also provided a statement that it is not meant to replace
kselftest, in a paragraph titled

"## Is KUnit trying to replace other testing frameworks for the kernel?"

I stated:

"My understanding is that the intent of KUnit is to avoid booting a kernel on
real hardware or in a virtual machine. That seems to be a matter of semantics
to me because isn't invoking a UML Linux just running the Linux kernel in
a different form of virtualization?

So I do not understand why KUnit is an improvement over kselftest.

...

What am I missing?"


I was looking for a fuller, better explanation than was given in patch 0
of how KUnit provides something that is different than what kselftest
provides for creating unit tests for kernel code.

New question "(2)":

(2) If KUnit provides something unique, then the obvious follow-on
question would be: does it make sense to (a) integrate that feature into
kselftest, (b) integrate the current kselftest in-kernel test functionality
into KUnit and convert the in-kernel test portion of kselftest to use
KUnit, or (c) decide that KUnit and kselftest are so different that they
need to remain separate features.

***** Please do not reply to this email with a discussion of "(2)".
***** Such a discussion is premature if there is not an answer
***** to my first question.

Observation (3), rephrased:

(3) I also brought up the issue that if option (c) was the answer to
question "(2)" (instead of either option (a) or option (b)) then this
is extra overhead for any developer or maintainer involved in different
subsystems that use the different frameworks. I intended this as one
possible motivation for why my first question mattered.

Ted grabbed hold of this issue and basically ignored what I intended
to be my core question. Nobody else has answered my core question,
though Knut's first reply did manage to get somewhere near the
intent of my core question.

***** Please do not reply to this email with a discussion of "(3)".
***** Such a discussion is premature if there is not an answer
***** to my first question.
*****
***** Also that discussion is already occurring in this thread,
***** feel free to discuss it there if you want, even though I
***** feel such discussion is premature.

I still do not have an answer to my original question.

-Frank

2019-05-09 21:45:22

by Theodore Ts'o

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Thu, May 09, 2019 at 11:12:12AM -0700, Frank Rowand wrote:
>
> "My understanding is that the intent of KUnit is to avoid booting a kernel on
> real hardware or in a virtual machine. That seems to be a matter of semantics
> to me because isn't invoking a UML Linux just running the Linux kernel in
> a different form of virtualization?
>
> So I do not understand why KUnit is an improvement over kselftest.
>
> ...
>
> What am I missing?"

One major difference: kselftest requires a userspace environment; it
starts systemd, requires a root file system from which you can load
modules, etc. Kunit doesn't require a root file system; doesn't
require that you start systemd; doesn't allow you to run arbitrary
perl, python, bash, etc. scripts. As such, it's much lighter weight
than kselftest, and will have much less overhead before you can start
running tests. So it's not really the same kind of virtualization.

Does this help?

- Ted

2019-05-09 22:23:38

by Logan Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework



On 2019-05-09 3:42 p.m., Theodore Ts'o wrote:
> On Thu, May 09, 2019 at 11:12:12AM -0700, Frank Rowand wrote:
>>
>> "My understanding is that the intent of KUnit is to avoid booting a kernel on
>> real hardware or in a virtual machine. That seems to be a matter of semantics
>> to me because isn't invoking a UML Linux just running the Linux kernel in
>> a different form of virtualization?
>>
>> So I do not understand why KUnit is an improvement over kselftest.
>>
>> ...
>>
>> What am I missing?"
>
> One major difference: kselftest requires a userspace environment; it
> starts systemd, requires a root file system from which you can load
> modules, etc. Kunit doesn't require a root file system; doesn't
> require that you start systemd; doesn't allow you to run arbitrary
> perl, python, bash, etc. scripts. As such, it's much lighter weight
> than kselftest, and will have much less overhead before you can start
> running tests. So it's not really the same kind of virtualization.

I largely agree with everything Ted has said in this thread, but I
wonder if we are conflating two different ideas, which is causing an
impasse. From what I see, Kunit actually provides two different things:

1) An execution environment that can run tests from the kernel source
very quickly in userspace. This speeds up the tests and gives a lot of
benefit to developers using those tests because they can get feedback on
their code changes a *lot* quicker.

2) A framework to write unit tests that provides a lot of the same
facilities as other common unit testing frameworks from userspace (ie. a
runner that runs a list of tests and a bunch of helpers such as
KUNIT_EXPECT_* to simplify test passes and failures).

The first item from Kunit is novel and I see absolutely no overlap with
anything kselftest does. It's also the valuable thing I'd like to see
merged and grow.

The second item, arguably, does have significant overlap with kselftest.
Whether you are running short tests in a light weight UML environment or
higher level tests in an heavier VM the two could be using the same
framework for writing or defining in-kernel tests. It *may* also be
valuable for some people to be able to run all the UML tests in the
heavy VM environment along side other higher level tests.

Looking at the selftests tree in the repo, we already have similar items
to what Kunit is adding as I described in point (2) above.
kselftest_harness.h contains macros like EXPECT_* and ASSERT_* with very
similar intentions to the new KUNIT_EXPECT_* and KUNIT_ASSERT_* macros.

However, the number of users of this harness appears to be quite small.
Most of the code in the selftests tree seems to be a random mishmash of
scripts and userspace code so it's not hard to see it as something
completely different from the new Kunit:

$ git grep --files-with-matches kselftest_harness.h *
Documentation/dev-tools/kselftest.rst
MAINTAINERS
tools/testing/selftests/kselftest_harness.h
tools/testing/selftests/net/tls.c
tools/testing/selftests/rtc/rtctest.c
tools/testing/selftests/seccomp/Makefile
tools/testing/selftests/seccomp/seccomp_bpf.c
tools/testing/selftests/uevent/Makefile
tools/testing/selftests/uevent/uevent_filtering.c

Thus, I can personally see a lot of value in integrating the kunit test
framework with this kselftest harness. There's only a small number of
users of the kselftest harness today, so one way or another it seems
like getting this integrated early would be a good idea. Letting Kunit
and Kselftests progress independently for a few years will only make
this worse and may become something we end up regretting.
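
Just to make the overlap concrete, here's a rough sketch (simplified and
abbreviated, not lifted verbatim from either tree) of the same trivial
check written against both sets of macros:

/* Userspace, with tools/testing/selftests/kselftest_harness.h: */

#include "../kselftest_harness.h"

TEST(add_basic)
{
        EXPECT_EQ(3, 1 + 2);
        EXPECT_EQ(0, -1 + 1);
}

TEST_HARNESS_MAIN

/* In-kernel, with the proposed <kunit/test.h>: */

#include <kunit/test.h>

static void add_basic_test(struct kunit *test)
{
        KUNIT_EXPECT_EQ(test, 3, 1 + 2);
        KUNIT_EXPECT_EQ(test, 0, -1 + 1);
}

static struct kunit_case add_test_cases[] = {
        KUNIT_CASE(add_basic_test),
        {},
};

static struct kunit_module add_test_module = {
        .name = "add-example",
        .test_cases = add_test_cases,
};
module_test(add_test_module);

The boilerplate differs, but the assertion vocabulary is nearly identical,
which is why aligning (or eventually sharing) the two seems worth sorting
out early.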

Logan







2019-05-09 23:33:54

by Theodore Ts'o

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
>
> The second item, arguably, does have significant overlap with kselftest.
> Whether you are running short tests in a light weight UML environment or
> higher level tests in an heavier VM the two could be using the same
> framework for writing or defining in-kernel tests. It *may* also be valuable
> for some people to be able to run all the UML tests in the heavy VM
> environment along side other higher level tests.
>
> Looking at the selftests tree in the repo, we already have similar items to
> what Kunit is adding as I described in point (2) above. kselftest_harness.h
> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
>
> However, the number of users of this harness appears to be quite small. Most
> of the code in the selftests tree seems to be a random mismash of scripts
> and userspace code so it's not hard to see it as something completely
> different from the new Kunit:
>
> $ git grep --files-with-matches kselftest_harness.h *

To the extent that we can unify how tests are written, I agree that
this would be a good thing. However, you should note that
kselftest_harness.h currently assumes that it will be included in
userspace programs. This is most obviously seen if you look closely
at the functions defined in the header file, which make calls to
fork(), abort() and fprintf().

So Kunit can't reuse kselftest_harness.h unmodified. And whether or
not the actual implementation of the header file can be reused or
refactored, making the unit tests use the same or similar syntax would
be a good thing.

Cheers,

- Ted

2019-05-09 23:42:34

by Logan Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework



On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
>>
>> The second item, arguably, does have significant overlap with kselftest.
>> Whether you are running short tests in a light weight UML environment or
>> higher level tests in an heavier VM the two could be using the same
>> framework for writing or defining in-kernel tests. It *may* also be valuable
>> for some people to be able to run all the UML tests in the heavy VM
>> environment along side other higher level tests.
>>
>> Looking at the selftests tree in the repo, we already have similar items to
>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
>> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
>>
>> However, the number of users of this harness appears to be quite small. Most
>> of the code in the selftests tree seems to be a random mismash of scripts
>> and userspace code so it's not hard to see it as something completely
>> different from the new Kunit:
>>
>> $ git grep --files-with-matches kselftest_harness.h *
>
> To the extent that we can unify how tests are written, I agree that
> this would be a good thing. However, you should note that
> kselftest_harness.h is currently assums that it will be included in
> userspace programs. This is most obviously seen if you look closely
> at the functions defined in the header files which makes calls to
> fork(), abort() and fprintf().

Ah, yes. I obviously did not dig deep enough. Using kunit for in-kernel
tests and kselftest_harness for userspace tests seems like a sensible
line to draw to me. Trying to unify kernel and userspace here sounds
like it could be difficult so it's probably not worth forcing the issue
unless someone wants to do some really fancy work to get it done.

Based on some of the other commenters, I was under the impression that
kselftests had in-kernel tests but I'm not sure where or if they exist.
If they do exist, it seems like it would make sense to convert those to
kunit and have Kunit tests runnable in a VM or bare-metal instance.

Logan

2019-05-10 03:48:25

by Masahiro Yamada

[permalink] [raw]
Subject: Re: [PATCH v2 06/17] kbuild: enable building KUnit

On Thu, May 2, 2019 at 8:03 AM Brendan Higgins
<[email protected]> wrote:
>
> Add KUnit to root Kconfig and Makefile allowing it to actually be built.
>
> Signed-off-by: Brendan Higgins <[email protected]>

You need to make sure not to break git-bisectability.


With this commit, I see build error.

CC kunit/test.o
kunit/test.c:11:10: fatal error: os.h: No such file or directory
#include <os.h>
^~~~~~
compilation terminated.
make[1]: *** [scripts/Makefile.build:279: kunit/test.o] Error 1
make: *** [Makefile:1763: kunit/] Error 2








> ---
> Kconfig | 2 ++
> Makefile | 2 +-
> 2 files changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/Kconfig b/Kconfig
> index 48a80beab6853..10428501edb78 100644
> --- a/Kconfig
> +++ b/Kconfig
> @@ -30,3 +30,5 @@ source "crypto/Kconfig"
> source "lib/Kconfig"
>
> source "lib/Kconfig.debug"
> +
> +source "kunit/Kconfig"
> diff --git a/Makefile b/Makefile
> index 2b99679148dc7..77368f498d84c 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -969,7 +969,7 @@ endif
> PHONY += prepare0
>
> ifeq ($(KBUILD_EXTMOD),)
> -core-y += kernel/ certs/ mm/ fs/ ipc/ security/ crypto/ block/
> +core-y += kernel/ certs/ mm/ fs/ ipc/ security/ crypto/ block/ kunit/
>
> vmlinux-dirs := $(patsubst %/,%,$(filter %/, $(init-y) $(init-m) \
> $(core-y) $(core-m) $(drivers-y) $(drivers-m) \
> --
> 2.21.0.593.g511ec345e18-goog
>


--
Best Regards
Masahiro Yamada

2019-05-10 04:54:35

by Theodore Ts'o

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Thu, May 09, 2019 at 05:40:48PM -0600, Logan Gunthorpe wrote:
>
> Based on some of the other commenters, I was under the impression that
> kselftests had in-kernel tests but I'm not sure where or if they exist. If
> they do exists, it seems like it would make sense to convert those to kunit
> and have Kunit tests run-able in a VM or baremetal instance.

There are kselftests tests which are shell scripts which load a
module, and the module runs the in-kernel code. However, I didn't see
much infrastructure for the in-kernel test code; the one or two test
modules called from kselftests looked pretty ad hoc to me.

That's why I used the "vise grips" analogy. You can use a pair of
vise grips like a monkey wrench; but it's not really a monkey wrench,
and might not be the best tool to loosen or tighten nuts and bolts.

- Ted

2019-05-10 05:12:14

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

Hi Ted,

I'll try answering this again.

The first time I was a little flippant in part of my answer because I
thought your comments somewhat flippant. This time I'll provide a
more complete answer.


On 5/8/19 7:13 PM, Frank Rowand wrote:
> On 5/8/19 6:58 PM, Theodore Ts'o wrote:
>> On Wed, May 08, 2019 at 05:43:35PM -0700, Frank Rowand wrote:

***** begin context from Greg KH that you snipped *****

On 5/7/19 1:01 AM, Greg KH wrote:

<< note that I have snipped my original question above this point >>

>
> kselftest provides no in-kernel framework for testing kernel code
> specifically. That should be what kunit provides, an "easy" way to
> write in-kernel tests for things.

***** end context from Greg KH that you snipped *****

>>> kselftest provides a mechanism for in-kernel tests via modules. For
>>> example, see:
>>>
>>> tools/testing/selftests/vm/run_vmtests invokes:
>>> tools/testing/selftests/vm/test_vmalloc.sh
>>> loads module:
>>> test_vmalloc
>>> (which is built from lib/test_vmalloc.c if CONFIG_TEST_VMALLOC)
>>
>> The majority of the kselftests are implemented as userspace programs.

My flippant answer:

> Non-argument.

This time:

My reply to Greg was pointing out that in-kernel tests do exist in
kselftest.

Your comment that the majority of kselftests are implemented as userspace
programs has no bearing on whether kselftest supports in-kernel tests.
It does not counter the fact that kselftest supports in-kernel tests.


>> You *can* run in-kernel test using modules; but there is no framework
>> for the in-kernel code found in the test modules, which means each of
>> the in-kernel code has to create their own in-kernel test
>> infrastructure.

The kselftest in-kernel tests follow a common pattern. As such, there
is a framework.

These next two paragraphs you ignored entirely in your reply:

> Why create an entire new subsystem (KUnit) when you can add a header
> file (and .c code as appropriate) that outputs the proper TAP formatted
> results from kselftest kernel test modules?
>
> There are already a multitude of in kernel test modules used by
> kselftest. It would be good if they all used a common TAP compliant
> mechanism to report results.



>> That's much like saying you can use vice grips to turn a nut or
>> bolt-head. You *can*, but it might be that using a monkey wrench
>> would be a much better tool that is much easier.
>>
>> What would you say to a wood worker objecting that a toolbox should
>> contain a monkey wrench because he already knows how to use vise
>> grips, and his tiny brain shouldn't be forced to learn how to use a
>> wrench when he knows how to use a vise grip, which is a perfectly good
>> tool?
>>
>> If you want to use vice grips as a hammer, screwdriver, monkey wrench,
>> etc. there's nothing stopping you from doing that. But it's not fair
>> to object to other people who might want to use better tools.
>>
>> The reality is that we have a lot of testing tools. It's not just
>> kselftests. There is xfstests for file system code, blktests for
>> block layer tests, etc. We use the right tool for the right job.

My flippant answer:

> More specious arguments.

This time:

I took your answer as a straw man, and had no desire to spend time
countering a straw man.

I am not proposing using vise grips (to use your analogy). I
am saying that maybe the monkey wrench already exists.

My point of this whole thread has been to try to get the submitter
to provide a better, more complete explanation of how and why KUnit
is a better tool.

I have not yet objected to the number (and differences among) the
many sub-system tests. I am questioning whether there is a need
for another _test framework_ for in-kernel testing. If there is
something important that KUnit provides that does not exist in
existing frameworks then the next question would be to ask how
to implement that important thing (improve the existing
framework, replace the existing framework, or have two
frameworks). I still think it is premature to ask this
question until we first know the answer to what unique features
KUnit adds (and apparently until we know what the existing
framework provides).

-Frank

>
> -Frank
>
>>
>> - Ted
>>
>
>

2019-05-10 05:37:24

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
>
>
> On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
>> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
>>>
>>> The second item, arguably, does have significant overlap with kselftest.
>>> Whether you are running short tests in a light weight UML environment or
>>> higher level tests in an heavier VM the two could be using the same
>>> framework for writing or defining in-kernel tests. It *may* also be valuable
>>> for some people to be able to run all the UML tests in the heavy VM
>>> environment along side other higher level tests.
>>>
>>> Looking at the selftests tree in the repo, we already have similar items to
>>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
>>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
>>> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
>>>
>>> However, the number of users of this harness appears to be quite small. Most
>>> of the code in the selftests tree seems to be a random mismash of scripts
>>> and userspace code so it's not hard to see it as something completely
>>> different from the new Kunit:
>>>
>>> $ git grep --files-with-matches kselftest_harness.h *
>>
>> To the extent that we can unify how tests are written, I agree that
>> this would be a good thing.  However, you should note that
>> kselftest_harness.h is currently assums that it will be included in
>> userspace programs.  This is most obviously seen if you look closely
>> at the functions defined in the header files which makes calls to
>> fork(), abort() and fprintf().
>
> Ah, yes. I obviously did not dig deep enough. Using kunit for
> in-kernel tests and kselftest_harness for userspace tests seems like
> a sensible line to draw to me. Trying to unify kernel and userspace
> here sounds like it could be difficult so it's probably not worth
> forcing the issue unless someone wants to do some really fancy work
> to get it done.
>
> Based on some of the other commenters, I was under the impression
> that kselftests had in-kernel tests but I'm not sure where or if they
> exist.

YES, kselftest has in-kernel tests. (Excuse the shouting...)

Here is a likely list of them in the kernel source tree:

$ grep module_init lib/test_*.c
lib/test_bitfield.c:module_init(test_bitfields)
lib/test_bitmap.c:module_init(test_bitmap_init);
lib/test_bpf.c:module_init(test_bpf_init);
lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
lib/test_firmware.c:module_init(test_firmware_init);
lib/test_hash.c:module_init(test_hash_init); /* Does everything */
lib/test_hexdump.c:module_init(test_hexdump_init);
lib/test_ida.c:module_init(ida_checks);
lib/test_kasan.c:module_init(kmalloc_tests_init);
lib/test_list_sort.c:module_init(list_sort_test);
lib/test_memcat_p.c:module_init(test_memcat_p_init);
lib/test_module.c:static int __init test_module_init(void)
lib/test_module.c:module_init(test_module_init);
lib/test_objagg.c:module_init(test_objagg_init);
lib/test_overflow.c:static int __init test_module_init(void)
lib/test_overflow.c:module_init(test_module_init);
lib/test_parman.c:module_init(test_parman_init);
lib/test_printf.c:module_init(test_printf_init);
lib/test_rhashtable.c:module_init(test_rht_init);
lib/test_siphash.c:module_init(siphash_test_init);
lib/test_sort.c:module_init(test_sort_init);
lib/test_stackinit.c:module_init(test_stackinit_init);
lib/test_static_key_base.c:module_init(test_static_key_base_init);
lib/test_static_keys.c:module_init(test_static_key_init);
lib/test_string.c:module_init(string_selftest_init);
lib/test_ubsan.c:module_init(test_ubsan_init);
lib/test_user_copy.c:module_init(test_user_copy_init);
lib/test_uuid.c:module_init(test_uuid_init);
lib/test_vmalloc.c:module_init(vmalloc_test_init)
lib/test_xarray.c:module_init(xarray_checks);


> If they do exists, it seems like it would make sense to
> convert those to kunit and have Kunit tests run-able in a VM or
> baremetal instance.

They already run in a VM.

They already run on bare metal.

They already run in UML.

This is not to say that KUnit does not make sense. But I'm still trying
to get a better description of the KUnit features (and there are
some).

-Frank

2019-05-10 05:52:28

by Knut Omang

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
> On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
> >
> >
> > On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
> >> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
> >>>
> >>> The second item, arguably, does have significant overlap with kselftest.
> >>> Whether you are running short tests in a light weight UML environment or
> >>> higher level tests in an heavier VM the two could be using the same
> >>> framework for writing or defining in-kernel tests. It *may* also be valuable
> >>> for some people to be able to run all the UML tests in the heavy VM
> >>> environment along side other higher level tests.
> >>>
> >>> Looking at the selftests tree in the repo, we already have similar items to
> >>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
> >>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
> >>> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
> >>>
> >>> However, the number of users of this harness appears to be quite small. Most
> >>> of the code in the selftests tree seems to be a random mismash of scripts
> >>> and userspace code so it's not hard to see it as something completely
> >>> different from the new Kunit:
> >>>
> >>> $ git grep --files-with-matches kselftest_harness.h *
> >>
> >> To the extent that we can unify how tests are written, I agree that
> >> this would be a good thing. However, you should note that
> >> kselftest_harness.h is currently assums that it will be included in
> >> userspace programs. This is most obviously seen if you look closely
> >> at the functions defined in the header files which makes calls to
> >> fork(), abort() and fprintf().
> >
> > Ah, yes. I obviously did not dig deep enough. Using kunit for
> > in-kernel tests and kselftest_harness for userspace tests seems like
> > a sensible line to draw to me. Trying to unify kernel and userspace
> > here sounds like it could be difficult so it's probably not worth
> > forcing the issue unless someone wants to do some really fancy work
> > to get it done.
> >
> > Based on some of the other commenters, I was under the impression
> > that kselftests had in-kernel tests but I'm not sure where or if they
> > exist.
>
> YES, kselftest has in-kernel tests. (Excuse the shouting...)
>
> Here is a likely list of them in the kernel source tree:
>
> $ grep module_init lib/test_*.c
> lib/test_bitfield.c:module_init(test_bitfields)
> lib/test_bitmap.c:module_init(test_bitmap_init);
> lib/test_bpf.c:module_init(test_bpf_init);
> lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
> lib/test_firmware.c:module_init(test_firmware_init);
> lib/test_hash.c:module_init(test_hash_init); /* Does everything */
> lib/test_hexdump.c:module_init(test_hexdump_init);
> lib/test_ida.c:module_init(ida_checks);
> lib/test_kasan.c:module_init(kmalloc_tests_init);
> lib/test_list_sort.c:module_init(list_sort_test);
> lib/test_memcat_p.c:module_init(test_memcat_p_init);
> lib/test_module.c:static int __init test_module_init(void)
> lib/test_module.c:module_init(test_module_init);
> lib/test_objagg.c:module_init(test_objagg_init);
> lib/test_overflow.c:static int __init test_module_init(void)
> lib/test_overflow.c:module_init(test_module_init);
> lib/test_parman.c:module_init(test_parman_init);
> lib/test_printf.c:module_init(test_printf_init);
> lib/test_rhashtable.c:module_init(test_rht_init);
> lib/test_siphash.c:module_init(siphash_test_init);
> lib/test_sort.c:module_init(test_sort_init);
> lib/test_stackinit.c:module_init(test_stackinit_init);
> lib/test_static_key_base.c:module_init(test_static_key_base_init);
> lib/test_static_keys.c:module_init(test_static_key_init);
> lib/test_string.c:module_init(string_selftest_init);
> lib/test_ubsan.c:module_init(test_ubsan_init);
> lib/test_user_copy.c:module_init(test_user_copy_init);
> lib/test_uuid.c:module_init(test_uuid_init);
> lib/test_vmalloc.c:module_init(vmalloc_test_init)
> lib/test_xarray.c:module_init(xarray_checks);
>
>
> > If they do exists, it seems like it would make sense to
> > convert those to kunit and have Kunit tests run-able in a VM or
> > baremetal instance.
>
> They already run in a VM.
>
> They already run on bare metal.
>
> They already run in UML.
>
> This is not to say that KUnit does not make sense. But I'm still trying
> to get a better description of the KUnit features (and there are
> some).

FYI, I have a master's student who is looking at converting some of these to KTF,
for instance the XArray tests, which lent themselves quite well to a semi-automated
conversion.

The result is also somewhat more compact code, as well as the flexibility
provided by the Googletest executor and the KTF framework, such as running selected
tests, output formatting, debugging features, etc.

Knut

> -Frank

2019-05-10 08:24:04

by Daniel Vetter

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Fri, May 10, 2019 at 7:49 AM Knut Omang <[email protected]> wrote:
>
> On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
> > On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
> > >
> > >
> > > On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
> > >> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
> > >>>
> > >>> The second item, arguably, does have significant overlap with kselftest.
> > >>> Whether you are running short tests in a light weight UML environment or
> > >>> higher level tests in an heavier VM the two could be using the same
> > >>> framework for writing or defining in-kernel tests. It *may* also be valuable
> > >>> for some people to be able to run all the UML tests in the heavy VM
> > >>> environment along side other higher level tests.
> > >>>
> > >>> Looking at the selftests tree in the repo, we already have similar items to
> > >>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
> > >>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
> > >>> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
> > >>>
> > >>> However, the number of users of this harness appears to be quite small. Most
> > >>> of the code in the selftests tree seems to be a random mismash of scripts
> > >>> and userspace code so it's not hard to see it as something completely
> > >>> different from the new Kunit:
> > >>>
> > >>> $ git grep --files-with-matches kselftest_harness.h *
> > >>
> > >> To the extent that we can unify how tests are written, I agree that
> > >> this would be a good thing. However, you should note that
> > >> kselftest_harness.h is currently assums that it will be included in
> > >> userspace programs. This is most obviously seen if you look closely
> > >> at the functions defined in the header files which makes calls to
> > >> fork(), abort() and fprintf().
> > >
> > > Ah, yes. I obviously did not dig deep enough. Using kunit for
> > > in-kernel tests and kselftest_harness for userspace tests seems like
> > > a sensible line to draw to me. Trying to unify kernel and userspace
> > > here sounds like it could be difficult so it's probably not worth
> > > forcing the issue unless someone wants to do some really fancy work
> > > to get it done.
> > >
> > > Based on some of the other commenters, I was under the impression
> > > that kselftests had in-kernel tests but I'm not sure where or if they
> > > exist.
> >
> > YES, kselftest has in-kernel tests. (Excuse the shouting...)
> >
> > Here is a likely list of them in the kernel source tree:
> >
> > $ grep module_init lib/test_*.c
> > lib/test_bitfield.c:module_init(test_bitfields)
> > lib/test_bitmap.c:module_init(test_bitmap_init);
> > lib/test_bpf.c:module_init(test_bpf_init);
> > lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
> > lib/test_firmware.c:module_init(test_firmware_init);
> > lib/test_hash.c:module_init(test_hash_init); /* Does everything */
> > lib/test_hexdump.c:module_init(test_hexdump_init);
> > lib/test_ida.c:module_init(ida_checks);
> > lib/test_kasan.c:module_init(kmalloc_tests_init);
> > lib/test_list_sort.c:module_init(list_sort_test);
> > lib/test_memcat_p.c:module_init(test_memcat_p_init);
> > lib/test_module.c:static int __init test_module_init(void)
> > lib/test_module.c:module_init(test_module_init);
> > lib/test_objagg.c:module_init(test_objagg_init);
> > lib/test_overflow.c:static int __init test_module_init(void)
> > lib/test_overflow.c:module_init(test_module_init);
> > lib/test_parman.c:module_init(test_parman_init);
> > lib/test_printf.c:module_init(test_printf_init);
> > lib/test_rhashtable.c:module_init(test_rht_init);
> > lib/test_siphash.c:module_init(siphash_test_init);
> > lib/test_sort.c:module_init(test_sort_init);
> > lib/test_stackinit.c:module_init(test_stackinit_init);
> > lib/test_static_key_base.c:module_init(test_static_key_base_init);
> > lib/test_static_keys.c:module_init(test_static_key_init);
> > lib/test_string.c:module_init(string_selftest_init);
> > lib/test_ubsan.c:module_init(test_ubsan_init);
> > lib/test_user_copy.c:module_init(test_user_copy_init);
> > lib/test_uuid.c:module_init(test_uuid_init);
> > lib/test_vmalloc.c:module_init(vmalloc_test_init)
> > lib/test_xarray.c:module_init(xarray_checks);
> >
> >
> > > If they do exists, it seems like it would make sense to
> > > convert those to kunit and have Kunit tests run-able in a VM or
> > > baremetal instance.
> >
> > They already run in a VM.
> >
> > They already run on bare metal.
> >
> > They already run in UML.
> >
> > This is not to say that KUnit does not make sense. But I'm still trying
> > to get a better description of the KUnit features (and there are
> > some).
>
> FYI, I have a master student who looks at converting some of these to KTF, such as for
> instance the XArray tests, which lended themselves quite good to a semi-automated
> conversion.
>
> The result is also a somewhat more compact code as well as the flexibility
> provided by the Googletest executor and the KTF frameworks, such as running selected
> tests, output formatting, debugging features etc.

So is KTF already upstream? Or is the plan to unify the KTF and
KUnit in-kernel test harnesses? Because there are tons of these
in-kernel unit tests already, and every merge window we get more (Frank's
list didn't even look into drivers or anywhere else, e.g. it's missing
the locking self-tests I worked on in the past), and a more structured
approach would really be good.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

2019-05-10 10:29:48

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 06/17] kbuild: enable building KUnit

> On Thu, May 2, 2019 at 8:03 AM Brendan Higgins
> <[email protected]> wrote:
> >
> > Add KUnit to root Kconfig and Makefile allowing it to actually be built.
> >
> > Signed-off-by: Brendan Higgins <[email protected]>
>
> You need to make sure
> to not break git-bisect'abililty.
>
>
> With this commit, I see build error.
>
> CC kunit/test.o
> kunit/test.c:11:10: fatal error: os.h: No such file or directory
> #include <os.h>
> ^~~~~~
> compilation terminated.
> make[1]: *** [scripts/Makefile.build;279: kunit/test.o] Error 1
> make: *** [Makefile;1763: kunit/] Error 2

Nice catch! That header shouldn't even be in there.

Sorry about that. I will have it fixed in the next revision.
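
For reference, the fix is presumably just dropping the stray include near the
top of kunit/test.c, i.e. removing the line

	#include <os.h>

so the file no longer pulls in the UML-specific header.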

2019-05-10 10:32:23

by Masahiro Yamada

[permalink] [raw]
Subject: Re: [PATCH v2 06/17] kbuild: enable building KUnit

On Fri, May 10, 2019 at 7:27 PM Brendan Higgins
<[email protected]> wrote:
>
> > On Thu, May 2, 2019 at 8:03 AM Brendan Higgins
> > <[email protected]> wrote:
> > >
> > > Add KUnit to root Kconfig and Makefile allowing it to actually be built.
> > >
> > > Signed-off-by: Brendan Higgins <[email protected]>
> >
> > You need to make sure
> > to not break git-bisect'abililty.
> >
> >
> > With this commit, I see build error.
> >
> > CC kunit/test.o
> > kunit/test.c:11:10: fatal error: os.h: No such file or directory
> > #include <os.h>
> > ^~~~~~
> > compilation terminated.
> > make[1]: *** [scripts/Makefile.build;279: kunit/test.o] Error 1
> > make: *** [Makefile;1763: kunit/] Error 2
>
> Nice catch! That header shouldn't even be in there.
>
> Sorry about that. I will have it fixed in the next revision.


BTW, I applied the whole of this series
to my kernel.org repository.

The 0day bot started to report issues.
I hope several reports reached you,
and that they are useful for fixing your code.


--
Best Regards
Masahiro Yamada

2019-05-10 10:35:16

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 06/17] kbuild: enable building KUnit

> On Fri, May 10, 2019 at 7:27 PM Brendan Higgins
> <[email protected]> wrote:
> >
> > > On Thu, May 2, 2019 at 8:03 AM Brendan Higgins
> > > <[email protected]> wrote:
> > > >
> > > > Add KUnit to root Kconfig and Makefile allowing it to actually be built.
> > > >
> > > > Signed-off-by: Brendan Higgins <[email protected]>
> > >
> > > You need to make sure
> > > to not break git-bisect'abililty.
> > >
> > >
> > > With this commit, I see build error.
> > >
> > > CC kunit/test.o
> > > kunit/test.c:11:10: fatal error: os.h: No such file or directory
> > > #include <os.h>
> > > ^~~~~~
> > > compilation terminated.
> > > make[1]: *** [scripts/Makefile.build;279: kunit/test.o] Error 1
> > > make: *** [Makefile;1763: kunit/] Error 2
> >
> > Nice catch! That header shouldn't even be in there.
> >
> > Sorry about that. I will have it fixed in the next revision.
>
>
> BTW, I applied whole of this series
> to my kernel.org repository.
>
> 0day bot started to report issues.
> I hope several reports reached you,
> and they are useful to fix your code.

Yep, I have received several. They are very helpful.

I greatly appreciate it.

Thanks!

2019-05-10 11:10:10

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

> On Fri, May 10, 2019 at 7:49 AM Knut Omang <[email protected]> wrote:
> >
> > On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
> > > On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
> > > >
> > > >
> > > > On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
> > > >> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
> > > >>>
> > > >>> The second item, arguably, does have significant overlap with kselftest.
> > > >>> Whether you are running short tests in a light weight UML environment or
> > > >>> higher level tests in an heavier VM the two could be using the same
> > > >>> framework for writing or defining in-kernel tests. It *may* also be valuable
> > > >>> for some people to be able to run all the UML tests in the heavy VM
> > > >>> environment along side other higher level tests.
> > > >>>
> > > >>> Looking at the selftests tree in the repo, we already have similar items to
> > > >>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
> > > >>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
> > > >>> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
> > > >>>
> > > >>> However, the number of users of this harness appears to be quite small. Most
> > > >>> of the code in the selftests tree seems to be a random mismash of scripts
> > > >>> and userspace code so it's not hard to see it as something completely
> > > >>> different from the new Kunit:
> > > >>>
> > > >>> $ git grep --files-with-matches kselftest_harness.h *
> > > >>
> > > >> To the extent that we can unify how tests are written, I agree that
> > > >> this would be a good thing. However, you should note that
> > > >> kselftest_harness.h is currently assums that it will be included in
> > > >> userspace programs. This is most obviously seen if you look closely
> > > >> at the functions defined in the header files which makes calls to
> > > >> fork(), abort() and fprintf().
> > > >
> > > > Ah, yes. I obviously did not dig deep enough. Using kunit for
> > > > in-kernel tests and kselftest_harness for userspace tests seems like
> > > > a sensible line to draw to me. Trying to unify kernel and userspace
> > > > here sounds like it could be difficult so it's probably not worth
> > > > forcing the issue unless someone wants to do some really fancy work
> > > > to get it done.
> > > >
> > > > Based on some of the other commenters, I was under the impression
> > > > that kselftests had in-kernel tests but I'm not sure where or if they
> > > > exist.
> > >
> > > YES, kselftest has in-kernel tests. (Excuse the shouting...)
> > >
> > > Here is a likely list of them in the kernel source tree:
> > >
> > > $ grep module_init lib/test_*.c
> > > lib/test_bitfield.c:module_init(test_bitfields)
> > > lib/test_bitmap.c:module_init(test_bitmap_init);
> > > lib/test_bpf.c:module_init(test_bpf_init);
> > > lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
> > > lib/test_firmware.c:module_init(test_firmware_init);
> > > lib/test_hash.c:module_init(test_hash_init); /* Does everything */
> > > lib/test_hexdump.c:module_init(test_hexdump_init);
> > > lib/test_ida.c:module_init(ida_checks);
> > > lib/test_kasan.c:module_init(kmalloc_tests_init);
> > > lib/test_list_sort.c:module_init(list_sort_test);
> > > lib/test_memcat_p.c:module_init(test_memcat_p_init);
> > > lib/test_module.c:static int __init test_module_init(void)
> > > lib/test_module.c:module_init(test_module_init);
> > > lib/test_objagg.c:module_init(test_objagg_init);
> > > lib/test_overflow.c:static int __init test_module_init(void)
> > > lib/test_overflow.c:module_init(test_module_init);
> > > lib/test_parman.c:module_init(test_parman_init);
> > > lib/test_printf.c:module_init(test_printf_init);
> > > lib/test_rhashtable.c:module_init(test_rht_init);
> > > lib/test_siphash.c:module_init(siphash_test_init);
> > > lib/test_sort.c:module_init(test_sort_init);
> > > lib/test_stackinit.c:module_init(test_stackinit_init);
> > > lib/test_static_key_base.c:module_init(test_static_key_base_init);
> > > lib/test_static_keys.c:module_init(test_static_key_init);
> > > lib/test_string.c:module_init(string_selftest_init);
> > > lib/test_ubsan.c:module_init(test_ubsan_init);
> > > lib/test_user_copy.c:module_init(test_user_copy_init);
> > > lib/test_uuid.c:module_init(test_uuid_init);
> > > lib/test_vmalloc.c:module_init(vmalloc_test_init)
> > > lib/test_xarray.c:module_init(xarray_checks);
> > >
> > >
> > > > If they do exists, it seems like it would make sense to
> > > > convert those to kunit and have Kunit tests run-able in a VM or
> > > > baremetal instance.
> > >
> > > They already run in a VM.
> > >
> > > They already run on bare metal.
> > >
> > > They already run in UML.
> > >
> > > This is not to say that KUnit does not make sense. But I'm still trying
> > > to get a better description of the KUnit features (and there are
> > > some).
> >
> > FYI, I have a master student who looks at converting some of these to KTF, such as for
> > instance the XArray tests, which lended themselves quite good to a semi-automated
> > conversion.
> >
> > The result is also a somewhat more compact code as well as the flexibility
> > provided by the Googletest executor and the KTF frameworks, such as running selected
> > tests, output formatting, debugging features etc.
>
> So is KTF already in upstream? Or is the plan to unify the KTF and

I am not certain about KTF's upstream plans, but I assume that Knut
would have CC'ed me on the thread if he had started working on it.

> Kunit in-kernel test harnesses? Because there's tons of these

No, no plan. Knut and I talked about this a good while ago, and it
seemed that we had pretty fundamentally different approaches, both in
terms of implementation and end goal. Combining them seemed pretty
infeasible, at least from a technical perspective. Anyway, I am sure
Knut would like to give his perspective on the matter, and I don't want
to say too much without first giving him a chance to chime in.

Nevertheless, I hope you don't see resolving this as a condition for
accepting this patchset. I had several rounds of RFC on KUnit, and no
one had previously brought this up.

> in-kernel unit tests already, and every merge we get more (Frank's
> list didn't even look into drivers or anywhere else, e.g. it's missing
> the locking self tests I worked on in the past), and a more structured
> approach would really be good.

Well, that's what I am trying to do. I hope you like it!

Cheers!

2019-05-10 11:10:12

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

> On Thu, May 2, 2019 at 8:02 AM Brendan Higgins
> <[email protected]> wrote:
> >
> > ## TLDR
> >
> > I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
> > 5.2.
> >
> > Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
> > we would merge through your tree when the time came? Am I remembering
> > correctly?
> >
> > ## Background
> >
> > This patch set proposes KUnit, a lightweight unit testing and mocking
> > framework for the Linux kernel.
> >
> > Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> > it does not require installing the kernel on a test machine or in a VM
> > and does not require tests to be written in userspace running on a host
> > kernel. Additionally, KUnit is fast: From invocation to completion KUnit
> > can run several dozen tests in under a second. Currently, the entire
> > KUnit test suite for KUnit runs in under a second from the initial
> > invocation (build time excluded).
> >
> > KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> > Googletest/Googlemock for C++. KUnit provides facilities for defining
> > unit test cases, grouping related test cases into test suites, providing
> > common infrastructure for running tests, mocking, spying, and much more.
> >
> > ## What's so special about unit testing?
> >
> > A unit test is supposed to test a single unit of code in isolation,
> > hence the name. There should be no dependencies outside the control of
> > the test; this means no external dependencies, which makes tests orders
> > of magnitudes faster. Likewise, since there are no external dependencies,
> > there are no hoops to jump through to run the tests. Additionally, this
> > makes unit tests deterministic: a failing unit test always indicates a
> > problem. Finally, because unit tests necessarily have finer granularity,
> > they are able to test all code paths easily solving the classic problem
> > of difficulty in exercising error handling code.
> >
> > ## Is KUnit trying to replace other testing frameworks for the kernel?
> >
> > No. Most existing tests for the Linux kernel are end-to-end tests, which
> > have their place. A well tested system has lots of unit tests, a
> > reasonable number of integration tests, and some end-to-end tests. KUnit
> > is just trying to address the unit test space which is currently not
> > being addressed.
> >
> > ## More information on KUnit
> >
> > There is a bunch of documentation near the end of this patch set that
> > describes how to use KUnit and best practices for writing unit tests.
> > For convenience I am hosting the compiled docs here:
> > https://google.github.io/kunit-docs/third_party/kernel/docs/
> > Additionally for convenience, I have applied these patches to a branch:
> > https://kunit.googlesource.com/linux/+/kunit/rfc/v5.1-rc7/v1
> > The repo may be cloned with:
> > git clone https://kunit.googlesource.com/linux
> > This patchset is on the kunit/rfc/v5.1-rc7/v1 branch.
> >
> > ## Changes Since Last Version
> >
> > None. I just rebased the last patchset on v5.1-rc7.
> >
> > --
> > 2.21.0.593.g511ec345e18-goog
> >
>
> The following is the log of 'git am' of this series.
> I see several 'new blank line at EOF' warnings.
>
>
>
> masahiro@pug:~/workspace/bsp/linux$ git am ~/Downloads/*.patch
> Applying: kunit: test: add KUnit test runner core
> Applying: kunit: test: add test resource management API
> Applying: kunit: test: add string_stream a std::stream like string builder
> .git/rebase-apply/patch:223: new blank line at EOF.
> +
> warning: 1 line adds whitespace errors.
> Applying: kunit: test: add kunit_stream a std::stream like logger
> Applying: kunit: test: add the concept of expectations
> .git/rebase-apply/patch:475: new blank line at EOF.
> +
> warning: 1 line adds whitespace errors.
> Applying: kbuild: enable building KUnit
> Applying: kunit: test: add initial tests
> .git/rebase-apply/patch:203: new blank line at EOF.
> +
> warning: 1 line adds whitespace errors.
> Applying: kunit: test: add support for test abort
> .git/rebase-apply/patch:453: new blank line at EOF.
> +
> warning: 1 line adds whitespace errors.
> Applying: kunit: test: add tests for kunit test abort
> Applying: kunit: test: add the concept of assertions
> .git/rebase-apply/patch:518: new blank line at EOF.
> +
> warning: 1 line adds whitespace errors.
> Applying: kunit: test: add test managed resource tests
> Applying: kunit: tool: add Python wrappers for running KUnit tests
> .git/rebase-apply/patch:457: new blank line at EOF.
> +
> warning: 1 line adds whitespace errors.
> Applying: kunit: defconfig: add defconfigs for building KUnit tests
> Applying: Documentation: kunit: add documentation for KUnit
> .git/rebase-apply/patch:71: new blank line at EOF.
> +
> .git/rebase-apply/patch:209: new blank line at EOF.
> +
> .git/rebase-apply/patch:848: new blank line at EOF.
> +
> warning: 3 lines add whitespace errors.
> Applying: MAINTAINERS: add entry for KUnit the unit testing framework
> Applying: kernel/sysctl-test: Add null pointer test for sysctl.c:proc_dointvec()
> Applying: MAINTAINERS: add proc sysctl KUnit test to PROC SYSCTL section

Sorry about this! I will have it fixed on the next revision.

2019-05-10 11:11:41

by Theodore Ts'o

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Thu, May 09, 2019 at 10:11:01PM -0700, Frank Rowand wrote:
> >> You *can* run in-kernel test using modules; but there is no framework
> >> for the in-kernel code found in the test modules, which means each of
> >> the in-kernel code has to create their own in-kernel test
> >> infrastructure.
>
> The kselftest in-kernel tests follow a common pattern. As such, there
> is a framework.

So we may have different definitions of "framework". In my book, code
reuse by "cut and paste" does not make a framework. Could they be
rewritten to *use* a framework, whether it be KTF or KUnit? Sure!
But they are not using a framework *today*.

> This next two paragraphs you ignored entirely in your reply:
>
> > Why create an entire new subsystem (KUnit) when you can add a header
> > file (and .c code as appropriate) that outputs the proper TAP formatted
> > results from kselftest kernel test modules?

And you keep ignoring my main observation, which is that spinning up a
VM, letting systemd start, mounting a root file system, etc., is all
unnecessary overhead which takes time. This is important to me,
because developer velocity is extremely important if you are doing
test driven development.

Yes, you can manually unload a module, recompile the module, somehow
get the module back into the VM (perhaps by using virtio-9p), then
reload the module with the in-kernel test code, and then restart the
test. BUT: (a) even if it is faster, it requires a lot of manual
steps, and would be very hard to automate, and (b) if the test code
ever OOPSes or triggers a lockdep warning, you will need to restart the
VM, and so this involves all of the VM restart overhead, plus trying
to automate determining when you actually do need to restart the VM
versus unloading and reloading the module. It's clunky.

Being able to do the equivalent of "make && make check" is a really
big deal. And "make check" needs to go fast.

You keep ignoring this point, perhaps because you don't care about this
issue? Which is fine, and why we may just need to agree to disagree.

Cheers,

- Ted

P.S. Running scripts is Turing-equivalent, so it's self-evident that
*anything* you can do with other test frameworks you can somehow do in
kselftests. That argument doesn't impress me, which is why I consider
it quite flippant. (Heck, /bin/vi is Turing-equivalent, so we could
use vi as a kernel test framework. Or we could use emacs. Let's
not. :-)

The question is whether it is the best and most efficient way to
do that testing. And developer velocity is a really big part of my
evaluation function when judging whether or not a test framework is
fit for that purpose.

2019-05-10 11:50:13

by Knut Omang

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Fri, 2019-05-10 at 10:12 +0200, Daniel Vetter wrote:
> On Fri, May 10, 2019 at 7:49 AM Knut Omang <[email protected]> wrote:
> >
> > On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
> > > On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
> > > >
> > > >
> > > > On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
> > > >> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
> > > >>>
> > > >>> The second item, arguably, does have significant overlap with kselftest.
> > > >>> Whether you are running short tests in a light weight UML environment or
> > > >>> higher level tests in an heavier VM the two could be using the same
> > > >>> framework for writing or defining in-kernel tests. It *may* also be valuable
> > > >>> for some people to be able to run all the UML tests in the heavy VM
> > > >>> environment along side other higher level tests.
> > > >>>
> > > >>> Looking at the selftests tree in the repo, we already have similar items to
> > > >>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
> > > >>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
> > > >>> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
> > > >>>
> > > >>> However, the number of users of this harness appears to be quite small. Most
> > > >>> of the code in the selftests tree seems to be a random mismash of scripts
> > > >>> and userspace code so it's not hard to see it as something completely
> > > >>> different from the new Kunit:
> > > >>>
> > > >>> $ git grep --files-with-matches kselftest_harness.h *
> > > >>
> > > >> To the extent that we can unify how tests are written, I agree that
> > > >> this would be a good thing. However, you should note that
> > > >> kselftest_harness.h is currently assums that it will be included in
> > > >> userspace programs. This is most obviously seen if you look closely
> > > >> at the functions defined in the header files which makes calls to
> > > >> fork(), abort() and fprintf().
> > > >
> > > > Ah, yes. I obviously did not dig deep enough. Using kunit for
> > > > in-kernel tests and kselftest_harness for userspace tests seems like
> > > > a sensible line to draw to me. Trying to unify kernel and userspace
> > > > here sounds like it could be difficult so it's probably not worth
> > > > forcing the issue unless someone wants to do some really fancy work
> > > > to get it done.
> > > >
> > > > Based on some of the other commenters, I was under the impression
> > > > that kselftests had in-kernel tests but I'm not sure where or if they
> > > > exist.
> > >
> > > YES, kselftest has in-kernel tests. (Excuse the shouting...)
> > >
> > > Here is a likely list of them in the kernel source tree:
> > >
> > > $ grep module_init lib/test_*.c
> > > lib/test_bitfield.c:module_init(test_bitfields)
> > > lib/test_bitmap.c:module_init(test_bitmap_init);
> > > lib/test_bpf.c:module_init(test_bpf_init);
> > > lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
> > > lib/test_firmware.c:module_init(test_firmware_init);
> > > lib/test_hash.c:module_init(test_hash_init); /* Does everything */
> > > lib/test_hexdump.c:module_init(test_hexdump_init);
> > > lib/test_ida.c:module_init(ida_checks);
> > > lib/test_kasan.c:module_init(kmalloc_tests_init);
> > > lib/test_list_sort.c:module_init(list_sort_test);
> > > lib/test_memcat_p.c:module_init(test_memcat_p_init);
> > > lib/test_module.c:static int __init test_module_init(void)
> > > lib/test_module.c:module_init(test_module_init);
> > > lib/test_objagg.c:module_init(test_objagg_init);
> > > lib/test_overflow.c:static int __init test_module_init(void)
> > > lib/test_overflow.c:module_init(test_module_init);
> > > lib/test_parman.c:module_init(test_parman_init);
> > > lib/test_printf.c:module_init(test_printf_init);
> > > lib/test_rhashtable.c:module_init(test_rht_init);
> > > lib/test_siphash.c:module_init(siphash_test_init);
> > > lib/test_sort.c:module_init(test_sort_init);
> > > lib/test_stackinit.c:module_init(test_stackinit_init);
> > > lib/test_static_key_base.c:module_init(test_static_key_base_init);
> > > lib/test_static_keys.c:module_init(test_static_key_init);
> > > lib/test_string.c:module_init(string_selftest_init);
> > > lib/test_ubsan.c:module_init(test_ubsan_init);
> > > lib/test_user_copy.c:module_init(test_user_copy_init);
> > > lib/test_uuid.c:module_init(test_uuid_init);
> > > lib/test_vmalloc.c:module_init(vmalloc_test_init)
> > > lib/test_xarray.c:module_init(xarray_checks);
> > >
> > >
> > > > If they do exists, it seems like it would make sense to
> > > > convert those to kunit and have Kunit tests run-able in a VM or
> > > > baremetal instance.
> > >
> > > They already run in a VM.
> > >
> > > They already run on bare metal.
> > >
> > > They already run in UML.
> > >
> > > This is not to say that KUnit does not make sense. But I'm still trying
> > > to get a better description of the KUnit features (and there are
> > > some).
> >
> > FYI, I have a master student who looks at converting some of these to KTF, such as for
> > instance the XArray tests, which lended themselves quite good to a semi-automated
> > conversion.
> >
> > The result is also a somewhat more compact code as well as the flexibility
> > provided by the Googletest executor and the KTF frameworks, such as running selected
> > tests, output formatting, debugging features etc.
>
> So is KTF already in upstream?

No..

> Or is the plan to unify the KTF and
> Kunit in-kernel test harnesses?

Since KTF delegates reporting and test running to user space - reporting is based on
netlink communication with the user land frontend (Googletest based, but in principle
portable to any user land framework if need be) - few test-harness-specific features
are needed in the kernel for KTF, so there is little if any direct overlap with the
infrastructure in kselftests (as far as I understand; I'd like to spend more time on
the details there...)

> Because there's tons of these
> in-kernel unit tests already, and every merge we get more (Frank's
> list didn't even look into drivers or anywhere else, e.g. it's missing
> the locking self tests I worked on in the past), and a more structured
> approach would really be good.

That's been my thinking too. Having a unified way to gradually move towards would
benefit anyone who needs to work with these tests, in that there would be less to
learn. But that would be a long-term, evolutionary goal, of course.

Also, I think each of these test suites/sets would benefit from being more readily
runnable even in kernels not built specifically for testing. Using the module
structure, with a lean, common module (ktf.ko) as a library for the tests but no
"polluting" of the "production" kernel code with anything else, is a policy that
comes implicitly with KTF and that can make it easier to maintain a sort of
standard "language" for writing kernel tests.

Wrt KTF, we're working on making a suitable patch set for the kernel,
but we also have projects running internally that use KTF as a standalone git
repository, and that inspires new features and other changes, as well as a growing
set of tests. I want to make sure we find a good candidate kernel integration that
can coexist nicely with the separate git repo version without becoming too much of
a back/forward porting challenge.

Knut

> -Daniel

2019-05-10 12:50:11

by Knut Omang

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Fri, 2019-05-10 at 03:23 -0700, Brendan Higgins wrote:
> > On Fri, May 10, 2019 at 7:49 AM Knut Omang <[email protected]> wrote:
> > >
> > > On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
> > > > On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
> > > > >
> > > > >
> > > > > On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
> > > > >> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
> > > > >>>
> > > > >>> The second item, arguably, does have significant overlap with kselftest.
> > > > >>> Whether you are running short tests in a light weight UML environment or
> > > > >>> higher level tests in an heavier VM the two could be using the same
> > > > >>> framework for writing or defining in-kernel tests. It *may* also be valuable
> > > > >>> for some people to be able to run all the UML tests in the heavy VM
> > > > >>> environment along side other higher level tests.
> > > > >>>
> > > > >>> Looking at the selftests tree in the repo, we already have similar items to
> > > > >>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
> > > > >>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
> > > > >>> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
> > > > >>>
> > > > >>> However, the number of users of this harness appears to be quite small. Most
> > > > >>> of the code in the selftests tree seems to be a random mismash of scripts
> > > > >>> and userspace code so it's not hard to see it as something completely
> > > > >>> different from the new Kunit:
> > > > >>>
> > > > >>> $ git grep --files-with-matches kselftest_harness.h *
> > > > >>
> > > > >> To the extent that we can unify how tests are written, I agree that
> > > > >> this would be a good thing. However, you should note that
> > > > >> kselftest_harness.h is currently assums that it will be included in
> > > > >> userspace programs. This is most obviously seen if you look closely
> > > > >> at the functions defined in the header files which makes calls to
> > > > >> fork(), abort() and fprintf().
> > > > >
> > > > > Ah, yes. I obviously did not dig deep enough. Using kunit for
> > > > > in-kernel tests and kselftest_harness for userspace tests seems like
> > > > > a sensible line to draw to me. Trying to unify kernel and userspace
> > > > > here sounds like it could be difficult so it's probably not worth
> > > > > forcing the issue unless someone wants to do some really fancy work
> > > > > to get it done.
> > > > >
> > > > > Based on some of the other commenters, I was under the impression
> > > > > that kselftests had in-kernel tests but I'm not sure where or if they
> > > > > exist.
> > > >
> > > > YES, kselftest has in-kernel tests. (Excuse the shouting...)
> > > >
> > > > Here is a likely list of them in the kernel source tree:
> > > >
> > > > $ grep module_init lib/test_*.c
> > > > lib/test_bitfield.c:module_init(test_bitfields)
> > > > lib/test_bitmap.c:module_init(test_bitmap_init);
> > > > lib/test_bpf.c:module_init(test_bpf_init);
> > > > lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
> > > > lib/test_firmware.c:module_init(test_firmware_init);
> > > > lib/test_hash.c:module_init(test_hash_init); /* Does everything */
> > > > lib/test_hexdump.c:module_init(test_hexdump_init);
> > > > lib/test_ida.c:module_init(ida_checks);
> > > > lib/test_kasan.c:module_init(kmalloc_tests_init);
> > > > lib/test_list_sort.c:module_init(list_sort_test);
> > > > lib/test_memcat_p.c:module_init(test_memcat_p_init);
> > > > lib/test_module.c:static int __init test_module_init(void)
> > > > lib/test_module.c:module_init(test_module_init);
> > > > lib/test_objagg.c:module_init(test_objagg_init);
> > > > lib/test_overflow.c:static int __init test_module_init(void)
> > > > lib/test_overflow.c:module_init(test_module_init);
> > > > lib/test_parman.c:module_init(test_parman_init);
> > > > lib/test_printf.c:module_init(test_printf_init);
> > > > lib/test_rhashtable.c:module_init(test_rht_init);
> > > > lib/test_siphash.c:module_init(siphash_test_init);
> > > > lib/test_sort.c:module_init(test_sort_init);
> > > > lib/test_stackinit.c:module_init(test_stackinit_init);
> > > > lib/test_static_key_base.c:module_init(test_static_key_base_init);
> > > > lib/test_static_keys.c:module_init(test_static_key_init);
> > > > lib/test_string.c:module_init(string_selftest_init);
> > > > lib/test_ubsan.c:module_init(test_ubsan_init);
> > > > lib/test_user_copy.c:module_init(test_user_copy_init);
> > > > lib/test_uuid.c:module_init(test_uuid_init);
> > > > lib/test_vmalloc.c:module_init(vmalloc_test_init)
> > > > lib/test_xarray.c:module_init(xarray_checks);
> > > >
> > > >
> > > > > If they do exists, it seems like it would make sense to
> > > > > convert those to kunit and have Kunit tests run-able in a VM or
> > > > > baremetal instance.
> > > >
> > > > They already run in a VM.
> > > >
> > > > They already run on bare metal.
> > > >
> > > > They already run in UML.
> > > >
> > > > This is not to say that KUnit does not make sense. But I'm still trying
> > > > to get a better description of the KUnit features (and there are
> > > > some).
> > >
> > > FYI, I have a master student who looks at converting some of these to KTF, such as
> for
> > > instance the XArray tests, which lended themselves quite good to a semi-automated
> > > conversion.
> > >
> > > The result is also a somewhat more compact code as well as the flexibility
> > > provided by the Googletest executor and the KTF frameworks, such as running selected
> > > tests, output formatting, debugging features etc.
> >
> > So is KTF already in upstream? Or is the plan to unify the KTF and
>
> I am not certain about KTF's upstream plans, but I assume that Knut
> would have CC'ed me on the thread if he had started working on it.

You are on the Github watcher list for KTF?
Quite a few of the commits there are preparatory for a forthcoming kernel patch set.
I'll of course CC: you on the patch set when we send it to the list.

> > Kunit in-kernel test harnesses? Because there's tons of these
>
> No, no plan. Knut and I talked about this a good while ago and it
> seemed that we had pretty fundamentally different approaches both in
> terms of implementation and end goal. Combining them seemed pretty
> infeasible, at least from a technical perspective. Anyway, I am sure
> Knut would like to give him perspective on the matter and I don't want
> to say too much without first giving him a chance to chime in on the
> matter.

I need more time to study the KUnit details to say, but from a 10,000-foot perspective:
I think at least there's potential for some API unification in using the same macro
names. How about removing the KUNIT_ prefix from the test macros ;-) ?
That would make the names shorter, saving typing when writing tests, and storage ;-)
and also make the names more similar to KTF's and to those of user land unit test
frameworks. It would also make it possible to have functions that compile with both
KTF and KUnit, facilitating moving code between the two.

Also, the string stream facilities of KUnit look interesting to share.
But as said, more work is needed on my side to be sure about this.

Knut

> Nevertheless, I hope you don't see resolving this as a condition for
> accepting this patchset. I had several rounds of RFC on KUnit, and no
> one had previously brought this up.
>
> > in-kernel unit tests already, and every merge we get more (Frank's
> > list didn't even look into drivers or anywhere else, e.g. it's missing
> > the locking self tests I worked on in the past), and a more structured
> > approach would really be good.
>
> Well, that's what I am trying to do. I hope you like it!
>
> Cheers!

2019-05-10 16:19:47

by Logan Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework



On 2019-05-09 11:18 p.m., Frank Rowand wrote:

> YES, kselftest has in-kernel tests. (Excuse the shouting...)

Cool. From my cursory look, in my opinion, these would be greatly
improved by converting them to the framework Brendan is proposing for Kunit.

>> If they do exists, it seems like it would make sense to
>> convert those to kunit and have Kunit tests run-able in a VM or
>> baremetal instance.
>
> They already run in a VM.
>
> They already run on bare metal.
>
> They already run in UML.

Simply being able to run in UML is not the only thing here. KUnit
provides the infrastructure to quickly build, run, and report results for
all the tests from userspace, without needing to worry about the details
of building and running a UML kernel, or of parsing dmesg to figure out
which tests were or were not run.

> This is not to say that KUnit does not make sense. But I'm still trying
> to get a better description of the KUnit features (and there are
> some).

So read the patches, or the documentation[1], or the LWN article[2]. It's
pretty well described in a lot of places -- that's one of the big
advantages of it. In contrast, few people seem to have any concept of
what kselftests are, where they are, or how to run them (I was
surprised to find the in-kernel tests in the lib tree).

Logan

[1] https://google.github.io/kunit-docs/third_party/kernel/docs/
[2] https://lwn.net/Articles/780985/

2019-05-10 20:56:31

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Fri, May 10, 2019 at 5:13 AM Knut Omang <[email protected]> wrote:
> On Fri, 2019-05-10 at 03:23 -0700, Brendan Higgins wrote:
> > > On Fri, May 10, 2019 at 7:49 AM Knut Omang <[email protected]> wrote:
> > > >
> > > > On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
> > > > > On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
> > > > > >
> > > > > >
> > > > > > On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
> > > > > >> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
> > > > > >>>
> > > > > >>> The second item, arguably, does have significant overlap with kselftest.
> > > > > >>> Whether you are running short tests in a light weight UML environment or
> > > > > >>> higher level tests in an heavier VM the two could be using the same
> > > > > >>> framework for writing or defining in-kernel tests. It *may* also be valuable
> > > > > >>> for some people to be able to run all the UML tests in the heavy VM
> > > > > >>> environment along side other higher level tests.
> > > > > >>>
> > > > > >>> Looking at the selftests tree in the repo, we already have similar items to
> > > > > >>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
> > > > > >>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
> > > > > >>> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
> > > > > >>>
> > > > > >>> However, the number of users of this harness appears to be quite small. Most
> > > > > >>> of the code in the selftests tree seems to be a random mismash of scripts
> > > > > >>> and userspace code so it's not hard to see it as something completely
> > > > > >>> different from the new Kunit:
> > > > > >>>
> > > > > >>> $ git grep --files-with-matches kselftest_harness.h *
> > > > > >>
> > > > > >> To the extent that we can unify how tests are written, I agree that
> > > > > >> this would be a good thing. However, you should note that
> > > > > >> kselftest_harness.h is currently assums that it will be included in
> > > > > >> userspace programs. This is most obviously seen if you look closely
> > > > > >> at the functions defined in the header files which makes calls to
> > > > > >> fork(), abort() and fprintf().
> > > > > >
> > > > > > Ah, yes. I obviously did not dig deep enough. Using kunit for
> > > > > > in-kernel tests and kselftest_harness for userspace tests seems like
> > > > > > a sensible line to draw to me. Trying to unify kernel and userspace
> > > > > > here sounds like it could be difficult so it's probably not worth
> > > > > > forcing the issue unless someone wants to do some really fancy work
> > > > > > to get it done.
> > > > > >
> > > > > > Based on some of the other commenters, I was under the impression
> > > > > > that kselftests had in-kernel tests but I'm not sure where or if they
> > > > > > exist.
> > > > >
> > > > > YES, kselftest has in-kernel tests. (Excuse the shouting...)
> > > > >
> > > > > Here is a likely list of them in the kernel source tree:
> > > > >
> > > > > $ grep module_init lib/test_*.c
> > > > > lib/test_bitfield.c:module_init(test_bitfields)
> > > > > lib/test_bitmap.c:module_init(test_bitmap_init);
> > > > > lib/test_bpf.c:module_init(test_bpf_init);
> > > > > lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
> > > > > lib/test_firmware.c:module_init(test_firmware_init);
> > > > > lib/test_hash.c:module_init(test_hash_init); /* Does everything */
> > > > > lib/test_hexdump.c:module_init(test_hexdump_init);
> > > > > lib/test_ida.c:module_init(ida_checks);
> > > > > lib/test_kasan.c:module_init(kmalloc_tests_init);
> > > > > lib/test_list_sort.c:module_init(list_sort_test);
> > > > > lib/test_memcat_p.c:module_init(test_memcat_p_init);
> > > > > lib/test_module.c:static int __init test_module_init(void)
> > > > > lib/test_module.c:module_init(test_module_init);
> > > > > lib/test_objagg.c:module_init(test_objagg_init);
> > > > > lib/test_overflow.c:static int __init test_module_init(void)
> > > > > lib/test_overflow.c:module_init(test_module_init);
> > > > > lib/test_parman.c:module_init(test_parman_init);
> > > > > lib/test_printf.c:module_init(test_printf_init);
> > > > > lib/test_rhashtable.c:module_init(test_rht_init);
> > > > > lib/test_siphash.c:module_init(siphash_test_init);
> > > > > lib/test_sort.c:module_init(test_sort_init);
> > > > > lib/test_stackinit.c:module_init(test_stackinit_init);
> > > > > lib/test_static_key_base.c:module_init(test_static_key_base_init);
> > > > > lib/test_static_keys.c:module_init(test_static_key_init);
> > > > > lib/test_string.c:module_init(string_selftest_init);
> > > > > lib/test_ubsan.c:module_init(test_ubsan_init);
> > > > > lib/test_user_copy.c:module_init(test_user_copy_init);
> > > > > lib/test_uuid.c:module_init(test_uuid_init);
> > > > > lib/test_vmalloc.c:module_init(vmalloc_test_init)
> > > > > lib/test_xarray.c:module_init(xarray_checks);
> > > > >
> > > > >
> > > > > > If they do exists, it seems like it would make sense to
> > > > > > convert those to kunit and have Kunit tests run-able in a VM or
> > > > > > baremetal instance.
> > > > >
> > > > > They already run in a VM.
> > > > >
> > > > > They already run on bare metal.
> > > > >
> > > > > They already run in UML.
> > > > >
> > > > > This is not to say that KUnit does not make sense. But I'm still trying
> > > > > to get a better description of the KUnit features (and there are
> > > > > some).
> > > >
> > > > FYI, I have a master student who looks at converting some of these to KTF, such as
> > for
> > > > instance the XArray tests, which lended themselves quite good to a semi-automated
> > > > conversion.
> > > >
> > > > The result is also a somewhat more compact code as well as the flexibility
> > > > provided by the Googletest executor and the KTF frameworks, such as running selected
> > > > tests, output formatting, debugging features etc.
> > >
> > > So is KTF already in upstream? Or is the plan to unify the KTF and
> >
> > I am not certain about KTF's upstream plans, but I assume that Knut
> > would have CC'ed me on the thread if he had started working on it.
>
> You are on the Github watcher list for KTF?

Yep! I have been since LPC in 2017.

> Quite a few of the commits there are preparatory for a forthcoming kernel patch set.
> I'll of course CC: you on the patch set when we send it to the list.

Awesome! I appreciate it.

>
> > > Kunit in-kernel test harnesses? Because there's tons of these
> >
> > No, no plan. Knut and I talked about this a good while ago and it
> > seemed that we had pretty fundamentally different approaches both in
> > terms of implementation and end goal. Combining them seemed pretty
> > infeasible, at least from a technical perspective. Anyway, I am sure
> > Knut would like to give him perspective on the matter and I don't want
> > to say too much without first giving him a chance to chime in on the
> > matter.
>
> I need more time to study KUnit details to say, but from a 10k feet perspective:
> I think at least there's a potential for some API unification, in using the same macro
> names. How about removing the KUNIT_ prefix to the test macros ;-) ?

Heh, heh. That's actually the way I had it in the earliest versions of
KUnit! But that was pretty much the very first thing everyone
complained about. I think I went from no prefix (like you are
suggesting) to TEST_* before the first version of the RFC at the
request of several people I was kicking the idea around with, and then
I think I was asked to go from TEST_* to KUNIT_* in the very first
revision of the RFC.

In short, I am sympathetic to your suggestion, but I think that is
non-negotiable at this point. The community has a clear policy in
place on the matter, and at this point I would really prefer not to
change all the symbol names again.
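
For concreteness, this is roughly what a test looks like with the
KUNIT_-prefixed macros. The KUNIT_EXPECT_*/KUNIT_ASSERT_* names are the
ones being discussed in this thread; the case/suite registration below
follows the form KUnit eventually settled on and may differ slightly from
this particular revision, so treat it as a sketch rather than the exact API:

#include <kunit/test.h>

static void example_add_test(struct kunit *test)
{
        /* Each expectation/assertion takes the struct kunit context first. */
        KUNIT_EXPECT_EQ(test, 3, 1 + 2);
        KUNIT_ASSERT_TRUE(test, 1 + 2 == 3);
}

static struct kunit_case example_test_cases[] = {
        KUNIT_CASE(example_add_test),
        {}
};

static struct kunit_suite example_test_suite = {
        .name = "example",
        .test_cases = example_test_cases,
};
kunit_test_suite(example_test_suite);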

> That would make the names shorter, saving typing when writing tests, and storage ;-)
> and also make the names more similar to KTF's, and those of user land unit test

You mean the Googletest/Googlemock expectations/assertions?

It's a great library (with not so great a name), but unfortunately it
is written in C++, which I think pretty much counts it out here.

> frameworks? Also it will make it possible to have functions compiling both with KTF and
> KUnit, facilitating moving code between the two.

I think that would be cool, but again, I don't think this will be
possible with Googletest/Googlemock.

>
> Also the string stream facilities of KUnit looks interesting to share.

I am glad you think so!

If your biggest concern on my side is test macro names (which I think
is a no-go as I mentioned above), I think we should be in pretty good
shape once you are ready to move forward. Besides, I have a lot more
KUnit patches coming after this: landing this patchset is just the
beginning. So how about we keep moving forward on this patchset?

2019-05-10 21:06:36

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On 5/10/19 3:43 AM, Theodore Ts'o wrote:
> On Thu, May 09, 2019 at 10:11:01PM -0700, Frank Rowand wrote:
>>>> You *can* run in-kernel test using modules; but there is no framework
>>>> for the in-kernel code found in the test modules, which means each of
>>>> the in-kernel code has to create their own in-kernel test
>>>> infrastructure.
>>
>> The kselftest in-kernel tests follow a common pattern. As such, there
>> is a framework.
>
> So we may have different definitions of "framework". In my book, code
> reuse by "cut and paste" does not make a framework. Could they be
> rewritten to *use* a framework, whether it be KTF or KUnit? Sure!
> But they are not using a framework *today*.
>
>> These next two paragraphs you ignored entirely in your reply:
>>
>>> Why create an entire new subsystem (KUnit) when you can add a header
>>> file (and .c code as appropriate) that outputs the proper TAP formatted
>>> results from kselftest kernel test modules?
>
> And you keep ignoring my main observation, which is that spinning up a
> VM, letting systemd start, mounting a root file system, etc., is all
> unnecessary overhead which takes time. This is important to me,
> because developer velocity is extremely important if you are doing
> test driven development.

No, I do not "keep ignoring my main observation". You made that
observation in an email of Thu, 9 May 2019 09:35:51 -0400. In my
reply to Tim's reply to your email, I wrote:

"< massive snip >

I'll reply in more detail to some other earlier messages in this thread
later.

This reply is an attempt to return to the intent of my original reply to
patch 0 of this series."


I have not directly replied to any of your other emails that have made
that observation (I may have replied to other emails that were replies
to such an email of yours, but not in the context of the overhead).
After this email, my next reply will be my delayed response to your
original email about overhead.

And the "mommy, he hit me first" argument does not contribute to a
constructive conversation about a kernel patch submittal.


> Yes, you can manually unload a module, recompile the module, somehow
> get the module back into the VM (perhaps by using virtio-9p), and then
> reload the module with the in-kernel test code, and then restart the
> test. BUT: (a) even if it is faster, it requires a lot of manual
> steps, and would be very hard to automate, and (b) if the test code
> ever OOPS or triggers a lockdep warning, you will need to restart the
> VM, and so this involves all of the VM restart overhead, plus trying
> to automate determining when you actually do need to restart the VM
> versus unloading and reloading the module. It's clunky.

I have mentioned before that the in-kernel kselftest tests can be
run in UML. You simply select the configure options to build them
into the kernel instead of building them as modules. Then build
a UML kernel and execute ("boot") the UML kernel.

This is exactly the same as for KUnit. No more overhead. No less
overhead. No more steps. No fewer steps.
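
For reference, the lib/test_*.c self-tests listed earlier all follow roughly
the pattern sketched below (the file and symbol names here are made up).
Because the checks hang off module_init(), the exact same code runs during
boot when the test is configured as built-in, which is why no userspace is
needed in the UML flow described above:

/* Hypothetical lib/test_foo.c, condensed from the style of the existing
 * lib/test_*.c self-tests.
 */
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/printk.h>

static int __init test_foo_init(void)
{
        int failures = 0;

        if (1 + 1 != 2) {
                pr_err("test_foo: addition check failed\n");
                failures++;
        }

        pr_info("test_foo: %s\n", failures ? "failures detected" : "all tests passed");
        /* When built as a module, returning an error unloads it again. */
        return failures ? -EINVAL : 0;
}

static void __exit test_foo_exit(void)
{
}

module_init(test_foo_init);
module_exit(test_foo_exit);
MODULE_LICENSE("GPL");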


>
> Being able to do the equivalent of "make && make check" is a really
> big deal. And "make check" needs to go fast.
>
> You keep ignoring this point, perhaps because you don't care about this
> issue? Which is fine, and why we may just need to agree to disagree.

No, I agree that fast test execution is useful.


>
> Cheers,
>
> - Ted
>
> P.S. Running scripts is Turing-equivalent, so it's self-evident that
> *anything* you can do with other test frameworks you can somehow do in
> kselftests. That argument doesn't impress me, and why I do consider
> it quite flippant. (Heck, /bin/vi is Turing equivalent so we could
> use vi as a kernel test framework. Or we could use emacs. Let's
> not. :-)

I have not been talking about running scripts, other than to the extent
that _one of the ways_ the in-kernel kselftests can be invoked is via
a script that loads the test module. The same exact in-kernel test can
instead be built into a UML kernel, as mentioned above.


> The question is whether it is the best and most efficient way to
> do that testing. And developer velocity is a really big part of my
> evaluation function when judging whether or not a test framework is fit
> for that purpose.
> .
>

2019-05-10 21:54:35

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On 5/9/19 3:20 PM, Logan Gunthorpe wrote:
>
>
> On 2019-05-09 3:42 p.m., Theodore Ts'o wrote:
>> On Thu, May 09, 2019 at 11:12:12AM -0700, Frank Rowand wrote:
>>>
>>>     "My understanding is that the intent of KUnit is to avoid booting a kernel on
>>>     real hardware or in a virtual machine.  That seems to be a matter of semantics
>>>     to me because isn't invoking a UML Linux just running the Linux kernel in
>>>     a different form of virtualization?
>>>
>>>     So I do not understand why KUnit is an improvement over kselftest.
>>>
>>>    ...
>>>
>>> What am I missing?"
>>
>> One major difference: kselftest requires a userspace environment;
>> it starts systemd, requires a root file system from which you can
>> load modules, etc. Kunit doesn't require a root file system;
>> doesn't require that you start systemd; doesn't allow you to run
>> arbitrary perl, python, bash, etc. scripts. As such, it's much
>> lighter weight than kselftest, and will have much less overhead
>> before you can start running tests. So it's not really the same
>> kind of virtualization.

I'm back to reply to this subthread, after a delay, as promised.


> I largely agree with everything Ted has said in this thread, but I
> wonder if we are conflating two different ideas, which is causing an
> impasse. From what I see, Kunit actually provides two different
> things:

> 1) An execution environment that can be run very quickly in userspace
> on tests in the kernel source. This speeds up the tests and gives a
> lot of benefit to developers using those tests because they can get
> feedback on their code changes a *lot* quicker.

kselftest in-kernel tests provide exactly the same when the tests are
configured as "built-in" code instead of as modules.


> 2) A framework to write unit tests that provides a lot of the same
> facilities as other common unit testing frameworks from userspace
> (ie. a runner that runs a list of tests and a bunch of helpers such
> as KUNIT_EXPECT_* to simplify test passes and failures).

> The first item from Kunit is novel and I see absolutely no overlap
> with anything kselftest does. It's also the valuable thing I'd like
> to see merged and grow.

The first item exists in kselftest.


> The second item, arguably, does have significant overlap with
> kselftest. Whether you are running short tests in a light weight UML
> environment or higher level tests in an heavier VM the two could be
> using the same framework for writing or defining in-kernel tests. It
> *may* also be valuable for some people to be able to run all the UML
> tests in the heavy VM environment along side other higher level
> tests.
>
> Looking at the selftests tree in the repo, we already have similar
> items to what Kunit is adding as I described in point (2) above.
> kselftest_harness.h contains macros like EXPECT_* and ASSERT_* with
> very similar intentions to the new KUNIT_EXPECT_* and KUNIT_ASSERT_*
> macros.

I might be wrong here because I have not dug deeply enough into the
code!!! Does this framework apply to the userspace tests, the
in-kernel tests, or both? My "not having dug enough GUESS" is that
these are for the user space tests (although if so, they could be
extended for in-kernel use also).

So I think this one maybe does not have an overlap between KUnit
and kselftest.
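
For what it's worth, a test written against kselftest_harness.h looks
roughly like the sketch below (the file path and test name are made up).
It builds into an ordinary userspace binary, with TEST_HARNESS_MAIN
expanding to a main() function, which supports the guess above:

/* Hypothetical tools/testing/selftests/foo/foo_test.c */
#include "../kselftest_harness.h"

TEST(addition_works)
{
        /* EXPECT_* records a failure and keeps going; ASSERT_* stops the test. */
        EXPECT_EQ(2, 1 + 1);
        ASSERT_NE(0, 1);
}

TEST_HARNESS_MAIN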


> However, the number of users of this harness appears to be quite
> small. Most of the code in the selftests tree seems to be a random
> mishmash of scripts and userspace code so it's not hard to see it as
> something completely different from the new Kunit:
> $ git grep --files-with-matches kselftest_harness.h *
> Documentation/dev-tools/kselftest.rst
> MAINTAINERS
> tools/testing/selftests/kselftest_harness.h
> tools/testing/selftests/net/tls.c
> tools/testing/selftests/rtc/rtctest.c
> tools/testing/selftests/seccomp/Makefile
> tools/testing/selftests/seccomp/seccomp_bpf.c
> tools/testing/selftests/uevent/Makefile
> tools/testing/selftests/uevent/uevent_filtering.c


> Thus, I can personally see a lot of value in integrating the kunit
> test framework with this kselftest harness. There's only a small
> number of users of the kselftest harness today, so one way or another
> it seems like getting this integrated early would be a good idea.
> Letting Kunit and Kselftests progress independently for a few years
> will only make this worse and may become something we end up
> regretting.

Yes, this I agree with.

-Frank

>
> Logan

2019-05-10 21:59:39

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On 5/9/19 2:42 PM, Theodore Ts'o wrote:
> On Thu, May 09, 2019 at 11:12:12AM -0700, Frank Rowand wrote:
>>
>> "My understanding is that the intent of KUnit is to avoid booting a kernel on
>> real hardware or in a virtual machine. That seems to be a matter of semantics
>> to me because isn't invoking a UML Linux just running the Linux kernel in
>> a different form of virtualization?
>>
>> So I do not understand why KUnit is an improvement over kselftest.
>>
>> ...
>>
>> What am I missing?"
>
> One major difference: kselftest requires a userspace environment; it
> starts systemd, requires a root file system from which you can load
> modules, etc. Kunit doesn't require a root file system; doesn't
> require that you start systemd; doesn't allow you to run arbitrary
> perl, python, bash, etc. scripts. As such, it's much lighter weight
> than kselftest, and will have much less overhead before you can start
> running tests. So it's not really the same kind of virtualization.
>
> Does this help?
>
> - Ted
>

I'm back to reply to this subthread, after a delay, as promised.

That is the type of information that I was looking for, so
thank you for the reply.

However, the reply is incorrect. Kselftest in-kernel tests (which
is the context here) can be configured as built in instead of as
a module, and built in a UML kernel. The UML kernel can boot,
running the in-kernel tests before UML attempts to invoke the
init process.

No userspace environment needed. So exactly the same overhead
as KUnit when invoked in that manner.

2019-05-10 22:08:30

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On 5/10/19 3:23 AM, Brendan Higgins wrote:
>> On Fri, May 10, 2019 at 7:49 AM Knut Omang <[email protected]> wrote:
>>>
>>> On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
>>>> On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
>>>>>
>>>>>
>>>>> On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
>>>>>> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
>>>>>>>
>>>>>>> The second item, arguably, does have significant overlap with kselftest.
>>>>>>> Whether you are running short tests in a light weight UML environment or
>>>>>>> higher level tests in an heavier VM the two could be using the same
>>>>>>> framework for writing or defining in-kernel tests. It *may* also be valuable
>>>>>>> for some people to be able to run all the UML tests in the heavy VM
>>>>>>> environment along side other higher level tests.
>>>>>>>
>>>>>>> Looking at the selftests tree in the repo, we already have similar items to
>>>>>>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
>>>>>>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
>>>>>>> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
>>>>>>>
>>>>>>> However, the number of users of this harness appears to be quite small. Most
>>>>>>> of the code in the selftests tree seems to be a random mismash of scripts
>>>>>>> and userspace code so it's not hard to see it as something completely
>>>>>>> different from the new Kunit:
>>>>>>>
>>>>>>> $ git grep --files-with-matches kselftest_harness.h *
>>>>>>
>>>>>> To the extent that we can unify how tests are written, I agree that
>>>>>> this would be a good thing. However, you should note that
>>>>>> kselftest_harness.h is currently assums that it will be included in
>>>>>> userspace programs. This is most obviously seen if you look closely
>>>>>> at the functions defined in the header files which makes calls to
>>>>>> fork(), abort() and fprintf().
>>>>>
>>>>> Ah, yes. I obviously did not dig deep enough. Using kunit for
>>>>> in-kernel tests and kselftest_harness for userspace tests seems like
>>>>> a sensible line to draw to me. Trying to unify kernel and userspace
>>>>> here sounds like it could be difficult so it's probably not worth
>>>>> forcing the issue unless someone wants to do some really fancy work
>>>>> to get it done.
>>>>>
>>>>> Based on some of the other commenters, I was under the impression
>>>>> that kselftests had in-kernel tests but I'm not sure where or if they
>>>>> exist.
>>>>
>>>> YES, kselftest has in-kernel tests. (Excuse the shouting...)
>>>>
>>>> Here is a likely list of them in the kernel source tree:
>>>>
>>>> $ grep module_init lib/test_*.c
>>>> lib/test_bitfield.c:module_init(test_bitfields)
>>>> lib/test_bitmap.c:module_init(test_bitmap_init);
>>>> lib/test_bpf.c:module_init(test_bpf_init);
>>>> lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
>>>> lib/test_firmware.c:module_init(test_firmware_init);
>>>> lib/test_hash.c:module_init(test_hash_init); /* Does everything */
>>>> lib/test_hexdump.c:module_init(test_hexdump_init);
>>>> lib/test_ida.c:module_init(ida_checks);
>>>> lib/test_kasan.c:module_init(kmalloc_tests_init);
>>>> lib/test_list_sort.c:module_init(list_sort_test);
>>>> lib/test_memcat_p.c:module_init(test_memcat_p_init);
>>>> lib/test_module.c:static int __init test_module_init(void)
>>>> lib/test_module.c:module_init(test_module_init);
>>>> lib/test_objagg.c:module_init(test_objagg_init);
>>>> lib/test_overflow.c:static int __init test_module_init(void)
>>>> lib/test_overflow.c:module_init(test_module_init);
>>>> lib/test_parman.c:module_init(test_parman_init);
>>>> lib/test_printf.c:module_init(test_printf_init);
>>>> lib/test_rhashtable.c:module_init(test_rht_init);
>>>> lib/test_siphash.c:module_init(siphash_test_init);
>>>> lib/test_sort.c:module_init(test_sort_init);
>>>> lib/test_stackinit.c:module_init(test_stackinit_init);
>>>> lib/test_static_key_base.c:module_init(test_static_key_base_init);
>>>> lib/test_static_keys.c:module_init(test_static_key_init);
>>>> lib/test_string.c:module_init(string_selftest_init);
>>>> lib/test_ubsan.c:module_init(test_ubsan_init);
>>>> lib/test_user_copy.c:module_init(test_user_copy_init);
>>>> lib/test_uuid.c:module_init(test_uuid_init);
>>>> lib/test_vmalloc.c:module_init(vmalloc_test_init)
>>>> lib/test_xarray.c:module_init(xarray_checks);
>>>>
>>>>
>>>>> If they do exists, it seems like it would make sense to
>>>>> convert those to kunit and have Kunit tests run-able in a VM or
>>>>> baremetal instance.
>>>>
>>>> They already run in a VM.
>>>>
>>>> They already run on bare metal.
>>>>
>>>> They already run in UML.
>>>>
>>>> This is not to say that KUnit does not make sense. But I'm still trying
>>>> to get a better description of the KUnit features (and there are
>>>> some).
>>>
>>> FYI, I have a master student who looks at converting some of these to KTF, such as for
>>> instance the XArray tests, which lended themselves quite good to a semi-automated
>>> conversion.
>>>
>>> The result is also a somewhat more compact code as well as the flexibility
>>> provided by the Googletest executor and the KTF frameworks, such as running selected
>>> tests, output formatting, debugging features etc.
>>
>> So is KTF already in upstream? Or is the plan to unify the KTF and
>
> I am not certain about KTF's upstream plans, but I assume that Knut
> would have CC'ed me on the thread if he had started working on it.
>
>> Kunit in-kernel test harnesses? Because there's tons of these
>
> No, no plan. Knut and I talked about this a good while ago and it
> seemed that we had pretty fundamentally different approaches both in
> terms of implementation and end goal. Combining them seemed pretty
> infeasible, at least from a technical perspective. Anyway, I am sure
> Knut would like to give him perspective on the matter and I don't want
> to say too much without first giving him a chance to chime in on the
> matter.
>
> Nevertheless, I hope you don't see resolving this as a condition for
> accepting this patchset. I had several rounds of RFC on KUnit, and no
> one had previously brought this up.

I seem to recall a request in reply to the KUnit RFC email threads to
work together.

However, whether that impacts acceptance of this patch set is up to
the maintainer and how she wants to resolve the potential collision
of KUnit and KTF (if there is indeed any sort of collision).


>> in-kernel unit tests already, and every merge we get more (Frank's
>> list didn't even look into drivers or anywhere else, e.g. it's missing
>> the locking self tests I worked on in the past), and a more structured
>> approach would really be good.
>
> Well, that's what I am trying to do. I hope you like it!
>
> Cheers!
> .
>

2019-05-10 22:16:13

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On 5/10/19 9:17 AM, Logan Gunthorpe wrote:
>
>
> On 2019-05-09 11:18 p.m., Frank Rowand wrote:
>
>> YES, kselftest has in-kernel tests. (Excuse the shouting...)
>
> Cool. From my cursory look, in my opinion, these would be greatly
> improved by converting them to the framework Brendan is proposing for Kunit.
>
>>> If they do exists, it seems like it would make sense to
>>> convert those to kunit and have Kunit tests run-able in a VM or
>>> baremetal instance.
>>
>> They already run in a VM.
>>
>> They already run on bare metal.
>>
>> They already run in UML.
>
> Simply being able to run in UML is not the only thing here. Kunit
> provides the infrastructure to quickly build, run and report results for
> all the tests from userspace without needing to worry about the details
> of building and running a UML kernel, then parsing dmesg to figure out
> what tests were run or not.

Yes. But that is not the only environment that KUnit must support to be
of use to me for devicetree unittests (this is not new, Brendan is quite
aware of my needs and is not ignoring them).


>> This is not to say that KUnit does not make sense. But I'm still trying
>> to get a better description of the KUnit features (and there are
>> some).
>
> So read the patches, or the documentation[1] or the LWN article[2]. It's
> pretty well described in a lot of places -- that's one of the big
> advantages of it. In contrast, few people seem to have any concept of
> what kselftests are or where they are or how to run them (I was
> surprised to find the in-kernel tests in the lib tree).
>
> Logan
>
> [1] https://google.github.io/kunit-docs/third_party/kernel/docs/
> [2] https://lwn.net/Articles/780985/

I have been following the RFC versions. I have installed the RFC patches
and run them to the extent that they worked (devicetree unittests were
a guinea pig for test conversion in the RFC series, but the converted
tests did not work). I read portions of the code while trying to
understand the unittests conversion. I made review comments based on
the portion of the code that I did read. I have read the documentation
(very nice btw, as I have said before, but should be expanded).

My comment is that the description submitted with the patch series should
be fuller -- KUnit potentially has a lot of nice attributes, and I
still think I have only scratched the surface. The average reviewer
may have even less in-depth knowledge than I do. And as I have
commented before, I keep diving into areas that I had no previous
experience with (such as kselftest) to be able to properly judge this
patch series.

2019-05-10 22:29:06

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On 5/10/19 1:54 PM, Brendan Higgins wrote:
> On Fri, May 10, 2019 at 5:13 AM Knut Omang <[email protected]> wrote:
>> On Fri, 2019-05-10 at 03:23 -0700, Brendan Higgins wrote:
>>>> On Fri, May 10, 2019 at 7:49 AM Knut Omang <[email protected]> wrote:
>>>>>
>>>>> On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
>>>>>> On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
>>>>>>>> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
>>>>>>>>>
>>>>>>>>> The second item, arguably, does have significant overlap with kselftest.
>>>>>>>>> Whether you are running short tests in a light weight UML environment or
>>>>>>>>> higher level tests in an heavier VM the two could be using the same
>>>>>>>>> framework for writing or defining in-kernel tests. It *may* also be valuable
>>>>>>>>> for some people to be able to run all the UML tests in the heavy VM
>>>>>>>>> environment along side other higher level tests.
>>>>>>>>>
>>>>>>>>> Looking at the selftests tree in the repo, we already have similar items to
>>>>>>>>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
>>>>>>>>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
>>>>>>>>> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
>>>>>>>>>
>>>>>>>>> However, the number of users of this harness appears to be quite small. Most
>>>>>>>>> of the code in the selftests tree seems to be a random mismash of scripts
>>>>>>>>> and userspace code so it's not hard to see it as something completely
>>>>>>>>> different from the new Kunit:
>>>>>>>>>
>>>>>>>>> $ git grep --files-with-matches kselftest_harness.h *
>>>>>>>>
>>>>>>>> To the extent that we can unify how tests are written, I agree that
>>>>>>>> this would be a good thing. However, you should note that
>>>>>>>> kselftest_harness.h is currently assums that it will be included in
>>>>>>>> userspace programs. This is most obviously seen if you look closely
>>>>>>>> at the functions defined in the header files which makes calls to
>>>>>>>> fork(), abort() and fprintf().
>>>>>>>
>>>>>>> Ah, yes. I obviously did not dig deep enough. Using kunit for
>>>>>>> in-kernel tests and kselftest_harness for userspace tests seems like
>>>>>>> a sensible line to draw to me. Trying to unify kernel and userspace
>>>>>>> here sounds like it could be difficult so it's probably not worth
>>>>>>> forcing the issue unless someone wants to do some really fancy work
>>>>>>> to get it done.
>>>>>>>
>>>>>>> Based on some of the other commenters, I was under the impression
>>>>>>> that kselftests had in-kernel tests but I'm not sure where or if they
>>>>>>> exist.
>>>>>>
>>>>>> YES, kselftest has in-kernel tests. (Excuse the shouting...)
>>>>>>
>>>>>> Here is a likely list of them in the kernel source tree:
>>>>>>
>>>>>> $ grep module_init lib/test_*.c
>>>>>> lib/test_bitfield.c:module_init(test_bitfields)
>>>>>> lib/test_bitmap.c:module_init(test_bitmap_init);
>>>>>> lib/test_bpf.c:module_init(test_bpf_init);
>>>>>> lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
>>>>>> lib/test_firmware.c:module_init(test_firmware_init);
>>>>>> lib/test_hash.c:module_init(test_hash_init); /* Does everything */
>>>>>> lib/test_hexdump.c:module_init(test_hexdump_init);
>>>>>> lib/test_ida.c:module_init(ida_checks);
>>>>>> lib/test_kasan.c:module_init(kmalloc_tests_init);
>>>>>> lib/test_list_sort.c:module_init(list_sort_test);
>>>>>> lib/test_memcat_p.c:module_init(test_memcat_p_init);
>>>>>> lib/test_module.c:static int __init test_module_init(void)
>>>>>> lib/test_module.c:module_init(test_module_init);
>>>>>> lib/test_objagg.c:module_init(test_objagg_init);
>>>>>> lib/test_overflow.c:static int __init test_module_init(void)
>>>>>> lib/test_overflow.c:module_init(test_module_init);
>>>>>> lib/test_parman.c:module_init(test_parman_init);
>>>>>> lib/test_printf.c:module_init(test_printf_init);
>>>>>> lib/test_rhashtable.c:module_init(test_rht_init);
>>>>>> lib/test_siphash.c:module_init(siphash_test_init);
>>>>>> lib/test_sort.c:module_init(test_sort_init);
>>>>>> lib/test_stackinit.c:module_init(test_stackinit_init);
>>>>>> lib/test_static_key_base.c:module_init(test_static_key_base_init);
>>>>>> lib/test_static_keys.c:module_init(test_static_key_init);
>>>>>> lib/test_string.c:module_init(string_selftest_init);
>>>>>> lib/test_ubsan.c:module_init(test_ubsan_init);
>>>>>> lib/test_user_copy.c:module_init(test_user_copy_init);
>>>>>> lib/test_uuid.c:module_init(test_uuid_init);
>>>>>> lib/test_vmalloc.c:module_init(vmalloc_test_init)
>>>>>> lib/test_xarray.c:module_init(xarray_checks);
>>>>>>
>>>>>>
>>>>>>> If they do exists, it seems like it would make sense to
>>>>>>> convert those to kunit and have Kunit tests run-able in a VM or
>>>>>>> baremetal instance.
>>>>>>
>>>>>> They already run in a VM.
>>>>>>
>>>>>> They already run on bare metal.
>>>>>>
>>>>>> They already run in UML.
>>>>>>
>>>>>> This is not to say that KUnit does not make sense. But I'm still trying
>>>>>> to get a better description of the KUnit features (and there are
>>>>>> some).
>>>>>
>>>>> FYI, I have a master student who looks at converting some of these to KTF, such as
>>> for
>>>>> instance the XArray tests, which lended themselves quite good to a semi-automated
>>>>> conversion.
>>>>>
>>>>> The result is also a somewhat more compact code as well as the flexibility
>>>>> provided by the Googletest executor and the KTF frameworks, such as running selected
>>>>> tests, output formatting, debugging features etc.
>>>>
>>>> So is KTF already in upstream? Or is the plan to unify the KTF and
>>>
>>> I am not certain about KTF's upstream plans, but I assume that Knut
>>> would have CC'ed me on the thread if he had started working on it.
>>
>> You are on the Github watcher list for KTF?
>
> Yep! I have been since LPC in 2017.
>
>> Quite a few of the commits there are preparatory for a forthcoming kernel patch set.
>> I'll of course CC: you on the patch set when we send it to the list.
>
> Awesome! I appreciate it.
>
>>
>>>> Kunit in-kernel test harnesses? Because there's tons of these
>>>
>>> No, no plan. Knut and I talked about this a good while ago and it
>>> seemed that we had pretty fundamentally different approaches both in
>>> terms of implementation and end goal. Combining them seemed pretty
>>> infeasible, at least from a technical perspective. Anyway, I am sure
>>> Knut would like to give him perspective on the matter and I don't want
>>> to say too much without first giving him a chance to chime in on the
>>> matter.
>>
>> I need more time to study KUnit details to say, but from a 10k feet perspective:
>> I think at least there's a potential for some API unification, in using the same macro
>> names. How about removing the KUNIT_ prefix to the test macros ;-) ?
>
> Heh, heh. That's actually the way I had it in the earliest versions of
> KUnit! But that was pretty much the very first thing everyone
> complained about. I think I went from no prefix (like you are
> suggesting) to TEST_* before the first version of the RFC at the
> request of several people I was kicking the idea around with, and then
> I think I was asked to go from TEST_* to KUNIT_* in the very first
> revision of the RFC.
>
> In short, I am sympathetic to your suggestion, but I think that is
> non-negotiable at this point. The community has a clear policy in
> place on the matter, and at this point I would really prefer not to
> change all the symbol names again.

This would not be the first time that a patch submitter has been
told "do B instead of A" for version 1, then told "do C instead of
B" for version 2, then told "do A instead of C" for the final version.

It sucks, but it happens.

As an aside, can you point to where the "clear policy in place" is
documented, and what the policy is?

-Frank


>> That would make the names shorter, saving typing when writing tests, and storage ;-)
>> and also make the names more similar to KTF's, and those of user land unit test
>
> You mean the Googletest/Googlemock expectations/assertions?
>
> It's a great library (with not so great a name), but unfortunately it
> is written in C++, which I think pretty much counts it out here.
>
>> frameworks? Also it will make it possible to have functions compiling both with KTF and
>> KUnit, facilitating moving code between the two.
>
> I think that would be cool, but again, I don't think this will be
> possible with Googletest/Googlemock.
>
>>
>> Also the string stream facilities of KUnit looks interesting to share.
>
> I am glad you think so!
>
> If your biggest concern on my side is test macro names (which I think
> is a no-go as I mentioned above), I think we should be in pretty good
> shape once you are ready to move forward. Besides, I have a lot more
> KUnit patches coming after this: landing this patchset is just the
> beginning. So how about we keep moving forward on this patchset?
>

2019-05-11 06:35:20

by Knut Omang

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Fri, 2019-05-10 at 15:18 -0700, Frank Rowand wrote:
> On 5/10/19 1:54 PM, Brendan Higgins wrote:
> > On Fri, May 10, 2019 at 5:13 AM Knut Omang <[email protected]> wrote:
> >> On Fri, 2019-05-10 at 03:23 -0700, Brendan Higgins wrote:
> >>>> On Fri, May 10, 2019 at 7:49 AM Knut Omang <[email protected]> wrote:
> >>>>>
> >>>>> On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
> >>>>>> On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>> On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
> >>>>>>>> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
> >>>>>>>>>
> >>>>>>>>> The second item, arguably, does have significant overlap with kselftest.
> >>>>>>>>> Whether you are running short tests in a light weight UML environment or
> >>>>>>>>> higher level tests in an heavier VM the two could be using the same
> >>>>>>>>> framework for writing or defining in-kernel tests. It *may* also be valuable
> >>>>>>>>> for some people to be able to run all the UML tests in the heavy VM
> >>>>>>>>> environment along side other higher level tests.
> >>>>>>>>>
> >>>>>>>>> Looking at the selftests tree in the repo, we already have similar items to
> >>>>>>>>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
> >>>>>>>>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
> >>>>>>>>> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
> >>>>>>>>>
> >>>>>>>>> However, the number of users of this harness appears to be quite small. Most
> >>>>>>>>> of the code in the selftests tree seems to be a random mismash of scripts
> >>>>>>>>> and userspace code so it's not hard to see it as something completely
> >>>>>>>>> different from the new Kunit:
> >>>>>>>>>
> >>>>>>>>> $ git grep --files-with-matches kselftest_harness.h *
> >>>>>>>>
> >>>>>>>> To the extent that we can unify how tests are written, I agree that
> >>>>>>>> this would be a good thing. However, you should note that
> >>>>>>>> kselftest_harness.h is currently assums that it will be included in
> >>>>>>>> userspace programs. This is most obviously seen if you look closely
> >>>>>>>> at the functions defined in the header files which makes calls to
> >>>>>>>> fork(), abort() and fprintf().
> >>>>>>>
> >>>>>>> Ah, yes. I obviously did not dig deep enough. Using kunit for
> >>>>>>> in-kernel tests and kselftest_harness for userspace tests seems like
> >>>>>>> a sensible line to draw to me. Trying to unify kernel and userspace
> >>>>>>> here sounds like it could be difficult so it's probably not worth
> >>>>>>> forcing the issue unless someone wants to do some really fancy work
> >>>>>>> to get it done.
> >>>>>>>
> >>>>>>> Based on some of the other commenters, I was under the impression
> >>>>>>> that kselftests had in-kernel tests but I'm not sure where or if they
> >>>>>>> exist.
> >>>>>>
> >>>>>> YES, kselftest has in-kernel tests. (Excuse the shouting...)
> >>>>>>
> >>>>>> Here is a likely list of them in the kernel source tree:
> >>>>>>
> >>>>>> $ grep module_init lib/test_*.c
> >>>>>> lib/test_bitfield.c:module_init(test_bitfields)
> >>>>>> lib/test_bitmap.c:module_init(test_bitmap_init);
> >>>>>> lib/test_bpf.c:module_init(test_bpf_init);
> >>>>>> lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
> >>>>>> lib/test_firmware.c:module_init(test_firmware_init);
> >>>>>> lib/test_hash.c:module_init(test_hash_init); /* Does everything */
> >>>>>> lib/test_hexdump.c:module_init(test_hexdump_init);
> >>>>>> lib/test_ida.c:module_init(ida_checks);
> >>>>>> lib/test_kasan.c:module_init(kmalloc_tests_init);
> >>>>>> lib/test_list_sort.c:module_init(list_sort_test);
> >>>>>> lib/test_memcat_p.c:module_init(test_memcat_p_init);
> >>>>>> lib/test_module.c:static int __init test_module_init(void)
> >>>>>> lib/test_module.c:module_init(test_module_init);
> >>>>>> lib/test_objagg.c:module_init(test_objagg_init);
> >>>>>> lib/test_overflow.c:static int __init test_module_init(void)
> >>>>>> lib/test_overflow.c:module_init(test_module_init);
> >>>>>> lib/test_parman.c:module_init(test_parman_init);
> >>>>>> lib/test_printf.c:module_init(test_printf_init);
> >>>>>> lib/test_rhashtable.c:module_init(test_rht_init);
> >>>>>> lib/test_siphash.c:module_init(siphash_test_init);
> >>>>>> lib/test_sort.c:module_init(test_sort_init);
> >>>>>> lib/test_stackinit.c:module_init(test_stackinit_init);
> >>>>>> lib/test_static_key_base.c:module_init(test_static_key_base_init);
> >>>>>> lib/test_static_keys.c:module_init(test_static_key_init);
> >>>>>> lib/test_string.c:module_init(string_selftest_init);
> >>>>>> lib/test_ubsan.c:module_init(test_ubsan_init);
> >>>>>> lib/test_user_copy.c:module_init(test_user_copy_init);
> >>>>>> lib/test_uuid.c:module_init(test_uuid_init);
> >>>>>> lib/test_vmalloc.c:module_init(vmalloc_test_init)
> >>>>>> lib/test_xarray.c:module_init(xarray_checks);
> >>>>>>
> >>>>>>
> >>>>>>> If they do exists, it seems like it would make sense to
> >>>>>>> convert those to kunit and have Kunit tests run-able in a VM or
> >>>>>>> baremetal instance.
> >>>>>>
> >>>>>> They already run in a VM.
> >>>>>>
> >>>>>> They already run on bare metal.
> >>>>>>
> >>>>>> They already run in UML.
> >>>>>>
> >>>>>> This is not to say that KUnit does not make sense. But I'm still trying
> >>>>>> to get a better description of the KUnit features (and there are
> >>>>>> some).
> >>>>>
> >>>>> FYI, I have a master student who looks at converting some of these to KTF, such as
> >>> for
> >>>>> instance the XArray tests, which lended themselves quite good to a semi-automated
> >>>>> conversion.
> >>>>>
> >>>>> The result is also a somewhat more compact code as well as the flexibility
> >>>>> provided by the Googletest executor and the KTF frameworks, such as running
> selected
> >>>>> tests, output formatting, debugging features etc.
> >>>>
> >>>> So is KTF already in upstream? Or is the plan to unify the KTF and
> >>>
> >>> I am not certain about KTF's upstream plans, but I assume that Knut
> >>> would have CC'ed me on the thread if he had started working on it.
> >>
> >> You are on the Github watcher list for KTF?
> >
> > Yep! I have been since LPC in 2017.
> >
> >> Quite a few of the commits there are preparatory for a forthcoming kernel patch set.
> >> I'll of course CC: you on the patch set when we send it to the list.
> >
> > Awesome! I appreciate it.
> >
> >>
> >>>> Kunit in-kernel test harnesses? Because there's tons of these
> >>>
> >>> No, no plan. Knut and I talked about this a good while ago and it
> >>> seemed that we had pretty fundamentally different approaches both in
> >>> terms of implementation and end goal. Combining them seemed pretty
> >>> infeasible, at least from a technical perspective. Anyway, I am sure
> >>> Knut would like to give him perspective on the matter and I don't want
> >>> to say too much without first giving him a chance to chime in on the
> >>> matter.
> >>
> >> I need more time to study KUnit details to say, but from a 10k feet perspective:
> >> I think at least there's a potential for some API unification, in using the same
> macro
> >> names. How about removing the KUNIT_ prefix to the test macros ;-) ?
> >
> > Heh, heh. That's actually the way I had it in the earliest versions of
> > KUnit! But that was pretty much the very first thing everyone
> > complained about. I think I went from no prefix (like you are
> > suggesting) to TEST_* before the first version of the RFC at the
> > request of several people I was kicking the idea around with, and then
> > I think I was asked to go from TEST_* to KUNIT_* in the very first
> > revision of the RFC.
> >
> > In short, I am sympathetic to your suggestion, but I think that is
> > non-negotiable at this point. The community has a clear policy in
> > place on the matter, and at this point I would really prefer not to
> > change all the symbol names again.
>
> This would not be the first time that a patch submitter has been
> told "do B instead of A" for version 1, then told "do C instead of
> B" for version 2, then told "do A instead of C" for the final version.
>
> It sucks, but it happens.

Sorry, I must have overlooked the "B instead of A" instance - otherwise
I would have objected to it. In addition to the recognizability
and portability issues, I think it is important that these primitives are
not unnecessarily long, since they will be written a *lot* of times if things go
our way.

And the reason for using unique names elsewhere is to be able to co-exist with
other components with the same needs. Tests, I believe, are in a different
category: they are not supposed to be part of production kernels,
and we want them to be used all over, which should warrant having the ASSERT_ and EXPECT_
prefixes "reserved" for the purpose - so I would urge some pragmatism here!

> As an aside, can you point to where the "clear policy in place" is
> documented, and what the policy is?
>
> -Frank
>
>
> >> That would make the names shorter, saving typing when writing tests, and storage ;-)
> >> and also make the names more similar to KTF's, and those of user land unit test
> >
> > You mean the Googletest/Googlemock expectations/assertions?
> >
> > It's a great library (with not so great a name), but unfortunately it
> > is written in C++, which I think pretty much counts it out here.

Using a similar syntax is a good thing, since it makes it easier for people
who write tests in user land frameworks to contribute to the kernel tests.
And if, let's say, someone later comes up with a way to run the KUnit tests in "real"
user land within Googletest ;-) then less editing would be needed.

> >> frameworks? Also it will make it possible to have functions compiling both with KTF
> and
> >> KUnit, facilitating moving code between the two.
> >
> > I think that would be cool, but again, I don't think this will be
> > possible with Googletest/Googlemock.

I was thinking of moves between KUnit tests and KTF tests, kernel code only.
Some test functions may easily be usable both in a "pure" mocking environment
and in an integrated setting with hardware/driver/userspace dependencies.
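
As a purely hypothetical sketch of what "the same test body compiling with
both" could look like: the shim names and the CONFIG_ symbol below are made
up, and only the KUNIT_* arm maps onto a real API - the fallback arm is a
plain printk, so no KTF macro names are assumed here:

#ifdef CONFIG_MY_TESTS_USE_KUNIT
#include <kunit/test.h>
typedef struct kunit *my_test_ctx_t;
#define MY_EXPECT_EQ(ctx, a, b) KUNIT_EXPECT_EQ(ctx, a, b)
#else
#include <linux/printk.h>
typedef void *my_test_ctx_t;
#define MY_EXPECT_EQ(ctx, a, b) \
        do { (void)(ctx); if ((a) != (b)) pr_err("expectation failed: " #a " != " #b "\n"); } while (0)
#endif

/* Framework-neutral test body: under KUnit, ctx is the struct kunit pointer
 * passed into the test case; in the fallback build it is simply unused.
 */
static void check_addition(my_test_ctx_t ctx)
{
        MY_EXPECT_EQ(ctx, 4, 2 + 2);
}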

> >>
> >> Also the string stream facilities of KUnit looks interesting to share.
> >
> > I am glad you think so!
> >
> > If your biggest concern on my side is test macro names (which I think
> > is a no-go as I mentioned above), I think we should be in pretty good
> > shape once you are ready to move forward. Besides, I have a lot more
> > KUnit patches coming after this: landing this patchset is just the
> > beginning. So how about we keep moving forward on this patchset?

I think the importance of a well-thought-through
test API definition should not be underestimated
- the sooner we can unify and establish a common base, the better.

My other concern is, as mentioned earlier, whether UML is really that
different from just running in a VM wrt debugging support, and whether
there exists a better approach based on automatically generating
an environment where the test code and the source under test
can be compiled into a normal user land program. But it seems
there are enough clients of the UML approach to justify it
as a lightweight entry point. We want to make it easy to test the code,
and inexcusable not to; having a low barrier to entry is certainly good.
Other than that, I really would need to spend some more time with
the details of KUnit to verify my so far fairly
shallow observations!

Thanks,
Knut

2019-05-11 06:59:06

by Knut Omang

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Fri, 2019-05-10 at 14:59 -0700, Frank Rowand wrote:
> On 5/10/19 3:23 AM, Brendan Higgins wrote:
> >> On Fri, May 10, 2019 at 7:49 AM Knut Omang <[email protected]> wrote:
> >>>
> >>> On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
> >>>> On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
> >>>>>
> >>>>>
> >>>>> On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
> >>>>>> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
> >>>>>>>
> >>>>>>> The second item, arguably, does have significant overlap with kselftest.
> >>>>>>> Whether you are running short tests in a light weight UML environment or
> >>>>>>> higher level tests in an heavier VM the two could be using the same
> >>>>>>> framework for writing or defining in-kernel tests. It *may* also be valuable
> >>>>>>> for some people to be able to run all the UML tests in the heavy VM
> >>>>>>> environment along side other higher level tests.
> >>>>>>>
> >>>>>>> Looking at the selftests tree in the repo, we already have similar items to
> >>>>>>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
> >>>>>>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
> >>>>>>> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
> >>>>>>>
> >>>>>>> However, the number of users of this harness appears to be quite small. Most
> >>>>>>> of the code in the selftests tree seems to be a random mismash of scripts
> >>>>>>> and userspace code so it's not hard to see it as something completely
> >>>>>>> different from the new Kunit:
> >>>>>>>
> >>>>>>> $ git grep --files-with-matches kselftest_harness.h *
> >>>>>>
> >>>>>> To the extent that we can unify how tests are written, I agree that
> >>>>>> this would be a good thing. However, you should note that
> >>>>>> kselftest_harness.h is currently assums that it will be included in
> >>>>>> userspace programs. This is most obviously seen if you look closely
> >>>>>> at the functions defined in the header files which makes calls to
> >>>>>> fork(), abort() and fprintf().
> >>>>>
> >>>>> Ah, yes. I obviously did not dig deep enough. Using kunit for
> >>>>> in-kernel tests and kselftest_harness for userspace tests seems like
> >>>>> a sensible line to draw to me. Trying to unify kernel and userspace
> >>>>> here sounds like it could be difficult so it's probably not worth
> >>>>> forcing the issue unless someone wants to do some really fancy work
> >>>>> to get it done.
> >>>>>
> >>>>> Based on some of the other commenters, I was under the impression
> >>>>> that kselftests had in-kernel tests but I'm not sure where or if they
> >>>>> exist.
> >>>>
> >>>> YES, kselftest has in-kernel tests. (Excuse the shouting...)
> >>>>
> >>>> Here is a likely list of them in the kernel source tree:
> >>>>
> >>>> $ grep module_init lib/test_*.c
> >>>> lib/test_bitfield.c:module_init(test_bitfields)
> >>>> lib/test_bitmap.c:module_init(test_bitmap_init);
> >>>> lib/test_bpf.c:module_init(test_bpf_init);
> >>>> lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
> >>>> lib/test_firmware.c:module_init(test_firmware_init);
> >>>> lib/test_hash.c:module_init(test_hash_init); /* Does everything */
> >>>> lib/test_hexdump.c:module_init(test_hexdump_init);
> >>>> lib/test_ida.c:module_init(ida_checks);
> >>>> lib/test_kasan.c:module_init(kmalloc_tests_init);
> >>>> lib/test_list_sort.c:module_init(list_sort_test);
> >>>> lib/test_memcat_p.c:module_init(test_memcat_p_init);
> >>>> lib/test_module.c:static int __init test_module_init(void)
> >>>> lib/test_module.c:module_init(test_module_init);
> >>>> lib/test_objagg.c:module_init(test_objagg_init);
> >>>> lib/test_overflow.c:static int __init test_module_init(void)
> >>>> lib/test_overflow.c:module_init(test_module_init);
> >>>> lib/test_parman.c:module_init(test_parman_init);
> >>>> lib/test_printf.c:module_init(test_printf_init);
> >>>> lib/test_rhashtable.c:module_init(test_rht_init);
> >>>> lib/test_siphash.c:module_init(siphash_test_init);
> >>>> lib/test_sort.c:module_init(test_sort_init);
> >>>> lib/test_stackinit.c:module_init(test_stackinit_init);
> >>>> lib/test_static_key_base.c:module_init(test_static_key_base_init);
> >>>> lib/test_static_keys.c:module_init(test_static_key_init);
> >>>> lib/test_string.c:module_init(string_selftest_init);
> >>>> lib/test_ubsan.c:module_init(test_ubsan_init);
> >>>> lib/test_user_copy.c:module_init(test_user_copy_init);
> >>>> lib/test_uuid.c:module_init(test_uuid_init);
> >>>> lib/test_vmalloc.c:module_init(vmalloc_test_init)
> >>>> lib/test_xarray.c:module_init(xarray_checks);
> >>>>
> >>>>
> >>>>> If they do exists, it seems like it would make sense to
> >>>>> convert those to kunit and have Kunit tests run-able in a VM or
> >>>>> baremetal instance.
> >>>>
> >>>> They already run in a VM.
> >>>>
> >>>> They already run on bare metal.
> >>>>
> >>>> They already run in UML.
> >>>>
> >>>> This is not to say that KUnit does not make sense. But I'm still trying
> >>>> to get a better description of the KUnit features (and there are
> >>>> some).
> >>>
> >>> FYI, I have a master student who looks at converting some of these to KTF, such as
> for
> >>> instance the XArray tests, which lended themselves quite good to a semi-automated
> >>> conversion.
> >>>
> >>> The result is also a somewhat more compact code as well as the flexibility
> >>> provided by the Googletest executor and the KTF frameworks, such as running selected
> >>> tests, output formatting, debugging features etc.
> >>
> >> So is KTF already in upstream? Or is the plan to unify the KTF and
> >
> > I am not certain about KTF's upstream plans, but I assume that Knut
> > would have CC'ed me on the thread if he had started working on it.
> >
> >> Kunit in-kernel test harnesses? Because there's tons of these
> >
> > No, no plan. Knut and I talked about this a good while ago and it
> > seemed that we had pretty fundamentally different approaches both in
> > terms of implementation and end goal. Combining them seemed pretty
> > infeasible, at least from a technical perspective. Anyway, I am sure
> > Knut would like to give him perspective on the matter and I don't want
> > to say too much without first giving him a chance to chime in on the
> > matter.
> >
> > Nevertheless, I hope you don't see resolving this as a condition for
> > accepting this patchset. I had several rounds of RFC on KUnit, and no
> > one had previously brought this up.
>
> I seem to recall a request in reply to the KUnit RFC email threads to
> work together.

You recall right.
I wanted us to work together to refine a common approach.
I still think that's possible.

> However whether that impacts acceptance of this patch set is up to
> the maintainer and how she wants to resolve the potential collision
> of KUnit and KTF (if there is indeed any sort of collision).

I believe there's overlap and potential for unification and code sharing.
My concern is to make sure that that can happen without disrupting too
many test users. I'd really like to get some more time to explore that.

It strikes me that the main difference between the two approaches
lies in the way reporting is done by default. KUnit handles all the
reporting itself, while KTF relies on Googletest for that, so a lot more code
in KUnit revolves around that part, while with KTF we have focused more on
features to enable writing powerful and effective tests.

The reporting part can possibly be made configurable: reporting with printk or reporting
via netlink to user space. In fact, as a debugging option KTF already
supports printk reporting, which can be enabled via sysfs.

If macros can have the same syntax, then there are
likely features in KTF that KUnit users would benefit from too.
But this of course has to be tried out.

Knut

> >> in-kernel unit tests already, and every merge we get more (Frank's
> >> list didn't even look into drivers or anywhere else, e.g. it's missing
> >> the locking self tests I worked on in the past), and a more structured
> >> approach would really be good.
> >
> > Well, that's what I am trying to do. I hope you like it!
> >
> > Cheers!
> > .
> >
>

2019-05-11 17:37:11

by Theodore Ts'o

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Fri, May 10, 2019 at 02:12:40PM -0700, Frank Rowand wrote:
> However, the reply is incorrect. Kselftest in-kernel tests (which
> is the context here) can be configured as built in instead of as
> a module, and built in a UML kernel. The UML kernel can boot,
> running the in-kernel tests before UML attempts to invoke the
> init process.

Um, Citation needed?

I don't see any evidence for this in the kselftest documentation, nor
do I see any evidence of this in the kselftest Makefiles.

There exist test modules in the kernel that run before the init
scripts run --- but they are not strictly speaking part of kselftests,
and they do not have any kind of infrastructure. As noted, the
kselftest_harness header file fundamentally assumes that you are
running test code in userspace.

- Ted

2019-05-13 16:14:26

by Daniel Vetter

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Sat, May 11, 2019 at 01:33:44PM -0400, Theodore Ts'o wrote:
> On Fri, May 10, 2019 at 02:12:40PM -0700, Frank Rowand wrote:
> > However, the reply is incorrect. Kselftest in-kernel tests (which
> > is the context here) can be configured as built in instead of as
> > a module, and built in a UML kernel. The UML kernel can boot,
> > running the in-kernel tests before UML attempts to invoke the
> > init process.
>
> Um, Citation needed?
>
> I don't see any evidence for this in the kselftest documentation, nor
> do I see any evidence of this in the kselftest Makefiles.
>
> There exists test modules in the kernel that run before the init
> scripts run --- but that's not strictly speaking part of kselftests,
> and do not have any kind of infrastructure. As noted, the
> kselftests_harness header file fundamentally assumes that you are
> running test code in userspace.

Yeah I really like the "no userspace required at all" design of kunit,
while still collecting results in a well-defined way (unlike the current
self-tests that just run when you load the module, with maybe some
ad-hoc kselftest wrapper around them to collect the results).

What I want to do long-term is to run these kernel unit tests as part of
the build-testing, most likely in gitlab (sooner or later, for drm.git
only ofc). So that people get their pull requests (and patch series, we
have some ideas to tie this into patchwork) automatically tested for this
super basic stuff.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

2019-05-14 06:05:46

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Mon, May 13, 2019 at 04:44:51PM +0200, Daniel Vetter wrote:
> On Sat, May 11, 2019 at 01:33:44PM -0400, Theodore Ts'o wrote:
> > On Fri, May 10, 2019 at 02:12:40PM -0700, Frank Rowand wrote:
> > > However, the reply is incorrect. Kselftest in-kernel tests (which
> > > is the context here) can be configured as built in instead of as
> > > a module, and built in a UML kernel. The UML kernel can boot,
> > > running the in-kernel tests before UML attempts to invoke the
> > > init process.
> >
> > Um, Citation needed?
> >
> > I don't see any evidence for this in the kselftest documentation, nor
> > do I see any evidence of this in the kselftest Makefiles.
> >
> > There exists test modules in the kernel that run before the init
> > scripts run --- but that's not strictly speaking part of kselftests,
> > and do not have any kind of infrastructure. As noted, the
> > kselftests_harness header file fundamentally assumes that you are
> > running test code in userspace.
>
> Yeah I really like the "no userspace required at all" design of kunit,
> while still collecting results in a well-defined way (unless the current
> self-test that just run when you load the module, with maybe some
> kselftest ad-hoc wrapper around to collect the results).
>
> What I want to do long-term is to run these kernel unit tests as part of
> the build-testing, most likely in gitlab (sooner or later, for drm.git

Totally! This is part of the reason I have been insisting on a minimum
of UML compatibility for all unit tests. If you can sufficiently constrain
the environment that is required for tests to run in, it makes it much
easier not only for a human to run your tests, but also much easier
for an automated service to run them.

I actually have a prototype presubmit already working on my
"stable/non-upstream" branch. You can checkout what presubmit results
look like here[1][2].

> only ofc). So that people get their pull requests (and patch series, we
> have some ideas to tie this into patchwork) automatically tested for this

Might that be Snowpatch[3]? I talked to Russell, the creator of Snowpatch,
and he seemed pretty open to collaboration.

Before I heard about Snowpatch, I had an intern write a translation
layer that made Prow (the presubmit service that I used in the prototype
above) work with LKML[4].

I am not married to either approach, but I think between the two of
them, most of the initial legwork has been done to make presubmit on
LKML a reality.

> super basic stuff.

I am really excited to hear back on what you think!

Cheers!

[1] https://kunit-review.googlesource.com/c/linux/+/1509/10#message-7bfa40efb132e15c8388755c273837559911425c
[2] https://kunit-review.googlesource.com/c/linux/+/1509/10#message-a6784496eafff442ac98fb068bf1a0f36ee73509
[3] https://developer.ibm.com/open/projects/snowpatch/
[4] https://kunit.googlesource.com/prow-lkml/

2019-05-14 06:43:03

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Sat, May 11, 2019 at 08:17:47AM +0200, Knut Omang wrote:
> On Fri, 2019-05-10 at 15:18 -0700, Frank Rowand wrote:
> > On 5/10/19 1:54 PM, Brendan Higgins wrote:
> > > On Fri, May 10, 2019 at 5:13 AM Knut Omang <[email protected]> wrote:
> > >> On Fri, 2019-05-10 at 03:23 -0700, Brendan Higgins wrote:
> > >>>> On Fri, May 10, 2019 at 7:49 AM Knut Omang <[email protected]> wrote:
> > >>>>>
> > >>>>> On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
> > >>>>>> On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
> > >>>>>>>
> > >>>>>>>
> > >>>>>>> On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
> > >>>>>>>> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
> > >>>>>>>>>
> > >>>>>>>>> The second item, arguably, does have significant overlap with kselftest.
> > >>>>>>>>> Whether you are running short tests in a lightweight UML environment or
> > >>>>>>>>> higher-level tests in a heavier VM, the two could be using the same
> > >>>>>>>>> framework for writing or defining in-kernel tests. It *may* also be valuable
> > >>>>>>>>> for some people to be able to run all the UML tests in the heavy VM
> > >>>>>>>>> environment along side other higher level tests.
> > >>>>>>>>>
> > >>>>>>>>> Looking at the selftests tree in the repo, we already have similar items to
> > >>>>>>>>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
> > >>>>>>>>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
> > >>>>>>>>> the new KUNIT_EXPECT_* and KUNIT_ASSERT_* macros.
> > >>>>>>>>>
> > >>>>>>>>> However, the number of users of this harness appears to be quite small. Most
> > >>>>>>>>> of the code in the selftests tree seems to be a random mishmash of scripts
> > >>>>>>>>> and userspace code so it's not hard to see it as something completely
> > >>>>>>>>> different from the new Kunit:
> > >>>>>>>>>
> > >>>>>>>>> $ git grep --files-with-matches kselftest_harness.h *
> > >>>>>>>>
> > >>>>>>>> To the extent that we can unify how tests are written, I agree that
> > >>>>>>>> this would be a good thing. However, you should note that
> > >>>>>>>> kselftest_harness.h currently assumes that it will be included in
> > >>>>>>>> userspace programs. This is most obviously seen if you look closely
> > >>>>>>>> at the functions defined in the header files, which make calls to
> > >>>>>>>> fork(), abort() and fprintf().
> > >>>>>>>
> > >>>>>>> Ah, yes. I obviously did not dig deep enough. Using kunit for
> > >>>>>>> in-kernel tests and kselftest_harness for userspace tests seems like
> > >>>>>>> a sensible line to draw to me. Trying to unify kernel and userspace
> > >>>>>>> here sounds like it could be difficult so it's probably not worth
> > >>>>>>> forcing the issue unless someone wants to do some really fancy work
> > >>>>>>> to get it done.
> > >>>>>>>
> > >>>>>>> Based on some of the other commenters, I was under the impression
> > >>>>>>> that kselftests had in-kernel tests but I'm not sure where or if they
> > >>>>>>> exist.
> > >>>>>>
> > >>>>>> YES, kselftest has in-kernel tests. (Excuse the shouting...)
> > >>>>>>
> > >>>>>> Here is a likely list of them in the kernel source tree:
> > >>>>>>
> > >>>>>> $ grep module_init lib/test_*.c
> > >>>>>> lib/test_bitfield.c:module_init(test_bitfields)
> > >>>>>> lib/test_bitmap.c:module_init(test_bitmap_init);
> > >>>>>> lib/test_bpf.c:module_init(test_bpf_init);
> > >>>>>> lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
> > >>>>>> lib/test_firmware.c:module_init(test_firmware_init);
> > >>>>>> lib/test_hash.c:module_init(test_hash_init); /* Does everything */
> > >>>>>> lib/test_hexdump.c:module_init(test_hexdump_init);
> > >>>>>> lib/test_ida.c:module_init(ida_checks);
> > >>>>>> lib/test_kasan.c:module_init(kmalloc_tests_init);
> > >>>>>> lib/test_list_sort.c:module_init(list_sort_test);
> > >>>>>> lib/test_memcat_p.c:module_init(test_memcat_p_init);
> > >>>>>> lib/test_module.c:static int __init test_module_init(void)
> > >>>>>> lib/test_module.c:module_init(test_module_init);
> > >>>>>> lib/test_objagg.c:module_init(test_objagg_init);
> > >>>>>> lib/test_overflow.c:static int __init test_module_init(void)
> > >>>>>> lib/test_overflow.c:module_init(test_module_init);
> > >>>>>> lib/test_parman.c:module_init(test_parman_init);
> > >>>>>> lib/test_printf.c:module_init(test_printf_init);
> > >>>>>> lib/test_rhashtable.c:module_init(test_rht_init);
> > >>>>>> lib/test_siphash.c:module_init(siphash_test_init);
> > >>>>>> lib/test_sort.c:module_init(test_sort_init);
> > >>>>>> lib/test_stackinit.c:module_init(test_stackinit_init);
> > >>>>>> lib/test_static_key_base.c:module_init(test_static_key_base_init);
> > >>>>>> lib/test_static_keys.c:module_init(test_static_key_init);
> > >>>>>> lib/test_string.c:module_init(string_selftest_init);
> > >>>>>> lib/test_ubsan.c:module_init(test_ubsan_init);
> > >>>>>> lib/test_user_copy.c:module_init(test_user_copy_init);
> > >>>>>> lib/test_uuid.c:module_init(test_uuid_init);
> > >>>>>> lib/test_vmalloc.c:module_init(vmalloc_test_init)
> > >>>>>> lib/test_xarray.c:module_init(xarray_checks);
> > >>>>>>
> > >>>>>>
> > >>>>>>> If they do exists, it seems like it would make sense to
> > >>>>>>> convert those to kunit and have Kunit tests run-able in a VM or
> > >>>>>>> baremetal instance.
> > >>>>>>
> > >>>>>> They already run in a VM.
> > >>>>>>
> > >>>>>> They already run on bare metal.
> > >>>>>>
> > >>>>>> They already run in UML.
> > >>>>>>
> > >>>>>> This is not to say that KUnit does not make sense. But I'm still trying
> > >>>>>> to get a better description of the KUnit features (and there are
> > >>>>>> some).
> > >>>>>
> > >>>>> FYI, I have a master student who looks at converting some of these to KTF, such as
> > >>> for
> > >>>>> instance the XArray tests, which lent themselves quite well to a semi-automated
> > >>>>> conversion.
> > >>>>>
> > >>>>> The result is also a somewhat more compact code as well as the flexibility
> > >>>>> provided by the Googletest executor and the KTF frameworks, such as running
> > selected
> > >>>>> tests, output formatting, debugging features etc.
> > >>>>
> > >>>> So is KTF already in upstream? Or is the plan to unify the KTF and
> > >>>
> > >>> I am not certain about KTF's upstream plans, but I assume that Knut
> > >>> would have CC'ed me on the thread if he had started working on it.
> > >>
> > >> You are on the Github watcher list for KTF?
> > >
> > > Yep! I have been since LPC in 2017.
> > >
> > >> Quite a few of the commits there are preparatory for a forthcoming kernel patch set.
> > >> I'll of course CC: you on the patch set when we send it to the list.
> > >
> > > Awesome! I appreciate it.
> > >
> > >>
> > >>>> Kunit in-kernel test harnesses? Because there's tons of these
> > >>>
> > >>> No, no plan. Knut and I talked about this a good while ago and it
> > >>> seemed that we had pretty fundamentally different approaches both in
> > >>> terms of implementation and end goal. Combining them seemed pretty
> > >>> infeasible, at least from a technical perspective. Anyway, I am sure
> > >>> Knut would like to give his perspective on the matter and I don't want
> > >>> to say too much without first giving him a chance to chime in on the
> > >>> matter.
> > >>
> > >> I need more time to study KUnit details to say, but from a 10k feet perspective:
> > >> I think at least there's a potential for some API unification, in using the same
> > macro
> > >> names. How about removing the KUNIT_ prefix to the test macros ;-) ?
> > >
> > > Heh, heh. That's actually the way I had it in the earliest versions of
> > > KUnit! But that was pretty much the very first thing everyone
> > > complained about. I think I went from no prefix (like you are
> > > suggesting) to TEST_* before the first version of the RFC at the
> > > request of several people I was kicking the idea around with, and then
> > > I think I was asked to go from TEST_* to KUNIT_* in the very first
> > > revision of the RFC.
> > >
> > > In short, I am sympathetic to your suggestion, but I think that is
> > > non-negotiable at this point. The community has a clear policy in
> > > place on the matter, and at this point I would really prefer not to
> > > change all the symbol names again.
> >
> > This would not be the first time that a patch submitter has been
> > told "do B instead of A" for version 1, then told "do C instead of
> > B" for version 2, then told "do A instead of C" for the final version.
> >
> > It sucks, but it happens.

Sure, I have been there before, but I thought those original opinions
were based on a pretty well established convention. Also, I don't think
those original opinions have changed. If they have, please chime in and
correct me.

> Sorry, I must have overlooked the B instead of A instance - otherwise
> I would have objected against it - in addition to the recognizability
> and portability issue I think it is important that these primitives are
> not unnecessarily long, since they will be written a *lot* of times if things go
> our way.

I get that. That is why I thought I might have been worthy of an
exception. It is definitely easier to write EXPECT_EQ(...) than
KUNIT_EXPECT_EQ(...) or TEST_EXPECT_EQ(...), but no one else seemed to
agree in the past.
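
Just to make the difference concrete, here is roughly how the same check
reads under the different prefixes that have been discussed (illustrative
only, not code taken from the patchset; add() is a stand-in for whatever
function is under test):

  static void add_test_basic(struct kunit *test)
  {
          /* As spelled in this patchset: */
          KUNIT_EXPECT_EQ(test, 3, add(1, 2));

          /*
           * The shorter spellings discussed above would read:
           *         EXPECT_EQ(test, 3, add(1, 2));
           * or      TEST_EXPECT_EQ(test, 3, add(1, 2));
           */
  }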

Even now, it is only you (and maybe Frank?) telling me to change it; I
would like to hear Shuah, Kees, or Greg chime in on this before I
go about actually changing it back, as I distinctly remember each of them
telling me that I should go with KUNIT_*.

> And the reason for using unique names elsewhere is to be able to co-exist with
> other components with the same needs. In the case of tests, I believe they are

That's the policy I was talking about.

> in a different category: they are not supposed to be part of production kernels,
> and we want them to be used all over. That should warrant having the ASSERT_ and EXPECT_
> prefixes "reserved" for the purpose, so I would urge some pragmatism here!

I don't disagree. Again, this is what I initially proposed, but no one
agreed with me on this point.

I am not saying no. Nevertheless, this is a pretty consistently applied
pattern for new stuff, and I would really prefer not to make waves on
something that really doesn't matter all that much.

So like I said, if we can get some more discussion on this and it seems
like broad consensus says we can reserve ASSERT_* and EXPECT_*, then I
will go along with it.

> > As an aside, can you point to where the "clear policy in place" is
> > documented, and what the policy is?

Global namespacing policy that Knut mentioned above.

> >
> > -Frank
> >
> >
> > >> That would make the names shorter, saving typing when writing tests, and storage ;-)
> > >> and also make the names more similar to KTF's, and those of user land unit test
> > >
> > > You mean the Googletest/Googlemock expectations/assertions?
> > >
> > > It's a great library (with not so great a name), but unfortunately it
> > > is written in C++, which I think pretty much counts it out here.
>
> Using a similar syntax is a good thing, since it makes it easier for people
> who write tests in user land frameworks to contribute to the kernel tests.
> And if, let's say, someone later comes up with a way to run the KUnit tests in "real"
> user land within Googletest ;-) then less editing would be needed...
>
> > >> frameworks? Also it will make it possible to have functions compiling both with KTF
> > and
> > >> KUnit, facilitating moving code between the two.
> > >
> > > I think that would be cool, but again, I don't think this will be
> > > possible with Googletest/Googlemock.
>
> I was thinking of moves between KUnit tests and KTF tests, kernel code only.
> Some test functions may easily be usable both in a "pure" mocking environment
> and in an integrated setting with hardware/driver/userspace dependencies.

Speaking for KUnit, you are right. I got KUnit working on other
architectures a couple revisions ago (apparently I didn't advertise that
well enough).

> > >>
> > >> Also the string stream facilities of KUnit looks interesting to share.
> > >
> > > I am glad you think so!
> > >
> > > If your biggest concern on my side is test macro names (which I think
> > > is a no-go as I mentioned above), I think we should be in pretty good
> > > shape once you are ready to move forward. Besides, I have a lot more
> > > KUnit patches coming after this: landing this patchset is just the
> > > beginning. So how about we keep moving forward on this patchset?
>
> I think the importance of a well thought through
> test API definition is not to be underestimated

I agree. That is why I am saying that I think we are in good shape if we
are only arguing about the name.

> - the sooner we can unify and establish a common base, the better,
> I think.

Again, I agree. That is what I am trying to do here.

>
> My other concern is, as mentioned earlier, whether UML is really that
> different from just running in a VM wrt debugging support, and that
> there exists a better approach based on automatically generating
> an environment where the test code and the source under test
> can be compiled in a normal user land program. But it seems
> there are enough clients of the UML approach to justify it
> as a lightweight entry. We want to make it easy to test the code and
> inexcusable not to; having a low bar of entry is certainly good.

Yep.

> Other than that, I really would need to spend some more time with
> the details on KUnit to verify my so far fairly
> shallow observations!

Are we arguing about anything other than naming schemes here? If that's
it, we can refactor the under-the-hood stuff later, if we even need to
at all.

Cheers!

2019-05-14 08:01:41

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Sat, May 11, 2019 at 08:43:23AM +0200, Knut Omang wrote:
> On Fri, 2019-05-10 at 14:59 -0700, Frank Rowand wrote:
> > On 5/10/19 3:23 AM, Brendan Higgins wrote:
> > >> On Fri, May 10, 2019 at 7:49 AM Knut Omang <[email protected]> wrote:
> > >>>
> > >>> On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
> > >>>> On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
> > >>>>>
> > >>>>>
> > >>>>> On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
> > >>>>>> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
> > >>>>>>>
> > >>>>>>> The second item, arguably, does have significant overlap with kselftest.
> > >>>>>>> Whether you are running short tests in a lightweight UML environment or
> > >>>>>>> higher-level tests in a heavier VM, the two could be using the same
> > >>>>>>> framework for writing or defining in-kernel tests. It *may* also be valuable
> > >>>>>>> for some people to be able to run all the UML tests in the heavy VM
> > >>>>>>> environment along side other higher level tests.
> > >>>>>>>
> > >>>>>>> Looking at the selftests tree in the repo, we already have similar items to
> > >>>>>>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
> > >>>>>>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
> > >>>>>>> the new KUNIT_EXPECT_* and KUNIT_ASSERT_* macros.
> > >>>>>>>
> > >>>>>>> However, the number of users of this harness appears to be quite small. Most
> > >>>>>>> of the code in the selftests tree seems to be a random mishmash of scripts
> > >>>>>>> and userspace code so it's not hard to see it as something completely
> > >>>>>>> different from the new Kunit:
> > >>>>>>>
> > >>>>>>> $ git grep --files-with-matches kselftest_harness.h *
> > >>>>>>
> > >>>>>> To the extent that we can unify how tests are written, I agree that
> > >>>>>> this would be a good thing. However, you should note that
> > >>>>>> kselftest_harness.h currently assumes that it will be included in
> > >>>>>> userspace programs. This is most obviously seen if you look closely
> > >>>>>> at the functions defined in the header files, which make calls to
> > >>>>>> fork(), abort() and fprintf().
> > >>>>>
> > >>>>> Ah, yes. I obviously did not dig deep enough. Using kunit for
> > >>>>> in-kernel tests and kselftest_harness for userspace tests seems like
> > >>>>> a sensible line to draw to me. Trying to unify kernel and userspace
> > >>>>> here sounds like it could be difficult so it's probably not worth
> > >>>>> forcing the issue unless someone wants to do some really fancy work
> > >>>>> to get it done.
> > >>>>>
> > >>>>> Based on some of the other commenters, I was under the impression
> > >>>>> that kselftests had in-kernel tests but I'm not sure where or if they
> > >>>>> exist.
> > >>>>
> > >>>> YES, kselftest has in-kernel tests. (Excuse the shouting...)
> > >>>>
> > >>>> Here is a likely list of them in the kernel source tree:
> > >>>>
> > >>>> $ grep module_init lib/test_*.c
> > >>>> lib/test_bitfield.c:module_init(test_bitfields)
> > >>>> lib/test_bitmap.c:module_init(test_bitmap_init);
> > >>>> lib/test_bpf.c:module_init(test_bpf_init);
> > >>>> lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
> > >>>> lib/test_firmware.c:module_init(test_firmware_init);
> > >>>> lib/test_hash.c:module_init(test_hash_init); /* Does everything */
> > >>>> lib/test_hexdump.c:module_init(test_hexdump_init);
> > >>>> lib/test_ida.c:module_init(ida_checks);
> > >>>> lib/test_kasan.c:module_init(kmalloc_tests_init);
> > >>>> lib/test_list_sort.c:module_init(list_sort_test);
> > >>>> lib/test_memcat_p.c:module_init(test_memcat_p_init);
> > >>>> lib/test_module.c:static int __init test_module_init(void)
> > >>>> lib/test_module.c:module_init(test_module_init);
> > >>>> lib/test_objagg.c:module_init(test_objagg_init);
> > >>>> lib/test_overflow.c:static int __init test_module_init(void)
> > >>>> lib/test_overflow.c:module_init(test_module_init);
> > >>>> lib/test_parman.c:module_init(test_parman_init);
> > >>>> lib/test_printf.c:module_init(test_printf_init);
> > >>>> lib/test_rhashtable.c:module_init(test_rht_init);
> > >>>> lib/test_siphash.c:module_init(siphash_test_init);
> > >>>> lib/test_sort.c:module_init(test_sort_init);
> > >>>> lib/test_stackinit.c:module_init(test_stackinit_init);
> > >>>> lib/test_static_key_base.c:module_init(test_static_key_base_init);
> > >>>> lib/test_static_keys.c:module_init(test_static_key_init);
> > >>>> lib/test_string.c:module_init(string_selftest_init);
> > >>>> lib/test_ubsan.c:module_init(test_ubsan_init);
> > >>>> lib/test_user_copy.c:module_init(test_user_copy_init);
> > >>>> lib/test_uuid.c:module_init(test_uuid_init);
> > >>>> lib/test_vmalloc.c:module_init(vmalloc_test_init)
> > >>>> lib/test_xarray.c:module_init(xarray_checks);
> > >>>>
> > >>>>
> > >>>>> If they do exists, it seems like it would make sense to
> > >>>>> convert those to kunit and have Kunit tests run-able in a VM or
> > >>>>> baremetal instance.
> > >>>>
> > >>>> They already run in a VM.
> > >>>>
> > >>>> They already run on bare metal.
> > >>>>
> > >>>> They already run in UML.
> > >>>>
> > >>>> This is not to say that KUnit does not make sense. But I'm still trying
> > >>>> to get a better description of the KUnit features (and there are
> > >>>> some).
> > >>>
> > >>> FYI, I have a master student who looks at converting some of these to KTF, such as
> > for
> > >>> instance the XArray tests, which lent themselves quite well to a semi-automated
> > >>> conversion.
> > >>>
> > >>> The result is also a somewhat more compact code as well as the flexibility
> > >>> provided by the Googletest executor and the KTF frameworks, such as running selected
> > >>> tests, output formatting, debugging features etc.
> > >>
> > >> So is KTF already in upstream? Or is the plan to unify the KTF and
> > >
> > > I am not certain about KTF's upstream plans, but I assume that Knut
> > > would have CC'ed me on the thread if he had started working on it.
> > >
> > >> Kunit in-kernel test harnesses? Because there's tons of these
> > >
> > > No, no plan. Knut and I talked about this a good while ago and it
> > > seemed that we had pretty fundamentally different approaches both in
> > > terms of implementation and end goal. Combining them seemed pretty
> > > infeasible, at least from a technical perspective. Anyway, I am sure
> > > Knut would like to give his perspective on the matter and I don't want
> > > to say too much without first giving him a chance to chime in on the
> > > matter.
> > >
> > > Nevertheless, I hope you don't see resolving this as a condition for
> > > accepting this patchset. I had several rounds of RFC on KUnit, and no
> > > one had previously brought this up.
> >
> > I seem to recall a request in reply to the KUnit RFC email threads to
> > work together.
>
> You recall right.
> I wanted us to work together to refine a common approach.

Are you talking about right after we met at LPC in 2017? We talked about
working together, but I thought you didn't really see much value in what
I was doing with KUnit.

Or are you talking about our first discussion on the RFC[1]? In that
case, you seemed to assert that KTF's approach was fundamentally
different and superior. You also didn't really seem interested in trying
to merge KTF at the time. The discussion concluded with Luis suggesting
that we should just keep working on both separately and let individual
users decide.

What changed since then?

> I still think that's possible.

I hope you aren't asserting that I have been unwilling to work with you.
At the outset (before I sent out the RFC), I was willing to let you take
the lead on things, but you didn't seem very interested in anything that
I brought to the table and, most importantly, were not interested in
sending out any code for discussion on the mailing lists.

After I started the RFC, most of your comments were very high level and,
at least to me, seemed like pretty fundamental philosophical differences
in our approaches, which is fine! I think Luis recognized that and I
think that is part of why he suggested for us to continue to work on
them separately.

I am not trying to be mean, but I don't really know what you expected me
to do. I don't recall any specific changes you suggested I make in
my code (I think you only ever made a single comment on a thread on
anything other than the cover letter). And you never proposed any of
your own code demonstrating an alternative way of doing things.

Nevertheless, you are of course fully welcome to critique anything I
have proposed, or to propose your own way of doing things, but we need
to do so here on the mailing lists.

> > However whether that impacts acceptance of this patch set is up to
> > the maintainer and how she wants to resolve the potential collision
> > of KUnit and KTF (if there is indeed any sort of collision).
>
> I believe there's overlap and potential for unification and code sharing.
> My concern is to make sure that that can happen without disrupting too
> many test users. I'd really like to get some more time to explore that.

It's possible. Like I said before, our approaches are pretty
fundamentally different. It sounds like there might be some overlap
between our expectation/assertion macros, but we probably cannot use the
ones from Googletest without introducing C++ into the kernel, which is a
no-no.

> It strikes me that the main difference in the two approaches
> lies in the way reporting is done by default. Since KUnit handles all the
> reporting itself, while KTF relies on Googletest for that, a lot more code
> in KUnit revolves around that part, while with KTF we have focused more on
> features to enable writing powerful and effective tests.

I have a lot more features on the way, but what is in this initial
patchset are the absolutely core things needed to produce an MVP, and
yes, most of that code revolves around reporting and the fundamental
structure of tests.
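
To make "the fundamental structure of tests" concrete, a minimal test
written against this infrastructure looks roughly like the following
(a sketch only; add() is a trivial stand-in for code under test, and the
suite/registration macros are whatever the patchset itself defines, so
they are only described in the trailing comment rather than spelled out):

  #include <kunit/test.h>

  static int add(int a, int b)
  {
          return a + b;
  }

  static void add_test_basic(struct kunit *test)
  {
          /* An expectation logs a failure but lets the case continue. */
          KUNIT_EXPECT_EQ(test, 3, add(1, 2));
          KUNIT_EXPECT_EQ(test, 0, add(0, 0));
  }

  static void add_test_negative(struct kunit *test)
  {
          KUNIT_EXPECT_EQ(test, -3, add(-1, -2));
  }

  /*
   * The patchset additionally provides macros to collect these cases into
   * a named suite and register it, so the framework can run them and
   * report pass/fail results in one place.
   */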

Developing cool features is great, but you need to start off on a basis
that the community will accept. Sometimes you need to play around
downstream a bit to develop your ideas, but you always want to get
upstream feedback as early as possible; it's always possible that
someone might propose something, or point out something that breaks a
fundamental assumption that all your later features depend on.

> The reporting part can possibly be made configurable: Reporting with printk or reporting
> via netlink to user space. In fact, as a KTF debugging option KTF already
> supports printk reporting which can be enabled via sysfs.

Yep, I intentionally left an interface so printk (well actually
vprintk_emit) can be swapped out with another mechanism.
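
The shape of that indirection is roughly the following (names are made up
for illustration; the real hook and its exact signature live in the
patchset itself):

  #include <linux/printk.h>
  #include <stdarg.h>

  /* A replaceable output hook; the default backend is printk-based. */
  struct example_reporter {
          void (*report)(const char *fmt, va_list args);
  };

  static void example_report_printk(const char *fmt, va_list args)
  {
          vprintk(fmt, args);
  }

  /*
   * A netlink (or any other) backend could be swapped in by pointing
   * ->report at a different function.
   */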

> If macros can have the same syntax, then there's
> likely features in KTF that KUnit users would benefit from too.
> But this of course have to be tried out.

Cool. Looking forward to hearing about it.

Cheers!

[1] https://lkml.org/lkml/2018/11/24/170

> > >> in-kernel unit tests already, and every merge we get more (Frank's
> > >> list didn't even look into drivers or anywhere else, e.g. it's missing
> > >> the locking self tests I worked on in the past), and a more structured
> > >> approach would really be good.
> > >
> > > Well, that's what I am trying to do. I hope you like it!
> > >
> > > Cheers!
> > > .
> > >
> >
>

2019-05-14 08:40:03

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Fri, May 10, 2019 at 03:13:40PM -0700, Frank Rowand wrote:
> On 5/10/19 9:17 AM, Logan Gunthorpe wrote:
> >
> >
> > On 2019-05-09 11:18 p.m., Frank Rowand wrote:
> >
> >> YES, kselftest has in-kernel tests. (Excuse the shouting...)
> >
> > Cool. From my cursory look, in my opinion, these would be greatly
> > improved by converting them to the framework Brendan is proposing for Kunit.
> >
> >>> If they do exists, it seems like it would make sense to
> >>> convert those to kunit and have Kunit tests run-able in a VM or
> >>> baremetal instance.
> >>
> >> They already run in a VM.
> >>
> >> They already run on bare metal.
> >>
> >> They already run in UML.
> >
> > Simply being able to run in UML is not the only thing here. Kunit
> > provides the infrastructure to quickly build, run and report results for
> > all the tests from userspace without needing to worry about the details
> > of building and running a UML kernel, then parsing dmesg to figure out
> > what tests were run or not.
>
> Yes. But that is not the only environment that KUnit must support to be
> of use to me for devicetree unittests (this is not new, Brendan is quite
> aware of my needs and is not ignoring them).
>
>
> >> This is not to say that KUnit does not make sense. But I'm still trying
> >> to get a better description of the KUnit features (and there are
> >> some).
> >
> > So read the patches, or the documentation[1] or the LWN article[2]. It's
> > pretty well described in a lot of places -- that's one of the big
> > advantages of it. In contrast, few people seem to have any concept of
> > what kselftests are or where they are or how to run them (I was
> > surprised to find the in-kernel tests in the lib tree).
> >
> > Logan
> >
> > [1] https://google.github.io/kunit-docs/third_party/kernel/docs/
> > [2] https://lwn.net/Articles/780985/
>
> I have been following the RFC versions. I have installed the RFC patches
> and run them to the extent that they worked (devicetree unittests were
> a guinea pig for test conversion in the RFC series, but the converted
> tests did not work). I read portions of the code while trying to
> understand the unittests conversion. I made review comments based on
> the portion of the code that I did read. I have read the documentation
> (very nice btw, as I have said before, but should be expanded).
>
> My comment is that the description to submit the patch series should
> be fuller -- KUnit potentially has a lot of nice attributes, and I
> still think I have only scratched the surface. The average reviewer
> may have even less in-depth knowledge than I do. And as I have
> commented before, I keep diving into areas that I had no previous
> experience with (such as kselftest) to be able to properly judge this
> patch series.

Thanks for the praise! That means a lot coming from you!

I really cannot disagree that I could use more documentation. You can
pretty much always use more documentation. Nevertheless, is there a
particular part of the documentation that you think is lacking?

It sounds like there was a pretty long discussion here about a number
of different things.

Do you want a better description of what unit testing is and how KUnit
helps make it possible?

Do you want more of an explanation distinguishing KUnit from kselftest?
How so?

Do you just want better documentation on how to test the kernel? What
tools we have at our disposal and when to use what tools?

Thanks!

2019-05-14 08:44:35

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Wed, May 08, 2019 at 05:58:49PM -0700, Frank Rowand wrote:
> Hi Ted,
>
> On 5/7/19 10:22 AM, Theodore Ts'o wrote:
> > On Tue, May 07, 2019 at 10:01:19AM +0200, Greg KH wrote:
> Not very helpful to cut the text here, plus not explicitly indicating that
> text was cut (yes, I know the ">>>" will be a clue for the careful reader),
> losing the set up for my question.
>
>
> >>> My understanding is that the intent of KUnit is to avoid booting a kernel on
> >>> real hardware or in a virtual machine. That seems to be a matter of semantics
> >>> to me because isn't invoking a UML Linux just running the Linux kernel in
> >>> a different form of virtualization?
> >>>
> >>> So I do not understand why KUnit is an improvement over kselftest.
> >>>
> >>> It seems to me that KUnit is just another piece of infrastructure that I
> >>> am going to have to be familiar with as a kernel developer. More overhead,
> >>> more information to stuff into my tiny little brain.
> >>>
> >>> I would guess that some developers will focus on just one of the two test
> >>> environments (and some will focus on both), splitting the development
> >>> resources instead of pooling them on a common infrastructure.
> >>>
> >>> What am I missing?
> >>
> >> kselftest provides no in-kernel framework for testing kernel code
> >> specifically. That should be what kunit provides, an "easy" way to
> >> write in-kernel tests for things.
> >>
> >> Brendan, did I get it right?
> >
> > Yes, that's basically right. You don't *have* to use KUnit. It's
>
> If KUnit is added to the kernel, and a subsystem that I am submitting
> code for has chosen to use KUnit instead of kselftest, then yes, I do
> *have* to use KUnit if my submission needs to contain a test for the
> code unless I want to convince the maintainer that somehow my case
> is special and I prefer to use kselftest instead of KUnit.
>
>
> > supposed to be a simple way to run a large number of small tests that
> > for specific small components in a system.
>
> kselftest also supports running a subset of tests. That subset of tests
> can also be a large number of small tests. There is nothing inherent
> in KUnit vs kselftest in this regard, as far as I am aware.
>
>
> > For example, I currently use xfstests using KVM and GCE to test all of
> > ext4. These tests require using multiple 5GB and 20GB virtual disks,
> > and it works by mounting ext4 file systems and exercising ext4 through
> > the system call interfaces, using userspace tools such as fsstress,
> > fsx, fio, etc. It requires time overhead to start the VM, create and
> > allocate virtual disks, etc. For example, to run a single 3-second
> > xfstest (generic/001), it requires a full 10 seconds to run it via
> > kvm-xfstests.
> >
>
>
> > KUnit is something else; it's specifically intended to allow you to
> > create lightweight tests quickly and easily, and by reducing the
> > effort needed to write and run unit tests, hopefully we'll have a lot
> > more of them and thus improve kernel quality.
>
> The same is true of kselftest. You can create lightweight tests in
> kselftest.
>
>
> > As an example, I have a volunteer working on developing KUnit tests
> > for ext4. We're going to start by testing the ext4 extent status
> > tree. The source code is at fs/ext4/extent_status.c; it's
> > approximately 1800 LOC. The Kunit tests for the extent status tree
> > will exercise all of the corner cases for the various extent status
> > tree functions --- e.g., ext4_es_insert_delayed_block(),
> > ext4_es_remove_extent(), ext4_es_cache_extent(), etc. And it will do
> > this in isolation without our needing to create a test file system or
> > using a test block device.
> >
>
> > Next we'll test the ext4 block allocator, again in isolation. To test
> > the block allocator we will have to write "mock functions" which
> > simulate reading allocation bitmaps from disk. Again, this will allow
> > the test writer to explicitly construct corner cases and validate that
> > the block allocator works as expected without having to reverse
> > engineer file system data structures which will force a particular
> > code path to be executed.
>
> This would be a difference, but mock functions do not exist in KUnit.
> The KUnit test will call the real kernel function in the UML kernel.
>
> I think Brendan has indicated a desire to have mock functions in the
> future.
>
> Brendan, do I understand that correctly?

Oh, sorry, I missed this comment from earlier.

Yes, you are correct. Function mocking is a feature I will be
introducing in a follow up patchset (assuming this one gets merged of
course ;-) ).

Cheers!

> -Frank
>
> > So this is why it's largely irrelevant to me that KUnit uses UML. In
> > fact, it's a feature. We're not testing device drivers, or the
> > scheduler, or anything else architecture-specific. UML is not about
> > virtualization. What it's about in this context is allowing us to
> > start running test code as quickly as possible. Booting KVM takes
> > about 3-4 seconds, and this includes initializing virtio_scsi and
> > other device drivers. If by using UML we can hold the amount of
> > unnecessary kernel subsystem initialization down to the absolute
> > minimum, and if it means that we can communicate to the test
> > framework via a userspace "printf" from UML/KUnit code, as opposed to
> > via a virtual serial port to KVM's virtual console, it all makes for
> > lighter weight testing.
> >
> > Why did I go looking for a volunteer to write KUnit tests for ext4?
> > Well, I have a plan to make some changes in restructing how ext4's
> > write path works, in order to support things like copy-on-write, a
> > more efficient delayed allocation system, etc. This will require
> > making changes to the extent status tree, and by having unit tests for
> > the extent status tree, we'll be able to detect any bugs that we might
> > accidentally introduce in the es tree far more quickly than if we
> > didn't have those tests available. Google has long found that having
> > these sorts of unit tests is a real win for developer velocity for any
> > non-trivial code module (or C++ class), even when you take into
> > account the time it takes to create the unit tests.
> >
> > - Ted
> > P.S. Many thanks to Brendan for finding such a volunteer for me; the
> > person in question is a SRE from Switzerland who is interested in
> > getting involved with kernel testing, and this is going to be their
> > 20% project. :-)
> >
> >

2019-05-14 12:09:08

by Daniel Vetter

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Tue, May 14, 2019 at 8:04 AM Brendan Higgins
<[email protected]> wrote:
>
> On Mon, May 13, 2019 at 04:44:51PM +0200, Daniel Vetter wrote:
> > On Sat, May 11, 2019 at 01:33:44PM -0400, Theodore Ts'o wrote:
> > > On Fri, May 10, 2019 at 02:12:40PM -0700, Frank Rowand wrote:
> > > > However, the reply is incorrect. Kselftest in-kernel tests (which
> > > > is the context here) can be configured as built in instead of as
> > > > a module, and built in a UML kernel. The UML kernel can boot,
> > > > running the in-kernel tests before UML attempts to invoke the
> > > > init process.
> > >
> > > Um, Citation needed?
> > >
> > > I don't see any evidence for this in the kselftest documentation, nor
> > > do I see any evidence of this in the kselftest Makefiles.
> > >
> > > There exist test modules in the kernel that run before the init
> > > scripts run --- but that's not strictly speaking part of kselftests,
> > > and do not have any kind of infrastructure. As noted, the
> > > kselftest_harness header file fundamentally assumes that you are
> > > running test code in userspace.
> >
> > Yeah I really like the "no userspace required at all" design of kunit,
> > while still collecting results in a well-defined way (unlike the current
> > self-tests that just run when you load the module, with maybe some
> > ad-hoc kselftest wrapper around them to collect the results).
> >
> > What I want to do long-term is to run these kernel unit tests as part of
> > the build-testing, most likely in gitlab (sooner or later, for drm.git
>
> Totally! This is part of the reason I have been insisting on a minimum
> of UML compatibility for all unit tests. If you can sufficiently constrain
> the environment that is required for tests to run in, it makes it much
> easier not only for a human to run your tests, but it also makes it a
> lot easier for an automated service to be able to run your tests.
>
> I actually have a prototype presubmit already working on my
> "stable/non-upstream" branch. You can checkout what presubmit results
> look like here[1][2].

ug gerrit :-)

> > only ofc). So that people get their pull requests (and patch series, we
> > have some ideas to tie this into patchwork) automatically tested for this
>
> Might that be Snowpatch[3]? I talked to Russell, the creator of Snowpatch,
> and he seemed pretty open to collaboration.
>
> Before I heard about Snowpatch, I had an intern write a translation
> layer that made Prow (the presubmit service that I used in the prototype
> above) work with LKML[4].

There's about 3-4 forks/clones of patchwork. snowpatch is one, we have
a different one on freedesktop.org. It's a bit of a mess :-/

> I am not married to either approach, but I think between the two of
> them, most of the initial legwork has been done to make presubmit on
> LKML a reality.

We do have presubmit CI working already with our freedesktop.org
patchwork. The missing glue is just tying that into gitlab CI somehow
(since we want to unify build testing more and make it easier for
contributors to adjust things).
-Daniel

> > super basic stuff.
>
> I am really excited to hear back on what you think!
>
> Cheers!
>
> [1] https://kunit-review.googlesource.com/c/linux/+/1509/10#message-7bfa40efb132e15c8388755c273837559911425c
> [2] https://kunit-review.googlesource.com/c/linux/+/1509/10#message-a6784496eafff442ac98fb068bf1a0f36ee73509
> [3] https://developer.ibm.com/open/projects/snowpatch/
> [4] https://kunit.googlesource.com/prow-lkml/
> _______________________________________________
> dri-devel mailing list
> [email protected]
> https://lists.freedesktop.org/mailman/listinfo/dri-devel



--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

2019-05-14 18:37:49

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Tue, May 14, 2019 at 02:05:05PM +0200, Daniel Vetter wrote:
> On Tue, May 14, 2019 at 8:04 AM Brendan Higgins
> <[email protected]> wrote:
> >
> > On Mon, May 13, 2019 at 04:44:51PM +0200, Daniel Vetter wrote:
> > > On Sat, May 11, 2019 at 01:33:44PM -0400, Theodore Ts'o wrote:
> > > > On Fri, May 10, 2019 at 02:12:40PM -0700, Frank Rowand wrote:
> > > > > However, the reply is incorrect. Kselftest in-kernel tests (which
> > > > > is the context here) can be configured as built in instead of as
> > > > > a module, and built in a UML kernel. The UML kernel can boot,
> > > > > running the in-kernel tests before UML attempts to invoke the
> > > > > init process.
> > > >
> > > > Um, Citation needed?
> > > >
> > > > I don't see any evidence for this in the kselftest documentation, nor
> > > > do I see any evidence of this in the kselftest Makefiles.
> > > >
> > > > There exist test modules in the kernel that run before the init
> > > > scripts run --- but that's not strictly speaking part of kselftests,
> > > > and do not have any kind of infrastructure. As noted, the
> > > > kselftest_harness header file fundamentally assumes that you are
> > > > running test code in userspace.
> > >
> > > Yeah I really like the "no userspace required at all" design of kunit,
> > > while still collecting results in a well-defined way (unlike the current
> > > self-tests that just run when you load the module, with maybe some
> > > ad-hoc kselftest wrapper around them to collect the results).
> > >
> > > What I want to do long-term is to run these kernel unit tests as part of
> > > the build-testing, most likely in gitlab (sooner or later, for drm.git
> >
> > Totally! This is part of the reason I have been insisting on a minimum
> > of UML compatibility for all unit tests. If you can sufficiently constrain
> > the environment that is required for tests to run in, it makes it much
> > easier not only for a human to run your tests, but it also makes it a
> > lot easier for an automated service to be able to run your tests.
> >
> > I actually have a prototype presubmit already working on my
> > "stable/non-upstream" branch. You can checkout what presubmit results
> > look like here[1][2].
>
> ug gerrit :-)

Yeah, yeah, I know, but it is a lot easier for me to get a project set
up here using Gerrit, since we already use that for a lot of other
projects.

Also, Gerrit has gotten a lot better over the last two years or so. Two
years ago, I wouldn't touch it with a ten-foot pole. It's not so bad
anymore, at least if you are used to using a web UI to review code.

> > > only ofc). So that people get their pull requests (and patch series, we
> > > have some ideas to tie this into patchwork) automatically tested for this
> >
> > Might that be Snowpatch[3]? I talked to Russell, the creator of Snowpatch,
> > and he seemed pretty open to collaboration.
> >
> > Before I heard about Snowpatch, I had an intern write a translation
> > layer that made Prow (the presubmit service that I used in the prototype
> > above) work with LKML[4].
>
> There's about 3-4 forks/clones of patchwork. snowpatch is one, we have
> a different one on freedesktop.org. It's a bit of a mess :-/

Oh, I didn't realize that. I found your patchwork instance here[5], but
do you have a place where I can see the changes you have added to
support presubmit?

> > I am not married to either approach, but I think between the two of
> > them, most of the initial legwork has been done to make presubmit on
> > LKML a reality.
>
> We do have presubmit CI working already with our freedesktop.org
> patchwork. The missing glue is just tying that into gitlab CI somehow
> (since we want to unify build testing more and make it easier for
> contributors to adjust things).

I checked out a couple of your projects on your patchwork instance: AMD
X.Org drivers, DRI devel, and Wayland. I saw the tab you added for
tests, but none of them actually had any test results. Can you point me
at one that does?

Cheers!

[5] https://patchwork.freedesktop.org/

> > > super basic stuff.
> >
> > I am really excited to hear back on what you think!
> >
> > Cheers!
> >
> > [1] https://kunit-review.googlesource.com/c/linux/+/1509/10#message-7bfa40efb132e15c8388755c273837559911425c
> > [2] https://kunit-review.googlesource.com/c/linux/+/1509/10#message-a6784496eafff442ac98fb068bf1a0f36ee73509
> > [3] https://developer.ibm.com/open/projects/snowpatch/
> > [4] https://kunit.googlesource.com/prow-lkml/
> > _______________________________________________
> > dri-devel mailing list
> > [email protected]
> > https://lists.freedesktop.org/mailman/listinfo/dri-devel

2019-05-14 20:56:55

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Fri, May 10, 2019 at 02:52:59PM -0700, Frank Rowand wrote:

Sorry, I forgot to get back to this thread.

> On 5/9/19 3:20 PM, Logan Gunthorpe wrote:
> >
> >
> > On 2019-05-09 3:42 p.m., Theodore Ts'o wrote:
> >> On Thu, May 09, 2019 at 11:12:12AM -0700, Frank Rowand wrote:
> >>>
> >>> ??? "My understanding is that the intent of KUnit is to avoid booting a kernel on
> >>> ??? real hardware or in a virtual machine.? That seems to be a matter of semantics
> >>> ??? to me because isn't invoking a UML Linux just running the Linux kernel in
> >>> ??? a different form of virtualization?
> >>>
> >>> ??? So I do not understand why KUnit is an improvement over kselftest.
> >>>
> >>> ...
> >>>
> >>> What am I missing?"
> >>
> >> One major difference: kselftest requires a userspace environment;
> >> it starts systemd, requires a root file system from which you can
> >> load modules, etc. Kunit doesn't require a root file system;
> >> doesn't require that you start systemd; doesn't allow you to run
> >> arbitrary perl, python, bash, etc. scripts. As such, it's much
> >> lighter weight than kselftest, and will have much less overhead
> >> before you can start running tests. So it's not really the same
> >> kind of virtualization.
>
> I'm back to reply to this subthread, after a delay, as promised.
>
>
> > I largely agree with everything Ted has said in this thread, but I
> > wonder if we are conflating two different ideas, which is causing an
> > impasse. From what I see, Kunit actually provides two different
> > things:
>
> > 1) An execution environment that can be run very quickly in userspace
> > on tests in the kernel source. This speeds up the tests and gives a
> > lot of benefit to developers using those tests because they can get
> > feedback on their code changes a *lot* quicker.
>
> kselftest in-kernel tests provide exactly the same when the tests are
> configured as "built-in" code instead of as modules.
>
>
> > 2) A framework to write unit tests that provides a lot of the same
> > facilities as other common unit testing frameworks from userspace
> > (ie. a runner that runs a list of tests and a bunch of helpers such
> > as KUNIT_EXPECT_* to simplify test passes and failures).
>
> > The first item from Kunit is novel and I see absolutely no overlap
> > with anything kselftest does. It's also the valuable thing I'd like
> > to see merged and grow.
>
> The first item exists in kselftest.
>
>
> > The second item, arguably, does have significant overlap with
> > kselftest. Whether you are running short tests in a lightweight UML
> > environment or higher-level tests in a heavier VM, the two could be
> > using the same framework for writing or defining in-kernel tests. It
> > *may* also be valuable for some people to be able to run all the UML
> > tests in the heavy VM environment along side other higher level
> > tests.
> >
> > Looking at the selftests tree in the repo, we already have similar
> > items to what Kunit is adding as I described in point (2) above.
> > kselftest_harness.h contains macros like EXPECT_* and ASSERT_* with
> > very similar intentions to the new KUNIT_EXPECT_* and KUNIT_ASSERT_*
> > macros.
>
> I might be wrong here because I have not dug deeply enough into the
> code!!! Does this framework apply to the userspace tests, the
> in-kernel tests, or both? My "not having dug enough GUESS" is that
> these are for the user space tests (although if so, they could be
> extended for in-kernel use also).
>
> So I think this one maybe does not have an overlap between KUnit
> and kselftest.

You are right, Frank: the EXPECT_* and ASSERT_* in kselftest_harness.h
are for userspace only. kselftest_harness.h provides its own main method
for running the tests[1]. It also makes assumptions around having access
to this main method[2].

There actually isn't that much infrastructure that I can reuse there.
I can't even reuse the API definitions, because they only pass the
context object (for me it is struct kunit, for them it is their
fixture) to their test cases.
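
Roughly, the difference in shape looks like this (a sketch based on the
conventions of the two headers, not verbatim code from either; add() is
again just a stand-in for the code under test):

  /*
   * kselftest_harness.h (userspace): the harness supplies main() and an
   * implicit context, and uses fork()/abort()/fprintf() under the hood.
   */
  TEST(add_basic)
  {
          EXPECT_EQ(3, add(1, 2));
  }

  /* KUnit (in-kernel): the context object is passed in explicitly. */
  static void add_basic_test(struct kunit *test)
  {
          KUNIT_EXPECT_EQ(test, 3, add(1, 2));
  }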

> > However, the number of users of this harness appears to be quite
> > small. Most of the code in the selftests tree seems to be a random
> > mishmash of scripts and userspace code, so it's not hard to see it as
> > something completely different from the new Kunit:
> > $ git grep --files-with-matches kselftest_harness.h *
> > Documentation/dev-tools/kselftest.rst
> > MAINTAINERS
> > tools/testing/selftests/kselftest_harness.h
> > tools/testing/selftests/net/tls.c
> > tools/testing/selftests/rtc/rtctest.c
> > tools/testing/selftests/seccomp/Makefile
> > tools/testing/selftests/seccomp/seccomp_bpf.c
> > tools/testing/selftests/uevent/Makefile
> > tools/testing/selftests/uevent/uevent_filtering.c
>
>
> > Thus, I can personally see a lot of value in integrating the kunit
> > test framework with this kselftest harness. There's only a small
> > number of users of the kselftest harness today, so one way or another
> > it seems like getting this integrated early would be a good idea.
> > Letting Kunit and Kselftests progress independently for a few years
> > will only make this worse and may become something we end up
> > regretting.
>
> Yes, this I agree with.

I think I agree with this point. I cannot see any reason not to have
KUnit tests able to be run from the kselftest harness.

Conceptually, I think we are mostly in agreement that kselftest and
KUnit are distinct things. Like Shuah said, kselftest is a black box
regression test framework, KUnit is a white box unit testing framework.
So making kselftest the only interface to use KUnit would be a mistake
in my opinion (and I think others on this thread would agree).

That being said, when you go to run kselftest, I think there is an
expectation that you run all your tests. Or at least that kselftest
should make that possible. From my experience, usually when someone
wants to run all the end-to-end tests, *they really just want to run all
the tests*. This would imply that all your KUnit tests get run too.

Another benefit of making it possible for the kselftest harness to run
KUnit tests would be that it would somewhat guarantee that the
interfaces between the two remain compatible, meaning that test
automation tools like CI and presubmit systems are more likely to
integrate easily with each and less likely to break for either.

Would anyone object if I explore this in a follow-up patchset? I have an
idea of how I might start, but I think it would be easiest to explore in
its own patchset. I don't expect it to be a trivial amount of work.

Cheers!

[1] https://elixir.bootlin.com/linux/v5.1.2/source/tools/testing/selftests/kselftest_harness.h#L329
[2] https://elixir.bootlin.com/linux/v5.1.2/source/tools/testing/selftests/kselftest_harness.h#L681

2019-05-15 00:16:33

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On 5/14/19 1:38 AM, Brendan Higgins wrote:
> On Fri, May 10, 2019 at 03:13:40PM -0700, Frank Rowand wrote:
>> On 5/10/19 9:17 AM, Logan Gunthorpe wrote:
>>>
>>>
>>> On 2019-05-09 11:18 p.m., Frank Rowand wrote:
>>>
>>>> YES, kselftest has in-kernel tests. (Excuse the shouting...)
>>>
>>> Cool. From my cursory look, in my opinion, these would be greatly
>>> improved by converting them to the framework Brendan is proposing for Kunit.
>>>
>>>>> If they do exists, it seems like it would make sense to
>>>>> convert those to kunit and have Kunit tests run-able in a VM or
>>>>> baremetal instance.
>>>>
>>>> They already run in a VM.
>>>>
>>>> They already run on bare metal.
>>>>
>>>> They already run in UML.
>>>
>>> Simply being able to run in UML is not the only thing here. Kunit
>>> provides the infrastructure to quickly build, run and report results for
>>> all the tests from userspace without needing to worry about the details
>>> of building and running a UML kernel, then parsing dmesg to figure out
>>> what tests were run or not.
>>
>> Yes. But that is not the only environment that KUnit must support to be
>> of use to me for devicetree unittests (this is not new, Brendan is quite
>> aware of my needs and is not ignoring them).
>>
>>
>>>> This is not to say that KUnit does not make sense. But I'm still trying
>>>> to get a better description of the KUnit features (and there are
>>>> some).
>>>
>>> So read the patches, or the documentation[1] or the LWN article[2]. It's
>>> pretty well described in a lot of places -- that's one of the big
>>> advantages of it. In contrast, few people seem to have any concept of
>>> what kselftests are or where they are or how to run them (I was
>>> surprised to find the in-kernel tests in the lib tree).
>>>
>>> Logan
>>>
>>> [1] https://google.github.io/kunit-docs/third_party/kernel/docs/
>>> [2] https://lwn.net/Articles/780985/
>>
>> I have been following the RFC versions. I have installed the RFC patches
>> and run them to the extent that they worked (devicetree unittests were
>> a guinea pig for test conversion in the RFC series, but the converted
>> tests did not work). I read portions of the code while trying to
>> understand the unittests conversion. I made review comments based on
>> the portion of the code that I did read. I have read the documentation
>> (very nice btw, as I have said before, but should be expanded).
>>
>> My comment is that the description to submit the patch series should
>> be fuller -- KUnit potentially has a lot of nice attributes, and I
>> still think I have only scratched the surface. The average reviewer
>> may have even less in-depth knowledge than I do. And as I have
>> commented before, I keep diving into areas that I had no previous
>> experience with (such as kselftest) to be able to properly judge this
>> patch series.
>
> Thanks for the praise! That means a lot coming from you!
>
> I really cannot disagree that I could use more documentation. You can
> pretty much always use more documentation. Nevertheless, is there a
> particular part of the documentation that you think is lacking?

I wasn't talking about the documentation that is part of KUnit. I was
targeting patch 0.


> It sounds like there was a pretty long discussion here about a number
> of different things.
>
> Do you want a better description of what unit testing is and how KUnit
> helps make it possible?
>
> Do you want more of an explanation distinguishing KUnit from kselftest?
> How so?

The high level issue is to provide reviewers with enough context to be
able to evaluate the patch series. That is probably not very obvious
at this point in the thread. At this point I was responding to Logan's
response to me that I should be reading Documentation to get a better
description of KUnit features. I _think_ that Logan thought that I
did not understand KUnit features and was trying to be helpful by
pointing out where I could get more information. If so, he was missing
my intended point had been that patch 0 should provide more information
to justify adding this feature.

One thing that has become very apparent in the discussion of this patch
series is that some people do not understand that kselftest includes
in-kernel tests, not just userspace tests. As such, KUnit is an
additional implementation of "the same feature". (One can debate
exactly which in-kernel test features kselftest and KUnit provide,
and how much overlap exists or does not exist. So don't take "the
same feature" as my final opinion of how much overlap exists.) So
that is a key element to be noted and explained.

I don't have a goal of finding all the things to include in patch 0,
that is really your job as the patch submitter. But as a reviewer,
it is easy for me to point out an obvious hole, even if I don't find
all of the holes. kselftest vs KUnit overlap was an obvious hole to
me.

So, yes, more of an explanation about the in-kernel testing available
via kselftest vs the in-kernel testing available via KUnit, and how
they provide the same or different functionality. Should both exist?
Should one replace the other? If one provides additional features,
should the additional features be merged into a common base?

> Do you just want better documentation on how to test the kernel? What
> tools we have at our disposal and when to use what tools?
>
> Thanks!
> .
>

2019-05-15 00:28:20

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On 5/11/19 10:33 AM, Theodore Ts'o wrote:
> On Fri, May 10, 2019 at 02:12:40PM -0700, Frank Rowand wrote:
>> However, the reply is incorrect. Kselftest in-kernel tests (which
>> is the context here) can be configured as built in instead of as
>> a module, and built in a UML kernel. The UML kernel can boot,
>> running the in-kernel tests before UML attempts to invoke the
>> init process.
>
> Um, Citation needed?

The paragraph that you quoted tells you exactly how to run a kselftest
in-kernel test in a UML kernel. Just do what that paragraph says.
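
For concreteness, a minimal sketch of the kind of build being described,
assuming one of the existing built-in lib/ self-tests (for example
CONFIG_TEST_SORT) is available in the tree; this is only an illustration:

  $ make ARCH=um defconfig
  $ ./scripts/config --enable CONFIG_DEBUG_KERNEL --enable CONFIG_TEST_SORT
  $ make ARCH=um olddefconfig
  $ make ARCH=um -j$(nproc)
  $ ./linux    # the built-in test's initcall output shows up on the console during boot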


>
> I don't see any evidence for this in the kselftest documentation, nor
> do I see any evidence of this in the kselftest Makefiles.
>
> There exist test modules in the kernel that run before the init
> scripts run --- but they are not strictly speaking part of kselftests,
> and do not have any kind of infrastructure. As noted, the
> kselftest_harness header file fundamentally assumes that you are
> running test code in userspace.

You are ignoring the kselftest in-kernel tests.

We are talking in circles. I'm done with this thread.

-Frank

>
> - Ted
> .
>

2019-05-15 00:28:46

by Logan Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework



On 2019-05-14 6:14 p.m., Frank Rowand wrote:
> The high level issue is to provide reviewers with enough context to be
> able to evaluate the patch series. That is probably not very obvious
> at this point in the thread. At this point I was responding to Logan's
> response to me that I should be reading Documentation to get a better
> description of KUnit features. I _think_ that Logan thought that I
> did not understand KUnit features and was trying to be helpful by
> pointing out where I could get more information. If so, he missed that
> my intended point had been that patch 0 should provide more information
> to justify adding this feature.

Honestly, I lost track of what exactly your point was. And, in my
opinion, Brendan has provided over and above the information required to
justify KUnit's inclusion.

> One thing that has become very apparent in the discussion of this patch
> series is that some people do not understand that kselftest includes
> in-kernel tests, not just userspace tests. As such, KUnit is an
> additional implementation of "the same feature". (One can debate
> exactly which in-kernel test features kselftest and KUnit provide,
> and how much overlap exists or does not exist. So don't take "the
> same feature" as my final opinion of how much overlap exists.) So
> that is a key element to be noted and explained.

From my perspective, once we were actually pointed to the in-kernel
kselftest code and took a look at it, it was clear there was no
over-arching framework for those tests and that KUnit could be used to
significantly improve them with a common structure. Based on my
reading of the thread, Ted came to the same conclusion.

I don't think we should block this feature from being merged, and for
future work, someone can update the in-kernel kselftests to use the new
framework.
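
To make that concrete, here is a minimal sketch of what such a converted
test could look like, written against the KUnit API as it eventually landed
upstream; the exact struct and macro names in this v2 patchset may differ.
The checks an ad hoc in-kernel test does by hand become KUNIT_EXPECT_*
assertions grouped into a suite:

#include <kunit/test.h>

/* The kind of check an ad hoc in-kernel test typically does by hand today. */
static void example_add_test(struct kunit *test)
{
        KUNIT_EXPECT_EQ(test, 3, 1 + 2);
}

static struct kunit_case example_test_cases[] = {
        KUNIT_CASE(example_add_test),
        {}
};

static struct kunit_suite example_test_suite = {
        .name = "example",
        .test_cases = example_test_cases,
};
kunit_test_suite(example_test_suite);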

Logan

2019-05-15 04:32:23

by Theodore Ts'o

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Tue, May 14, 2019 at 05:26:47PM -0700, Frank Rowand wrote:
> On 5/11/19 10:33 AM, Theodore Ts'o wrote:
> > On Fri, May 10, 2019 at 02:12:40PM -0700, Frank Rowand wrote:
> >> However, the reply is incorrect. Kselftest in-kernel tests (which
> >> is the context here) can be configured as built in instead of as
> >> a module, and built in a UML kernel. The UML kernel can boot,
> >> running the in-kernel tests before UML attempts to invoke the
> >> init process.
> >
> > Um, Citation needed?
>
> The paragraph that you quoted tells you exactly how to run a kselftest
> in-kernel test in a UML kernel. Just do what that paragraph says.

I didn't quote a paragraph. But I'll quote from it now:

$ make -C tools/testing/selftests run_tests

This runs the kselftest harness, *in userspace*. That means you have
to have a root file system, and it's run after init has started, by
default. You asserted that kselftests allows you to run modules
before init has started. There is absolutely zero, cero, nada, zilch
mention of anything like that in Documentation/dev-tools/kselftest.rst.

> > There exist test modules in the kernel that run before the init
> > scripts run --- but they are not strictly speaking part of kselftests,
> > and do not have any kind of infrastructure. As noted, the
> > kselftest_harness header file fundamentally assumes that you are
> > running test code in userspace.
>
> You are ignoring the kselftest in-kernel tests.

I'm talking specifically about what you have been *claiming* to be
kselftest in-kernel tests above. And I'm asserting they are really
not kselftests. They are just ad hoc tests that are run in kernel
space, which, when compiled as modules, can be loaded by a kselftest
shell script. You can certainly hook in these ad hoc in-kernel tests
via kselftests --- but then they aren't run before init starts,
because kselftests is inherently a userspace-driven system.

If you build these tests (many of which existed before kselftests was
merged) into the kernel such that they are run before init starts,
without the kselftest harness, then they are not kselftests, by
definition --- both in how they are run, and because many of these
in-kernel tests predate the introduction of kselftests, in some
cases by many years.

> We are talking in circles. I'm done with this thread.

Yes, that sounds like it would be best.

- Ted

2019-05-15 07:45:21

by Daniel Vetter

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Tue, May 14, 2019 at 11:36:18AM -0700, Brendan Higgins wrote:
> On Tue, May 14, 2019 at 02:05:05PM +0200, Daniel Vetter wrote:
> > On Tue, May 14, 2019 at 8:04 AM Brendan Higgins
> > <[email protected]> wrote:
> > >
> > > On Mon, May 13, 2019 at 04:44:51PM +0200, Daniel Vetter wrote:
> > > > On Sat, May 11, 2019 at 01:33:44PM -0400, Theodore Ts'o wrote:
> > > > > On Fri, May 10, 2019 at 02:12:40PM -0700, Frank Rowand wrote:
> > > > > > However, the reply is incorrect. Kselftest in-kernel tests (which
> > > > > > is the context here) can be configured as built in instead of as
> > > > > > a module, and built in a UML kernel. The UML kernel can boot,
> > > > > > running the in-kernel tests before UML attempts to invoke the
> > > > > > init process.
> > > > >
> > > > > Um, Citation needed?
> > > > >
> > > > > I don't see any evidence for this in the kselftest documentation, nor
> > > > > do I see any evidence of this in the kselftest Makefiles.
> > > > >
> > > > > There exists test modules in the kernel that run before the init
> > > > > scripts run --- but that's not strictly speaking part of kselftests,
> > > > > and do not have any kind of infrastructure. As noted, the
> > > > > kselftests_harness header file fundamentally assumes that you are
> > > > > running test code in userspace.
> > > >
> > > > Yeah I really like the "no userspace required at all" design of kunit,
> > > > while still collecting results in a well-defined way (unlike the current
> > > > self-tests that just run when you load the module, with maybe some
> > > > ad-hoc kselftest wrapper around them to collect the results).
> > > >
> > > > What I want to do long-term is to run these kernel unit tests as part of
> > > > the build-testing, most likely in gitlab (sooner or later, for drm.git
> > >
> > > Totally! This is part of the reason I have been insisting on a minimum
> > > of UML compatibility for all unit tests. If you can sufficiently constrain
> > > the environment that is required for tests to run in, it makes it much
> > > easier not only for a human to run your tests, but it also makes it a
> > > lot easier for an automated service to be able to run your tests.
> > >
> > > I actually have a prototype presubmit already working on my
> > > "stable/non-upstream" branch. You can checkout what presubmit results
> > > look like here[1][2].
> >
> > ug gerrit :-)
>
> Yeah, yeah, I know, but it is a lot easier for me to get a project set
> up here using Gerrit, when we already use that for a lot of other
> projects.
>
> Also, Gerrit has gotten a lot better over the last two years or so. Two
> years ago, I wouldn't touch it with a ten foot pole. It's not so bad
> anymore, at least if you are used to using a web UI to review code.

I was somewhat joking, I'm just not used to gerrit ... And seems to indeed
be a lot more polished than last time I looked at it seriously.

> > > > only ofc). So that people get their pull requests (and patch series, we
> > > > have some ideas to tie this into patchwork) automatically tested for this
> > >
> > > Might that be Snowpatch[3]? I talked to Russell, the creator of Snowpatch,
> > > and he seemed pretty open to collaboration.
> > >
> > > Before I heard about Snowpatch, I had an intern write a translation
> > > layer that made Prow (the presubmit service that I used in the prototype
> > > above) work with LKML[4].
> >
> > There's about 3-4 forks/clones of patchwork. snowpatch is one, we have
> > a different one on freedesktop.org. It's a bit a mess :-/
>
> Oh, I didn't realize that. I found your patchwork instance here[5], but
> do you have a place where I can see the changes you have added to
> support presubmit?

Ok here's a few links. Aside from the usual patch view we've also added a
series view:

https://patchwork.freedesktop.org/project/intel-gfx/series/?ordering=-last_updated

This ties the patches + cover letter together, and it even (tries to at
least) track revisions. Here's an example which is currently at revision
9:

https://patchwork.freedesktop.org/series/57232/

Below the patch list for each revision we also have the test result list.
If you click on the grey bar it'll expand with the summary from CI; the
"See full logs" is a link to the full results from our CI. This is driven
with some REST api from our jenkins.

Patchwork also sends out mails for these results.

Source is on gitlab: https://gitlab.freedesktop.org/patchwork-fdo

> > > I am not married to either approach, but I think between the two of
> > > them, most of the initial legwork has been done to make presubmit on
> > > LKML a reality.
> >
> > We do have presubmit CI working already with our freedesktop.org
> > patchwork. The missing glue is just tying that into gitlab CI somehow
> > (since we want to unify build testing more and make it easier for
> > contributors to adjust things).
>
> I checked out a couple of your projects on your patchwork instance: AMD
> X.Org drivers, DRI devel, and Wayland. I saw the tab you added for
> tests, but none of them actually had any test results. Can you point me
> at one that does?

Atm we use the CI stuff only on intel-gfx, with our gpu CI farm; see
links above.

Cheers, Daniel

>
> Cheers!
>
> [5] https://patchwork.freedesktop.org/
>
> > > > super basic stuff.
> > >
> > > I am really excited to hear back on what you think!
> > >
> > > Cheers!
> > >
> > > [1] https://kunit-review.googlesource.com/c/linux/+/1509/10#message-7bfa40efb132e15c8388755c273837559911425c
> > > [2] https://kunit-review.googlesource.com/c/linux/+/1509/10#message-a6784496eafff442ac98fb068bf1a0f36ee73509
> > > [3] https://developer.ibm.com/open/projects/snowpatch/
> > > [4] https://kunit.googlesource.com/prow-lkml/
> > > _______________________________________________
> > > dri-devel mailing list
> > > [email protected]
> > > https://lists.freedesktop.org/mailman/listinfo/dri-devel
>

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

2019-05-22 21:40:26

by Brendan Higgins

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

+Bjorn Helgaas

On Wed, May 15, 2019 at 12:41 AM Daniel Vetter <[email protected]> wrote:
>
> On Tue, May 14, 2019 at 11:36:18AM -0700, Brendan Higgins wrote:
> > On Tue, May 14, 2019 at 02:05:05PM +0200, Daniel Vetter wrote:
> > > On Tue, May 14, 2019 at 8:04 AM Brendan Higgins
> > > <[email protected]> wrote:
> > > >
> > > > On Mon, May 13, 2019 at 04:44:51PM +0200, Daniel Vetter wrote:
> > > > > On Sat, May 11, 2019 at 01:33:44PM -0400, Theodore Ts'o wrote:
> > > > > > On Fri, May 10, 2019 at 02:12:40PM -0700, Frank Rowand wrote:
> > > > > > > However, the reply is incorrect. Kselftest in-kernel tests (which
> > > > > > > is the context here) can be configured as built in instead of as
> > > > > > > a module, and built in a UML kernel. The UML kernel can boot,
> > > > > > > running the in-kernel tests before UML attempts to invoke the
> > > > > > > init process.
> > > > > >
> > > > > > Um, Citation needed?
> > > > > >
> > > > > > I don't see any evidence for this in the kselftest documentation, nor
> > > > > > do I see any evidence of this in the kselftest Makefiles.
> > > > > >
> > > > > > There exists test modules in the kernel that run before the init
> > > > > > scripts run --- but that's not strictly speaking part of kselftests,
> > > > > > and do not have any kind of infrastructure. As noted, the
> > > > > > kselftests_harness header file fundamentally assumes that you are
> > > > > > running test code in userspace.
> > > > >
> > > > > Yeah I really like the "no userspace required at all" design of kunit,
> > > > > while still collecting results in a well-defined way (unless the current
> > > > > self-test that just run when you load the module, with maybe some
> > > > > kselftest ad-hoc wrapper around to collect the results).
> > > > >
> > > > > What I want to do long-term is to run these kernel unit tests as part of
> > > > > the build-testing, most likely in gitlab (sooner or later, for drm.git
> > > >
> > > > Totally! This is part of the reason I have been insisting on a minimum
> > > > of UML compatibility for all unit tests. If you can sufficiently constrain
> > > > the environment that is required for tests to run in, it makes it much
> > > > easier not only for a human to run your tests, but it also makes it a
> > > > lot easier for an automated service to be able to run your tests.
> > > >
> > > > I actually have a prototype presubmit already working on my
> > > > "stable/non-upstream" branch. You can checkout what presubmit results
> > > > look like here[1][2].
> > >
> > > ug gerrit :-)
> >
> > Yeah, yeah, I know, but it is a lot easier for me to get a project set
> > up here using Gerrit, when we already use that for a lot of other
> > projects.
> >
> > Also, Gerrit has gotten a lot better over the last two years or so. Two
> > years ago, I wouldn't touch it with a ten foot pole. It's not so bad
> > anymore, at least if you are used to using a web UI to review code.
>
> I was somewhat joking, I'm just not used to gerrit ... And seems to indeed
> be a lot more polished than last time I looked at it seriously.

I mean, it is still not perfect, but I think it has finally gotten to
the point where I prefer it over reviewing by email for high context
patches where you don't expect a lot of deep discussion.

Still not great for patches where you want to have a lot of discussion.

> > > > > only ofc). So that people get their pull requests (and patch series, we
> > > > > have some ideas to tie this into patchwork) automatically tested for this
> > > >
> > > > Might that be Snowpatch[3]? I talked to Russell, the creator of Snowpatch,
> > > > and he seemed pretty open to collaboration.
> > > >
> > > > Before I heard about Snowpatch, I had an intern write a translation
> > > > layer that made Prow (the presubmit service that I used in the prototype
> > > > above) work with LKML[4].
> > >
> > > There's about 3-4 forks/clones of patchwork. snowpatch is one, we have
> > > a different one on freedesktop.org. It's a bit a mess :-/

I think Snowpatch is an ozlabs project; at least the maintainer works at IBM.

Patchwork originally was an ozlabs project, right?

Has any discussion taken place trying to consolidate some of the forks?

Presubmit clearly seems like a feature that a number of people want.

> > Oh, I didn't realize that. I found your patchwork instance here[5], but
> > do you have a place where I can see the changes you have added to
> > support presubmit?
>
> Ok here's a few links. Aside from the usual patch view we've also added a
> series view:
>
> https://patchwork.freedesktop.org/project/intel-gfx/series/?ordering=-last_updated
>
> This ties the patches + cover letter together, and it even (tries to at
> least) track revisions. Here's an example which is currently at revision
> 9:
>
> https://patchwork.freedesktop.org/series/57232/

Oooh, nice! That looks awesome! Looks like you have a number of presubmits too.

> Below the patch list for each revision we also have the test result list.
> If you click on the grey bar it'll expand with the summary from CI; the
> "See full logs" is a link to the full results from our CI. This is driven
> with some REST api from our jenkins.
>
> Patchwork also sends out mails for these results.

Nice! There are obviously a lot of other bots on various kernel
mailing lists. Do you think people would object to sending presubmit
results to the mailing lists by default?

> Source is on gitlab: https://gitlab.freedesktop.org/patchwork-fdo

Err, looks like you forked from the ozlabs repo a good while ago.

Still, this all looks great!

> > > > I am not married to either approach, but I think between the two of
> > > > them, most of the initial legwork has been done to make presubmit on
> > > > LKML a reality.
> > >
> > > We do have presubmit CI working already with our freedesktop.org
> > > patchwork. The missing glue is just tying that into gitlab CI somehow
> > > (since we want to unify build testing more and make it easier for
> > > contributors to adjust things).
> >
> > I checked out a couple of your projects on your patchwork instance: AMD
> > X.Org drivers, DRI devel, and Wayland. I saw the tab you added for
> > tests, but none of them actually had any test results. Can you point me
> > at one that does?
>
> Atm we use the CI stuff only on intel-gfx, with our gpu CI farm; see
> links above.
>
> Cheers, Daniel
>
> >
> > Cheers!
> >
> > [5] https://patchwork.freedesktop.org/
> >
> > > > > super basic stuff.
> > > >
> > > > I am really excited to hear back on what you think!
> > > >
> > > > Cheers!
> > > >
> > > > [1] https://kunit-review.googlesource.com/c/linux/+/1509/10#message-7bfa40efb132e15c8388755c273837559911425c
> > > > [2] https://kunit-review.googlesource.com/c/linux/+/1509/10#message-a6784496eafff442ac98fb068bf1a0f36ee73509
> > > > [3] https://developer.ibm.com/open/projects/snowpatch/
> > > > [4] https://kunit.googlesource.com/prow-lkml/
> > > > _______________________________________________
> > > > dri-devel mailing list
> > > > [email protected]
> > > > https://lists.freedesktop.org/mailman/listinfo/dri-devel

Cheers!

2019-05-23 08:42:41

by Daniel Vetter

[permalink] [raw]
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

On Wed, May 22, 2019 at 02:38:48PM -0700, Brendan Higgins wrote:
> +Bjorn Helgaas
>
> On Wed, May 15, 2019 at 12:41 AM Daniel Vetter <[email protected]> wrote:
> >
> > On Tue, May 14, 2019 at 11:36:18AM -0700, Brendan Higgins wrote:
> > > On Tue, May 14, 2019 at 02:05:05PM +0200, Daniel Vetter wrote:
> > > > On Tue, May 14, 2019 at 8:04 AM Brendan Higgins
> > > > <[email protected]> wrote:
> > > > >
> > > > > On Mon, May 13, 2019 at 04:44:51PM +0200, Daniel Vetter wrote:
> > > > > > On Sat, May 11, 2019 at 01:33:44PM -0400, Theodore Ts'o wrote:
> > > > > > > On Fri, May 10, 2019 at 02:12:40PM -0700, Frank Rowand wrote:
> > > > > > > > However, the reply is incorrect. Kselftest in-kernel tests (which
> > > > > > > > is the context here) can be configured as built in instead of as
> > > > > > > > a module, and built in a UML kernel. The UML kernel can boot,
> > > > > > > > running the in-kernel tests before UML attempts to invoke the
> > > > > > > > init process.
> > > > > > >
> > > > > > > Um, Citation needed?
> > > > > > >
> > > > > > > I don't see any evidence for this in the kselftest documentation, nor
> > > > > > > do I see any evidence of this in the kselftest Makefiles.
> > > > > > >
> > > > > > > There exists test modules in the kernel that run before the init
> > > > > > > scripts run --- but that's not strictly speaking part of kselftests,
> > > > > > > and do not have any kind of infrastructure. As noted, the
> > > > > > > kselftests_harness header file fundamentally assumes that you are
> > > > > > > running test code in userspace.
> > > > > >
> > > > > > Yeah I really like the "no userspace required at all" design of kunit,
> > > > > > while still collecting results in a well-defined way (unless the current
> > > > > > self-test that just run when you load the module, with maybe some
> > > > > > kselftest ad-hoc wrapper around to collect the results).
> > > > > >
> > > > > > What I want to do long-term is to run these kernel unit tests as part of
> > > > > > the build-testing, most likely in gitlab (sooner or later, for drm.git
> > > > >
> > > > > Totally! This is part of the reason I have been insisting on a minimum
> > > > > of UML compatibility for all unit tests. If you can sufficiently constrain
> > > > > the environment that is required for tests to run in, it makes it much
> > > > > easier not only for a human to run your tests, but it also makes it a
> > > > > lot easier for an automated service to be able to run your tests.
> > > > >
> > > > > I actually have a prototype presubmit already working on my
> > > > > "stable/non-upstream" branch. You can checkout what presubmit results
> > > > > look like here[1][2].
> > > >
> > > > ug gerrit :-)
> > >
> > > Yeah, yeah, I know, but it is a lot easier for me to get a project set
> > > up here using Gerrit, when we already use that for a lot of other
> > > projects.
> > >
> > > Also, Gerrit has gotten a lot better over the last two years or so. Two
> > > years ago, I wouldn't touch it with a ten foot pole. It's not so bad
> > > anymore, at least if you are used to using a web UI to review code.
> >
> > I was somewhat joking, I'm just not used to gerrit ... And seems to indeed
> > be a lot more polished than last time I looked at it seriously.
>
> I mean, it is still not perfect, but I think it has finally gotten to
> the point where I prefer it over reviewing by email for high context
> patches where you don't expect a lot of deep discussion.
>
> Still not great for patches where you want to have a lot of discussion.
>
> > > > > > only ofc). So that people get their pull requests (and patch series, we
> > > > > > have some ideas to tie this into patchwork) automatically tested for this
> > > > >
> > > > > Might that be Snowpatch[3]? I talked to Russell, the creator of Snowpatch,
> > > > > and he seemed pretty open to collaboration.
> > > > >
> > > > > Before I heard about Snowpatch, I had an intern write a translation
> > > > > layer that made Prow (the presubmit service that I used in the prototype
> > > > > above) work with LKML[4].
> > > >
> > > > There's about 3-4 forks/clones of patchwork. snowpatch is one, we have
> > > > a different one on freedesktop.org. It's a bit a mess :-/
>
> I think Snowpatch is an ozlabs project; at least the maintainer works at IBM.
>
> Patchwork originally was an ozlabs project, right?

So there are two patchworks (snowpatch makes the third): the one on
freedesktop is another fork.

> Has any discussion taken place trying to consolidate some of the forks?

Yup, but it didn't lead anywhere unfortunately :-/ (at least between patchwork
and patchwork-fdo). I think snowpatch happened in parallel, and once it was
done it was kinda too late. The trouble is that they all have slightly
different REST APIs and functionality, so for CI integration you can't just
switch one for the other.

> Presubmit clearly seems like a feature that a number of people want.
>
> > > Oh, I didn't realize that. I found your patchwork instance here[5], but
> > > do you have a place where I can see the changes you have added to
> > > support presubmit?
> >
> > Ok here's a few links. Aside from the usual patch view we've also added a
> > series view:
> >
> > https://patchwork.freedesktop.org/project/intel-gfx/series/?ordering=-last_updated
> >
> > This ties the patches + cover letter together, and it even (tries to at
> > least) track revisions. Here's an example which is currently at revision
> > 9:
> >
> > https://patchwork.freedesktop.org/series/57232/
>
> Oooh, nice! That looks awesome! Looks like you have a number of presubmits too.

We have a pretty big farm of machines that mostly just crunch through
premerge patch series. Postmerge is mostly just for statistics (so we can
find the sporadic failures and better characterize them).

> > Below the patch list for each revision we also have the test result list.
> > If you click on the grey bar it'll expand with the summary from CI; the
> > "See full logs" is a link to the full results from our CI. This is driven
> > with some REST api from our jenkins.
> >
> > Patchwork also sends out mails for these results.
>
> Nice! There are obviously a lot of other bots on various kernel
> mailing lists. Do you think people would object to sending presubmit
> results to the mailing lists by default?
>
> > Source is on gitlab: https://gitlab.freedesktop.org/patchwork-fdo
>
> Err, looks like you forked from the ozlabs repo a good while ago.

Yeah see above, it's a bit of a complicated story. I think there are a total of
3 patchworks now :-/

> Still, this all looks great!

Cheers, Daniel

>
> > > > > I am not married to either approach, but I think between the two of
> > > > > them, most of the initial legwork has been done to make presubmit on
> > > > > LKML a reality.
> > > >
> > > > We do have presubmit CI working already with our freedesktop.org
> > > > patchwork. The missing glue is just tying that into gitlab CI somehow
> > > > (since we want to unify build testing more and make it easier for
> > > > contributors to adjust things).
> > >
> > > I checked out a couple of your projects on your patchwork instance: AMD
> > > X.Org drivers, DRI devel, and Wayland. I saw the tab you added for
> > > tests, but none of them actually had any test results. Can you point me
> > > at one that does?
> >
> > Atm we use the CI stuff only on intel-gfx, with our gpu CI farm; see
> > links above.
> >
> > Cheers, Daniel
> >
> > >
> > > Cheers!
> > >
> > > [5] https://patchwork.freedesktop.org/
> > >
> > > > > > super basic stuff.
> > > > >
> > > > > I am really excited to hear back on what you think!
> > > > >
> > > > > Cheers!
> > > > >
> > > > > [1] https://kunit-review.googlesource.com/c/linux/+/1509/10#message-7bfa40efb132e15c8388755c273837559911425c
> > > > > [2] https://kunit-review.googlesource.com/c/linux/+/1509/10#message-a6784496eafff442ac98fb068bf1a0f36ee73509
> > > > > [3] https://developer.ibm.com/open/projects/snowpatch/
> > > > > [4] https://kunit.googlesource.com/prow-lkml/
> > > > > _______________________________________________
> > > > > dri-devel mailing list
> > > > > [email protected]
> > > > > https://lists.freedesktop.org/mailman/listinfo/dri-devel
>
> Cheers!

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch