2012-05-17 15:21:51

by Hitoshi Mitake

Subject: [PATCH] perf bench: add new benchmark subsystem and suite "futex wait"

Hi Ingo, Eric and Darren,
(CCed perf and futex folks)

I wrote this patch to add a new subsystem, "futex", and its suite, "wait", to perf
bench on tip/master. It is based on futextest by Darren Hart.

Darren, could you allow me to import your futextest source code into perf bench?

Below is the patch; I'd like to hear your comments.



This patch adds a new benchmark program for the futex subsystem, and its suite, to perf
bench. The new benchmark is based on performance/futex_wait.c of futextest by
Darren Hart: http://git.kernel.org/?p=linux/kernel/git/dvhart/futextest.git

This new suite, "futex wait", simply creates worker threads and lets them iterate
locking and unlocking of futexes. After the iterations complete, the result is printed
in units of Kiter/s (kilo-iterations per second); one iteration is the combination of
one lock and one unlock.

command line options:
--futex-for-sync: If this option is passed, a futex will be used instead of pipe()
for synchronization between the main thread and the worker threads. This exists only
to mimic the subtle behaviour of futextest.

--futexes <number of futexes>: Number of futexes to be locked and unlocked by the
worker threads. Futexes are distributed fairly among the worker threads, so the
condition <number of threads> % <number of futexes> == 0 must hold.

--threads <number of threads>: Number of worker threads locking and unlocking the
futexes.

--iterations <number of iterations>: Total number of iterations (one lock and one
unlock per iteration).

example usage:
$ ./perf bench futex wait --futexes 1 --threads 1
# Running futex/wait benchmark...
# 1 threads and 1 futexes (1 threads for 1 futex)
2.76s user, 0.00s system, 2.76s wall, 1.00 cores
Result: 36232 Kiter/s
$ ./perf bench futex wait --futexes 2 --threads 16
# Running futex/wait benchmark...
# 16 threads and 2 futexes (8 threads for 1 futex)
8.35s user, 16.53s system, 6.38s wall, 3.90 cores
Result: 15674 Kiter/s
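
(For reference, the Kiter/s figure is simply the iteration count divided by wall time:
with the default of 100,000,000 iterations, 100,000,000 / 2.76s ~= 36232 Kiter/s for
the first run and 100,000,000 / 6.38s ~= 15674 Kiter/s for the second. The simple
output format prints only this final Kiter/s value.)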

I have to note that this patch produces three checkpatch.pl warnings, but they are
not essential and should be acceptable:

WARNING: line over 80 characters
#36: FILE: tools/perf/bench/bench.h:8:
+extern int bench_futex_wait(int argc, const char **argv, const char *prefix __used);

WARNING: do not add new typedefs
#76: FILE: tools/perf/bench/futex-wait.c:31:
+typedef volatile u_int32_t futex_t;

WARNING: Use of volatile is usually wrong: see Documentation/volatile-considered-harmful.txt
#76: FILE: tools/perf/bench/futex-wait.c:31:
+typedef volatile u_int32_t futex_t;

Cc: Peter Zijlstra <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Darren Hart <[email protected]>
Cc: Michel Lespinasse <[email protected]>
Cc: Rusty Russell <[email protected]>
Cc: Eric Dumazet <[email protected]>
Signed-off-by: Hitoshi Mitake <[email protected]>

---
tools/perf/Makefile | 1 +
tools/perf/bench/bench.h | 1 +
tools/perf/bench/futex-wait.c | 377 +++++++++++++++++++++++++++++++++++++++++
tools/perf/builtin-bench.c | 13 ++
4 files changed, 392 insertions(+)
create mode 100644 tools/perf/bench/futex-wait.c

diff --git a/tools/perf/Makefile b/tools/perf/Makefile
index 6f54efb..4a5aaa0 100644
--- a/tools/perf/Makefile
+++ b/tools/perf/Makefile
@@ -375,6 +375,7 @@ BUILTIN_OBJS += $(OUTPUT)bench/mem-memset-x86-64-asm.o
endif
BUILTIN_OBJS += $(OUTPUT)bench/mem-memcpy.o
BUILTIN_OBJS += $(OUTPUT)bench/mem-memset.o
+BUILTIN_OBJS += $(OUTPUT)bench/futex-wait.o

BUILTIN_OBJS += $(OUTPUT)builtin-diff.o
BUILTIN_OBJS += $(OUTPUT)builtin-evlist.o
diff --git a/tools/perf/bench/bench.h b/tools/perf/bench/bench.h
index a09bece..e927583 100644
--- a/tools/perf/bench/bench.h
+++ b/tools/perf/bench/bench.h
@@ -5,6 +5,7 @@ extern int bench_sched_messaging(int argc, const char **argv, const char *prefix
extern int bench_sched_pipe(int argc, const char **argv, const char *prefix);
extern int bench_mem_memcpy(int argc, const char **argv, const char *prefix __used);
extern int bench_mem_memset(int argc, const char **argv, const char *prefix);
+extern int bench_futex_wait(int argc, const char **argv, const char *prefix __used);

#define BENCH_FORMAT_DEFAULT_STR "default"
#define BENCH_FORMAT_DEFAULT 0
diff --git a/tools/perf/bench/futex-wait.c b/tools/perf/bench/futex-wait.c
new file mode 100644
index 0000000..091973a
--- /dev/null
+++ b/tools/perf/bench/futex-wait.c
@@ -0,0 +1,377 @@
+/*
+ * futex-wait.c
+ *
+ * Measure FUTEX_WAIT operations per second.
+ * based on futex_wait.c of futextest by Darren Hart <[email protected]>
+ * and Michel Lespinasse <[email protected]>
+ *
+ * ported to perf bench by Hitoshi Mitake <[email protected]>
+ *
+ * original futextest:
+ * http://git.kernel.org/?p=linux/kernel/git/dvhart/futextest.git
+ */
+
+#include "../perf.h"
+#include "../util.h"
+#include "../util/parse-options.h"
+
+#include "bench.h"
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <pthread.h>
+#include <sys/times.h>
+#include <sys/poll.h>
+#include <unistd.h>
+#include <sys/types.h>
+#include <sys/socket.h>
+
+#include <linux/futex.h>
+
+#ifndef _GNU_SOURCE
+#define _GNU_SOURCE
+#endif
+#include <poll.h>
+
+typedef volatile u_int32_t futex_t;
+
+struct thread_barrier {
+ futex_t threads;
+ futex_t unblock;
+};
+
+struct worker_ctx {
+ futex_t *futex;
+ unsigned int iterations;
+
+ int readyfd, wakefd;
+
+ struct thread_barrier *barrier_before, *barrier_after;
+};
+
+static void fdpair(int fds[2])
+{
+ if (pipe(fds) == 0)
+ return;
+
+ die("pipe() failed");
+}
+
+/**
+ * futex() - SYS_futex syscall wrapper
+ * @uaddr: address of first futex
+ * @op: futex op code
+ * @val: typically expected value of uaddr, but varies by op
+ * @timeout: typically an absolute struct timespec (except where noted
+ * otherwise). Overloaded by some ops
+ * @uaddr2: address of second futex for some ops
+ * @val3: varies by op
+ * @opflags: flags to be bitwise OR'd with op, such as FUTEX_PRIVATE_FLAG
+ *
+ * futex() is used by all the following futex op wrappers. It can also be
+ * used for misuse and abuse testing. Generally, the specific op wrappers
+ * should be used instead. It is a macro instead of a static inline function as
+ * some of the types are overloaded (timeout is used for nr_requeue, for
+ * example).
+ *
+ * These argument descriptions are the defaults for all
+ * like-named arguments in the following wrappers except where noted below.
+ */
+#define futex(uaddr, op, val, timeout, uaddr2, val3, opflags) \
+ syscall(SYS_futex, uaddr, op | opflags, val, timeout, uaddr2, val3)
+
+/**
+ * futex_wait() - block on uaddr with optional timeout
+ * @timeout: relative timeout
+ */
+static inline int
+futex_wait(futex_t *uaddr, futex_t val, struct timespec *timeout, int opflags)
+{
+ return futex(uaddr, FUTEX_WAIT, val, timeout, NULL, 0, opflags);
+}
+/**
+ * futex_wake() - wake one or more tasks blocked on uaddr
+ * @nr_wake: wake up to this many tasks
+ */
+static inline int
+futex_wake(futex_t *uaddr, int nr_wake, int opflags)
+{
+ return futex(uaddr, FUTEX_WAKE, nr_wake, NULL, NULL, 0, opflags);
+}
+
+/**
+ * futex_cmpxchg() - atomic compare and exchange
+ * @uaddr: The address of the futex to be modified
+ * @oldval: The expected value of the futex
+ * @newval: The new value to try and assign the futex
+ *
+ * Implement cmpxchg using gcc atomic builtins.
+ * http://gcc.gnu.org/onlinedocs/gcc-4.1.0/gcc/Atomic-Builtins.html
+ *
+ * Return the old futex value.
+ */
+static inline u_int32_t
+futex_cmpxchg(futex_t *uaddr, u_int32_t oldval, u_int32_t newval)
+{
+ return __sync_val_compare_and_swap(uaddr, oldval, newval);
+}
+
+/**
+ * futex_dec() - atomic decrement of the futex value
+ * @uaddr: The address of the futex to be modified
+ *
+ * Return the new futex value.
+ */
+static inline u_int32_t
+futex_dec(futex_t *uaddr)
+{
+ return __sync_sub_and_fetch(uaddr, 1);
+}
+
+/**
+ * futex_inc() - atomic increment of the futex value
+ * @uaddr: the address of the futex to be modified
+ *
+ * Return the new futex value.
+ */
+static inline u_int32_t
+futex_inc(futex_t *uaddr)
+{
+ return __sync_add_and_fetch(uaddr, 1);
+}
+
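+/*
+ * Simple futex-based mutex, as in futextest: 0 == unlocked, 1 == locked with
+ * no waiters, 2 == locked with possible waiters.  Lock: try 0 -> 1; under
+ * contention mark the futex 2 and FUTEX_WAIT until the lock can be taken
+ * with 0 -> 2.  Unlock: 1 -> 0 on the fast path, otherwise 2 -> 0 followed
+ * by waking one waiter.
+ */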
+static inline void futex_wait_lock(futex_t *futex)
+{
+ int status = *futex;
+ if (status == 0)
+ status = futex_cmpxchg(futex, 0, 1);
+ while (status != 0) {
+ if (status == 1)
+ status = futex_cmpxchg(futex, 1, 2);
+ if (status != 0) {
+ futex_wait(futex, 2, NULL, FUTEX_PRIVATE_FLAG);
+ status = *futex;
+ }
+ if (status == 0)
+ status = futex_cmpxchg(futex, 0, 2);
+ }
+}
+
+static inline void futex_cmpxchg_unlock(futex_t *futex)
+{
+ int status = *futex;
+ if (status == 1)
+ status = futex_cmpxchg(futex, 1, 0);
+ if (status == 2) {
+ futex_cmpxchg(futex, 2, 0);
+ futex_wake(futex, 1, FUTEX_PRIVATE_FLAG);
+ }
+}
+
+/* Called by main thread to initialize barrier */
+static void barrier_init(struct thread_barrier *barrier, int threads)
+{
+ barrier->threads = threads;
+ barrier->unblock = 0;
+}
+
+/* Called by worker threads to synchronize with main thread */
+static int barrier_sync(struct thread_barrier *barrier)
+{
+ futex_dec(&barrier->threads);
+ if (barrier->threads == 0)
+ futex_wake(&barrier->threads, 1, FUTEX_PRIVATE_FLAG);
+ while (barrier->unblock == 0)
+ futex_wait(&barrier->unblock, 0, NULL, FUTEX_PRIVATE_FLAG);
+ return barrier->unblock;
+}
+
+/* Called by main thread to wait for all workers to reach sync point */
+static void barrier_wait(struct thread_barrier *barrier)
+{
+ int threads;
+ while ((threads = barrier->threads) > 0)
+ futex_wait(&barrier->threads, threads, NULL,
+ FUTEX_PRIVATE_FLAG);
+}
+
+/* Called by main thread to unblock worker threads from their sync point */
+static void barrier_unblock(struct thread_barrier *barrier, int value)
+{
+ barrier->unblock = value;
+ futex_wake(&barrier->unblock, INT_MAX, FUTEX_PRIVATE_FLAG);
+}
+
+static bool use_futex_for_sync;
+
+static void *worker(void *arg)
+{
+ char dummy;
+ int iterations;
+ futex_t *futex;
+
+ struct worker_ctx *ctx = (struct worker_ctx *)arg;
+ struct pollfd pollfd = { .fd = ctx->wakefd, .events = POLLIN };
+
+ iterations = ctx->iterations;
+ futex = ctx->futex;
+ /* currently, we have nothing to prepare */
+ if (use_futex_for_sync) {
+ barrier_sync(ctx->barrier_before);
+ } else {
+ if (write(ctx->readyfd, &dummy, 1) != 1)
+ die("write() on readyfd failed");
+
+ if (poll(&pollfd, 1, -1) != 1)
+ die("poll() failed");
+ }
+
+ while (iterations--) {
+ futex_wait_lock(futex);
+ futex_cmpxchg_unlock(futex);
+ }
+
+ if (use_futex_for_sync)
+ barrier_sync(ctx->barrier_after);
+
+ return NULL;
+}
+
+static int iterations = 100000000;
+static int threads = 256;
+/* futexes are fairly distributed for threads */
+static int futexes = 1;
+
+static const struct option options[] = {
+ OPT_INTEGER('i', "iterations", &iterations,
+ "number of locking and unlocking"),
+ OPT_INTEGER('t', "threads", &threads,
+ "number of worker threads"),
+ OPT_INTEGER('f', "futexes", &futexes,
+ "number of futexes, the condition"
+ "threads % futexes == 0 must be true"),
+ OPT_BOOLEAN('s', "futex-for-sync", &use_futex_for_sync,
+ "use futex for sync between main thread and worker threads"),
+ OPT_END()
+};
+
+static const char * const bench_futex_wait_usage[] = {
+ "perf bench futex wait <options>",
+ NULL
+};
+
+int bench_futex_wait(int argc, const char **argv,
+ const char *prefix __used)
+{
+ int i;
+ char buf;
+ int wakefds[2], readyfds[2];
+ pthread_t *pth_tab;
+ struct worker_ctx *ctx_tab;
+ futex_t *futex_tab;
+
+ struct thread_barrier barrier_before, barrier_after;
+
+ clock_t before, after;
+ struct tms tms_before, tms_after;
+ int wall, user, system_time;
+ double tick;
+
+ argc = parse_options(argc, argv, options,
+ bench_futex_wait_usage, 0);
+
+ if (threads % futexes)
+ die("threads %% futexes must be 0");
+
+ if (use_futex_for_sync) {
+ barrier_init(&barrier_before, threads);
+ barrier_init(&barrier_after, threads);
+ } else {
+ fdpair(wakefds);
+ fdpair(readyfds);
+ }
+
+ pth_tab = calloc(threads, sizeof(pthread_t));
+ if (!pth_tab)
+ die("calloc() for pthread descriptors failed");
+ ctx_tab = calloc(threads, sizeof(struct worker_ctx));
+ if (!ctx_tab)
+ die("calloc() for worker contexts failed");
+ futex_tab = calloc(futexes, sizeof(futex_t));
+ if (!futex_tab)
+ die("calloc() for futexes failed");
+
+ for (i = 0; i < threads; i++) {
+ ctx_tab[i].futex = &futex_tab[i % futexes];
+ ctx_tab[i].iterations = iterations / threads;
+
+ ctx_tab[i].readyfd = readyfds[1];
+ ctx_tab[i].wakefd = wakefds[0];
+
+ if (use_futex_for_sync) {
+ ctx_tab[i].barrier_before = &barrier_before;
+ ctx_tab[i].barrier_after = &barrier_after;
+ }
+
+ if (pthread_create(&pth_tab[i], NULL, worker, &ctx_tab[i]))
+ die("pthread_create() for creating workers failed");
+ }
+
+ if (use_futex_for_sync) {
+ barrier_wait(&barrier_before);
+ } else {
+ for (i = 0; i < threads; i++) {
+ if (read(readyfds[0], &buf, 1) != 1)
+ die("read() for ready failed");
+ }
+ }
+
+ before = times(&tms_before);
+
+ if (use_futex_for_sync) {
+ barrier_unblock(&barrier_before, 1);
+ } else {
+ if (write(wakefds[1], &buf, 1) != 1)
+ die("write() for waking up workers failed");
+ }
+
+ if (use_futex_for_sync) {
+ barrier_wait(&barrier_after);
+ } else {
+ for (i = 0; i < threads; i++)
+ pthread_join(pth_tab[i], NULL);
+ }
+
+ after = times(&tms_after);
+
+ wall = after - before;
+ user = tms_after.tms_utime - tms_before.tms_utime;
+ system_time = tms_after.tms_stime - tms_before.tms_stime;
+ tick = 1.0 / sysconf(_SC_CLK_TCK);
+
+ switch (bench_format) {
+ case BENCH_FORMAT_DEFAULT:
+ printf("# %d threads and %d futexes (%d threads for 1 futex)\n",
+ threads, futexes, threads / futexes);
+ printf("%.2fs user, %.2fs system, %.2fs wall, %.2f cores\n",
+ user * tick, system_time * tick, wall * tick,
+ wall ? (user + system_time) * 1. / wall : 1.);
+ printf("Result: %.0f Kiter/s\n",
+ iterations / (wall * tick * 1000));
+ break;
+ case BENCH_FORMAT_SIMPLE:
+ printf("%.0f Kiter/s\n",
+ iterations / (wall * tick * 1000));
+ break;
+ default:
+ /* reaching here should never happen */
+ die("Unknown format:%d\n", bench_format);
+ break;
+ }
+
+ free((void *)pth_tab);
+ free((void *)ctx_tab);
+ free((void *)futex_tab);
+
+ return 0;
+}
diff --git a/tools/perf/builtin-bench.c b/tools/perf/builtin-bench.c
index b0e74ab..cd8ebfd 100644
--- a/tools/perf/builtin-bench.c
+++ b/tools/perf/builtin-bench.c
@@ -61,6 +61,16 @@ static struct bench_suite mem_suites[] = {
NULL }
};

+static struct bench_suite futex_suites[] = {
+ { "wait",
+ "futex wait",
+ bench_futex_wait },
+ suite_all,
+ { NULL,
+ NULL,
+ NULL }
+};
+
struct bench_subsys {
const char *name;
const char *summary;
@@ -74,6 +84,9 @@ static struct bench_subsys subsystems[] = {
{ "mem",
"memory access performance",
mem_suites },
+ { "futex",
+ "futex performance",
+ futex_suites },
{ "all", /* sentinel: easy for help */
"test all subsystem (pseudo subsystem)",
NULL },
--
1.7.10.rc0.41.gfa678


2012-05-17 16:25:13

by Darren Hart

Subject: Re: [PATCH] perf bench: add new benchmark subsystem and suite "futex wait"

On 05/17/2012 08:21 AM, Hitoshi Mitake wrote:
> Hi Ingo, Eric and Darren,
> (CCed perf and futex folks)
>
> I wrote this patch for adding new subsystem "futex" and its suite "wait" to perf
> bench on tip/master. This is based on futextest by Darren Hart.
>
> Could you allow me to import your source code of futextest to perf bench, Darren?
>

I do have some concerns I'd like to address first.

What is the advantage of incorporating this into perf as opposed to running
it with perf?

Do you intend to port the rest of the futextest testsuite over to perf?

futextest is not by any means complete, and I have been slowly adding to
it over time. My concern would be getting into a situation where perf
bench has a small subset of similar (but slightly different) tests,
which can not be maintained along with futextest.

Would there be a strong motivation to bring all of futextest under perf?
There are certain parts that I can see as not being a good fit, such as
some of the functional tests or possibly some of the stress tests (and
some of the planned randomization stress tests).

> Below is the patch, I'd like to hear your comments.

Depending on the answers to the above, I'm concerned about the inlining
of the various bits and pieces from the futextest header files into a
single C file - from a maintenance and expansion perspective.

I am not necessarily opposed to the idea, especially as being under the
perf umbrella is sure to get more futex testing and eyes on the
futextest code. I would like to make sure we have a long term plan
before merging headers and C files together from futextest into perf.

Thanks,

--
Darren Hart
Intel Open Source Technology Center
Yocto Project - Linux Kernel

2012-05-20 08:32:17

by Hitoshi Mitake

Subject: Re: [PATCH] perf bench: add new benchmark subsystem and suite "futex wait"

On Fri, May 18, 2012 at 1:24 AM, Darren Hart <[email protected]> wrote:
> On 05/17/2012 08:21 AM, Hitoshi Mitake wrote:
>> Hi Ingo, Eric and Darren,
>> (CCed perf and futex folks)
>>
>> I wrote this patch for adding new subsystem "futex" and its suite "wait" to perf
>> bench on tip/master. This is based on futextest by Darren Hart.
>>
>> Could you allow me to import your source code of futextest to perf bench, Darren?
>>
>
> I do have some concerns I'd like to address first.
>
> What is advantage of incorporating this into perf as opposed to running
> it with perf?

The main and direct advantage is that perf bench can share the useful
utilities stored under the tools/perf/util/ directory, e.g. parse-options.[ch].
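
To make that concrete, here is a minimal sketch of the parse-options pattern; it
simply restates what the futex-wait patch above already does (an OPT_INTEGER /
OPT_BOOLEAN option table plus a parse_options() call), and it is assumed to live
inside the perf tree, since parse-options.h is perf-internal:

  #include "../util/parse-options.h"  /* perf-internal option parser */
  #include <stdbool.h>

  static int threads = 256;
  static bool use_futex_for_sync;

  static const struct option options[] = {
          OPT_INTEGER('t', "threads", &threads, "number of worker threads"),
          OPT_BOOLEAN('s', "futex-for-sync", &use_futex_for_sync,
                      "use futex for sync between main thread and workers"),
          OPT_END()
  };

  static const char * const usage_str[] = {
          "perf bench futex wait <options>",
          NULL
  };

  /* in the bench entry point; leftover arguments come back via argc/argv */
  argc = parse_options(argc, argv, options, usage_str, 0);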

>
> Do you intend to port the rest of the futextest testsuite over to perf?
>
> futextest is not by any means complete, and I have been slowly adding to
> it over time. My concern would be getting into a situation where perf
> bench has a small subset of similar (but slightly different) tests,
> which can not be maintained along with futextest.
>
> Would there be a strong motivation to bring all of futextest under perf?
> There are certain parts that I can see as not being a good fit, such as
> some of the functional tests or possibly some of the stress tests (and
> some of the planned randomization stress tests).

Currently I intend to port only futex_wait.c to perf. But importing other
parts of futextest may be worthwhile. Even if they are not suitable for perf
bench, I think storing them in the tools/ directory of the Linux kernel is
valuable, because they are good documentation and examples of futex usage.

>
>> Below is the patch, I'd like to hear your comments.
>
> Depending on the answers to the above, I'm concerned about the inlining
> of the various bits and pieces from the futextest header files into a
> single C file - from a maintenance and expansion perspective.
>
> I am not necessarily opposed to the idea, especially as being under the
> perf umbrella is sure to get more futex testing and eyes on the
> futextest code. I would like to make sure we have a long term plan
> before merging headers and C files together from futextest into perf.
>

As you say, if we import other parts of futextest into perf or tools/,
embedding functions like futex_inc(), futex_dec(), etc. in a single C file is
not so good.

What do you think about the idea of storing the other parts of futextest in
the tools/ directory? If you agree, I'll move the functions related to futexes
and atomic operations to the tools/include/ directory or somewhere more suitable.
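
As a purely hypothetical sketch of what such a shared header could look like
(the tools/include/futex.h name and location are my assumption; the wrappers
themselves are lifted from the patch/futextest):

  /* tools/include/futex.h - hypothetical shared futex helpers */
  #ifndef _TOOLS_FUTEX_H
  #define _TOOLS_FUTEX_H

  #include <unistd.h>
  #include <time.h>
  #include <sys/types.h>
  #include <sys/syscall.h>
  #include <linux/futex.h>

  typedef volatile u_int32_t futex_t;

  /* raw SYS_futex wrapper, as in futextest */
  #define futex(uaddr, op, val, timeout, uaddr2, val3, opflags) \
          syscall(SYS_futex, uaddr, op | opflags, val, timeout, uaddr2, val3)

  /* block on uaddr while *uaddr == val */
  static inline int
  futex_wait(futex_t *uaddr, futex_t val, struct timespec *timeout, int opflags)
  {
          return futex(uaddr, FUTEX_WAIT, val, timeout, NULL, 0, opflags);
  }

  /* wake up to nr_wake tasks blocked on uaddr */
  static inline int
  futex_wake(futex_t *uaddr, int nr_wake, int opflags)
  {
          return futex(uaddr, FUTEX_WAKE, nr_wake, NULL, NULL, 0, opflags);
  }

  #endif /* _TOOLS_FUTEX_H */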

Thanks,


--
Hitoshi Mitake
[email protected]

2012-05-20 09:37:59

by Hitoshi Mitake

Subject: Re: [PATCH] perf bench: add new benchmark subsystem and suite "futex wait"

On Sun, May 20, 2012 at 5:32 PM, Hitoshi Mitake <[email protected]> wrote:
> On Fri, May 18, 2012 at 1:24 AM, Darren Hart <[email protected]> wrote:
>> On 05/17/2012 08:21 AM, Hitoshi Mitake wrote:
>>> Hi Ingo, Eric and Darren,
>>> (CCed perf and futex folks)
>>>
>>> I wrote this patch for adding new subsystem "futex" and its suite "wait" to perf
>>> bench on tip/master. This is based on futextest by Darren Hart.
>>>
>>> Could you allow me to import your source code of futextest to perf bench, Darren?
>>>
>>
>> I do have some concerns I'd like to address first.
>>
>> What is advantage of incorporating this into perf as opposed to running
>> it with perf?
>
> The main and direct advantage is that perf bench can share useful
> utilities stored under tools/perf/util/ directory e.g. parse-options[ch].
>

BTW, I often feel that perf's parse-options.[ch] (this came from git,
right?) is very useful not only for perf and git but also for other
projects, so I think this code is worth making independent as a library.
If the library also had a unified facility for parsing and evaluating
configuration files, the hell of managing configurable options would be
reduced; e.g. I often have to use "strace -e open <command>" just to find
out which configuration files <command> reads...

I thought that if perf bench could be made independent of perf with such
efforts, it could be a smaller, statically linked binary. From my
experience, this would be good for embedded systems people.

This independence also has a risk: fewer people would find it or be
attracted to it, even if it stays in the kernel tree (e.g. tools/bench/).
But it seems that very few people know about perf bench anyway, so this
will not be a serious problem ;)

I'd like to hear your opinion.

Thanks,

--
Hitoshi Mitake
[email protected]

2012-06-05 18:18:36

by Darren Hart

Subject: Re: [PATCH] perf bench: add new benchmark subsystem and suite "futex wait"



On 05/20/2012 02:37 AM, Hitoshi Mitake wrote:
> On Sun, May 20, 2012 at 5:32 PM, Hitoshi Mitake <[email protected]> wrote:
>> On Fri, May 18, 2012 at 1:24 AM, Darren Hart <[email protected]> wrote:
>>> On 05/17/2012 08:21 AM, Hitoshi Mitake wrote:
>>>> Hi Ingo, Eric and Darren,
>>>> (CCed perf and futex folks)
>>>>
>>>> I wrote this patch for adding new subsystem "futex" and its suite "wait" to perf
>>>> bench on tip/master. This is based on futextest by Darren Hart.
>>>>
>>>> Could you allow me to import your source code of futextest to perf bench, Darren?
>>>>
>>>
>>> I do have some concerns I'd like to address first.
>>>
>>> What is advantage of incorporating this into perf as opposed to running
>>> it with perf?
>>
>> The main and direct advantage is that perf bench can share useful
>> utilities stored under tools/perf/util/ directory e.g. parse-options[ch].
>>
>
> BTW, I often feel parse-options.[ch] of perf (this was come from git,
> right?) is very useful not only for perf and git but also other
> projects. So I think these stuff are worth independence as a
> library. If the library contains unified feature for parsing and
> evaluating configuration files, the hell of managing configurable
> options will be reduced. e.g. I often use "strace -e open <command>"
> to detect configuration files read by the <command>...
>
> I thought that if perf bench can be independent from perf with such
> efforts, it can be smaller sized and statically linked binary. From my
> experience, this will be good for embedded systems people.
>
> This independence also has risk: less people can find it or is
> attracted even if it stays in the kernel tree (e.g. tools/bench/). But
> it seems that very few people know about perf bench, so this will not
> be a serious problem ;)
>
> I'd like to hear your opinion.

I haven't been involved with perf tools/bench so I haven't really formed
an opinion. Ingo and Arnaldo, would either of you care to weigh in on
the pros/cons of merging futextest into perf?

--
Darren Hart
Intel Open Source Technology Center
Yocto Project - Linux Kernel

2012-06-06 07:39:16

by Ingo Molnar

Subject: Re: [PATCH] perf bench: add new benchmark subsystem and suite "futex wait"


* Darren Hart <[email protected]> wrote:

> On 05/20/2012 02:37 AM, Hitoshi Mitake wrote:
> > On Sun, May 20, 2012 at 5:32 PM, Hitoshi Mitake <[email protected]> wrote:
> >> On Fri, May 18, 2012 at 1:24 AM, Darren Hart <[email protected]> wrote:
> >>> On 05/17/2012 08:21 AM, Hitoshi Mitake wrote:
> >>>> Hi Ingo, Eric and Darren,
> >>>> (CCed perf and futex folks)
> >>>>
> >>>> I wrote this patch for adding new subsystem "futex" and its suite "wait" to perf
> >>>> bench on tip/master. This is based on futextest by Darren Hart.
> >>>>
> >>>> Could you allow me to import your source code of futextest to perf bench, Darren?
> >>>>
> >>>
> >>> I do have some concerns I'd like to address first.
> >>>
> >>> What is advantage of incorporating this into perf as opposed to running
> >>> it with perf?
> >>
> >> The main and direct advantage is that perf bench can share useful
> >> utilities stored under tools/perf/util/ directory e.g. parse-options[ch].
> >>
> >
> > BTW, I often feel parse-options.[ch] of perf (this was come from git,
> > right?) is very useful not only for perf and git but also other
> > projects. So I think these stuff are worth independence as a
> > library. If the library contains unified feature for parsing and
> > evaluating configuration files, the hell of managing configurable
> > options will be reduced. e.g. I often use "strace -e open <command>"
> > to detect configuration files read by the <command>...
> >
> > I thought that if perf bench can be independent from perf with such
> > efforts, it can be smaller sized and statically linked binary. From my
> > experience, this will be good for embedded systems people.
> >
> > This independence also has risk: less people can find it or is
> > attracted even if it stays in the kernel tree (e.g. tools/bench/). But
> > it seems that very few people know about perf bench, so this will not
> > be a serious problem ;)
> >
> > I'd like to hear your opinion.
>
> I haven't been involved with perf tools/bench so I haven't
> really formed an opinion. Ingo and Arnaldo, would either of
> you care to weigh in on the pros/cons of merging futextest
> into perf?

No objections from me - 'perf bench futex' seems rather natural
to type to me and it would certainly make futex performance
testing easier and more widespread.

So it all depends on whether you'd like to host it upstream and
within tools/perf/bench/.

I'd rather not split it from the main perf binary, if embedded
wants a small static binary (do they really?) we could add
support for minimal builds of perf, with just a few [even one]
subcommand activated or so.

Thanks,

Ingo

2012-06-06 12:30:31

by Arnaldo Carvalho de Melo

Subject: Re: [PATCH] perf bench: add new benchmark subsystem and suite "futex wait"

Em Wed, Jun 06, 2012 at 09:39:09AM +0200, Ingo Molnar escreveu:
> I'd rather not split it from the main perf binary, if embedded
> wants a small static binary (do they really?) we could add
> support for minimal builds of perf, with just a few [even one]
> subcommand activated or so.

The plan is to be able to do a:

make -C tools/perf menuconfig

And all the other *config make targets as the kernel, using Kconfig
files, etc.

So that we can pick and choose which subset of tools and features one
wants.

- Arnaldo

2012-06-06 14:18:53

by David Ahern

Subject: Re: [PATCH] perf bench: add new benchmark subsystem and suite "futex wait"

On 6/6/12 1:39 AM, Ingo Molnar wrote:
> I'd rather not split it from the main perf binary, if embedded
> wants a small static binary (do they really?) we could add
> support for minimal builds of perf, with just a few [even one]
> subcommand activated or so.

Yes -- e.g., record and stat commands only.

David

2012-06-06 15:37:22

by Ingo Molnar

Subject: Re: [PATCH] perf bench: add new benchmark subsystem and suite "futex wait"


* Arnaldo Carvalho de Melo <[email protected]> wrote:

> Em Wed, Jun 06, 2012 at 09:39:09AM +0200, Ingo Molnar escreveu:
> > I'd rather not split it from the main perf binary, if embedded
> > wants a small static binary (do they really?) we could add
> > support for minimal builds of perf, with just a few [even one]
> > subcommand activated or so.
>
> The plan is to be able to do a:
>
> make -C tools/perf menuconfig
>
> And all the other *config make targets as the kernel, using Kconfig
> files, etc.
>
> So that we can pick and choose which subset of tools and features one
> wants.

Sounds very nifty to me!

Thanks,

Ingo

2012-06-06 16:02:12

by Hitoshi Mitake

Subject: Re: [PATCH] perf bench: add new benchmark subsystem and suite "futex wait"

On Wed, Jun 6, 2012 at 4:39 PM, Ingo Molnar <[email protected]> wrote:
>
> * Darren Hart <[email protected]> wrote:
>
>> On 05/20/2012 02:37 AM, Hitoshi Mitake wrote:
>> > On Sun, May 20, 2012 at 5:32 PM, Hitoshi Mitake <[email protected]> wrote:
>> >> On Fri, May 18, 2012 at 1:24 AM, Darren Hart <[email protected]> wrote:
>> >>> On 05/17/2012 08:21 AM, Hitoshi Mitake wrote:
>> >>>> Hi Ingo, Eric and Darren,
>> >>>> (CCed perf and futex folks)
>> >>>>
>> >>>> I wrote this patch for adding new subsystem "futex" and its suite "wait" to perf
>> >>>> bench on tip/master. This is based on futextest by Darren Hart.
>> >>>>
>> >>>> Could you allow me to import your source code of futextest to perf bench, Darren?
>> >>>>
>> >>>
>> >>> I do have some concerns I'd like to address first.
>> >>>
>> >>> What is advantage of incorporating this into perf as opposed to running
>> >>> it with perf?
>> >>
>> >> The main and direct advantage is that perf bench can share useful
>> >> utilities stored under tools/perf/util/ directory e.g. parse-options[ch].
>> >>
>> >
>> > BTW, I often feel parse-options.[ch] of perf (this was come from git,
>> > right?) is very useful not only for perf and git but also other
>> > projects. So I think these stuff are worth independence as a
>> > library. If the library contains unified feature for parsing and
>> > evaluating configuration files, the hell of managing configurable
>> > options will be reduced. e.g. I often use "strace -e open <command>"
>> > to detect configuration files read by the <command>...
>> >
>> > I thought that if perf bench can be independent from perf with such
>> > efforts, it can be smaller sized and statically linked binary. From my
>> > experience, this will be good for embedded systems people.
>> >
>> > This independence also has risk: less people can find it or is
>> > attracted even if it stays in the kernel tree (e.g. tools/bench/). But
>> > it seems that very few people know about perf bench, so this will not
>> > be a serious problem ;)
>> >
>> > I'd like to hear your opinion.
>>
>> I haven't been involved with perf tools/bench so I haven't
>> really formed an opinion. Ingo and Arnaldo, would either of
>> you care to weigh in on the pros/cons of merging futextest
>> into perf?
>
> No objections from me - 'perf bench futex' seems rather natural
> to type to me and it would certainly make futex performance
> testing easier and more widespread.
>
> So it all depends on whether you'd like to host it upstream and
> within tools/perf/bench/.
>
> I'd rather not split it from the main perf binary, if embedded
> wants a small static binary (do they really?) we could add
> support for minimal builds of perf, with just a few [even one]
> subcommand activated or so.


Sorry, I meant that the main problem is not the size of the binary but the
library and header dependencies for building perf.

Preparing cross compilers is hard, and preparing the libraries and headers
for them is also hard. So I think making the build process of perf leaner
would suit embedded systems people.

As Arnaldo says, the menuconfig approach might be promising.
# I hadn't come up with that idea myself...

Thanks,


--
Hitoshi Mitake
[email protected]

2012-06-07 15:11:36

by Hitoshi Mitake

Subject: Re: [PATCH] perf bench: add new benchmark subsystem and suite "futex wait"

On Thu, Jun 7, 2012 at 1:02 AM, Hitoshi Mitake <[email protected]> wrote:
> On Wed, Jun 6, 2012 at 4:39 PM, Ingo Molnar <[email protected]> wrote:
>>
>> * Darren Hart <[email protected]> wrote:
>>
>>> On 05/20/2012 02:37 AM, Hitoshi Mitake wrote:
>>> > On Sun, May 20, 2012 at 5:32 PM, Hitoshi Mitake <[email protected]> wrote:
>>> >> On Fri, May 18, 2012 at 1:24 AM, Darren Hart <[email protected]> wrote:
>>> >>> On 05/17/2012 08:21 AM, Hitoshi Mitake wrote:
>>> >>>> Hi Ingo, Eric and Darren,
>>> >>>> (CCed perf and futex folks)
>>> >>>>
>>> >>>> I wrote this patch for adding new subsystem "futex" and its suite "wait" to perf
>>> >>>> bench on tip/master. This is based on futextest by Darren Hart.
>>> >>>>
>>> >>>> Could you allow me to import your source code of futextest to perf bench, Darren?
>>> >>>>
>>> >>>
>>> >>> I do have some concerns I'd like to address first.
>>> >>>
>>> >>> What is advantage of incorporating this into perf as opposed to running
>>> >>> it with perf?
>>> >>
>>> >> The main and direct advantage is that perf bench can share useful
>>> >> utilities stored under tools/perf/util/ directory e.g. parse-options[ch].
>>> >>
>>> >
>>> > BTW, I often feel parse-options.[ch] of perf (this was come from git,
>>> > right?) is very useful not only for perf and git but also other
>>> > projects. So I think these stuff are worth independence as a
>>> > library. If the library contains unified feature for parsing and
>>> > evaluating configuration files, the hell of managing configurable
>>> > options will be reduced. e.g. I often use "strace -e open <command>"
>>> > to detect configuration files read by the <command>...
>>> >
>>> > I thought that if perf bench can be independent from perf with such
>>> > efforts, it can be smaller sized and statically linked binary. From my
>>> > experience, this will be good for embedded systems people.
>>> >
>>> > This independence also has risk: less people can find it or is
>>> > attracted even if it stays in the kernel tree (e.g. tools/bench/). But
>>> > it seems that very few people know about perf bench, so this will not
>>> > be a serious problem ;)
>>> >
>>> > I'd like to hear your opinion.
>>>
>>> I haven't been involved with perf tools/bench so I haven't
>>> really formed an opinion. Ingo and Arnaldo, would either of
>>> you care to weigh in on the pros/cons of merging futextest
>>> into perf?
>>
>> No objections from me - 'perf bench futex' seems rather natural
>> to type to me and it would certainly make futex performance
>> testing easier and more widespread.
>>
>> So it all depends on whether you'd like to host it upstream and
>> within tools/perf/bench/.
>>

There is another problem: futextest contains code not only for benchmarks
but also for functional tests. My understanding is that Darren doesn't like
a situation where the benchmark part is imported into perf bench while the
functional test part remains in futextest, because that would not be good
for maintenance.

I think the functional test part is not suitable for perf bench because of
its purpose, but it is suitable for the tools/ directory of the Linux
kernel directly.

If both the benchmark part and the functional test part of futextest are
imported into the kernel source tree, the maintenance problem will be
solved. Even if the benchmark part (in perf bench) and the functional test
part are divided, they will still be able to share common header files;
for example, these headers could be placed in the tools/include directory.

Thanks,

--
Hitoshi Mitake
[email protected]

2012-06-13 16:52:56

by Darren Hart

Subject: Re: [PATCH] perf bench: add new benchmark subsystem and suite "futex wait"



On 06/07/2012 08:11 AM, Hitoshi Mitake wrote:
> On Thu, Jun 7, 2012 at 1:02 AM, Hitoshi Mitake <[email protected]> wrote:
>> On Wed, Jun 6, 2012 at 4:39 PM, Ingo Molnar <[email protected]> wrote:
>>>
>>> * Darren Hart <[email protected]> wrote:
>>>
>>>> On 05/20/2012 02:37 AM, Hitoshi Mitake wrote:
>>>>> On Sun, May 20, 2012 at 5:32 PM, Hitoshi Mitake <[email protected]> wrote:
>>>>>> On Fri, May 18, 2012 at 1:24 AM, Darren Hart <[email protected]> wrote:
>>>>>>> On 05/17/2012 08:21 AM, Hitoshi Mitake wrote:
>>>>>>>> Hi Ingo, Eric and Darren,
>>>>>>>> (CCed perf and futex folks)
>>>>>>>>
>>>>>>>> I wrote this patch for adding new subsystem "futex" and its suite "wait" to perf
>>>>>>>> bench on tip/master. This is based on futextest by Darren Hart.
>>>>>>>>
>>>>>>>> Could you allow me to import your source code of futextest to perf bench, Darren?
>>>>>>>>
>>>>>>>
>>>>>>> I do have some concerns I'd like to address first.
>>>>>>>
>>>>>>> What is advantage of incorporating this into perf as opposed to running
>>>>>>> it with perf?
>>>>>>
>>>>>> The main and direct advantage is that perf bench can share useful
>>>>>> utilities stored under tools/perf/util/ directory e.g. parse-options[ch].
>>>>>>
>>>>>
>>>>> BTW, I often feel parse-options.[ch] of perf (this was come from git,
>>>>> right?) is very useful not only for perf and git but also other
>>>>> projects. So I think these stuff are worth independence as a
>>>>> library. If the library contains unified feature for parsing and
>>>>> evaluating configuration files, the hell of managing configurable
>>>>> options will be reduced. e.g. I often use "strace -e open <command>"
>>>>> to detect configuration files read by the <command>...
>>>>>
>>>>> I thought that if perf bench can be independent from perf with such
>>>>> efforts, it can be smaller sized and statically linked binary. From my
>>>>> experience, this will be good for embedded systems people.
>>>>>
>>>>> This independence also has risk: less people can find it or is
>>>>> attracted even if it stays in the kernel tree (e.g. tools/bench/). But
>>>>> it seems that very few people know about perf bench, so this will not
>>>>> be a serious problem ;)
>>>>>
>>>>> I'd like to hear your opinion.
>>>>
>>>> I haven't been involved with perf tools/bench so I haven't
>>>> really formed an opinion. Ingo and Arnaldo, would either of
>>>> you care to weigh in on the pros/cons of merging futextest
>>>> into perf?
>>>
>>> No objections from me - 'perf bench futex' seems rather natural
>>> to type to me and it would certainly make futex performance
>>> testing easier and more widespread.
>>>
>>> So it all depends on whether you'd like to host it upstream and
>>> within tools/perf/bench/.
>>>
>
> There is another problem. futextest containts code not for benchmark,
> for functional tests. My understand is: Darren doesn't like the
> situation that the benchmark part is imported into perf bench and the
> functional test part remains in futextest. Because this situation is
> not good for maintenance.
>
> I think that the functional tests part is not suitable for perf bench
> because of its purpose, but suitable for tools/ directly of Linux
> kernel.
>
> If both of benchmark part and functional test part of futextest can be
> imported into kernel source tree, maintenance problem will be solved.
> Even if the benchmark part (in perf bench) and functional test part
> are devided, they will be able to share common header files. For
> example, these headers can be placed in tools/include directory.
>
> Thanks,


If tools/testing is an appropriate place for functional and stress tests,
that would make this easier. I really like the idea of more testing for
futexes and more eyes on the futextest code itself.

Is completely integrating futextest into linux/tools/perf and
linux/tools/testing the approach everyone would like to see us take here?

--
Darren Hart
Intel Open Source Technology Center
Yocto Project - Linux Kernel

2012-06-24 16:08:32

by Hitoshi Mitake

Subject: Re: [PATCH] perf bench: add new benchmark subsystem and suite "futex wait"

On Thu, Jun 14, 2012 at 1:51 AM, Darren Hart <[email protected]> wrote:
>
>
> On 06/07/2012 08:11 AM, Hitoshi Mitake wrote:
>> On Thu, Jun 7, 2012 at 1:02 AM, Hitoshi Mitake <[email protected]> wrote:
>>> On Wed, Jun 6, 2012 at 4:39 PM, Ingo Molnar <[email protected]> wrote:
>>>>
>>>> * Darren Hart <[email protected]> wrote:
>>>>
>>>>> On 05/20/2012 02:37 AM, Hitoshi Mitake wrote:
>>>>>> On Sun, May 20, 2012 at 5:32 PM, Hitoshi Mitake <[email protected]> wrote:
>>>>>>> On Fri, May 18, 2012 at 1:24 AM, Darren Hart <[email protected]> wrote:
>>>>>>>> On 05/17/2012 08:21 AM, Hitoshi Mitake wrote:
>>>>>>>>> Hi Ingo, Eric and Darren,
>>>>>>>>> (CCed perf and futex folks)
>>>>>>>>>
>>>>>>>>> I wrote this patch for adding new subsystem "futex" and its suite "wait" to perf
>>>>>>>>> bench on tip/master. This is based on futextest by Darren Hart.
>>>>>>>>>
>>>>>>>>> Could you allow me to import your source code of futextest to perf bench, Darren?
>>>>>>>>>
>>>>>>>>
>>>>>>>> I do have some concerns I'd like to address first.
>>>>>>>>
>>>>>>>> What is advantage of incorporating this into perf as opposed to running
>>>>>>>> it with perf?
>>>>>>>
>>>>>>> The main and direct advantage is that perf bench can share useful
>>>>>>> utilities stored under tools/perf/util/ directory e.g. parse-options[ch].
>>>>>>>
>>>>>>
>>>>>> BTW, I often feel parse-options.[ch] of perf (this was come from git,
>>>>>> right?) is very useful not only for perf and git but also other
>>>>>> projects. So I think these stuff are worth independence as a
>>>>>> library. If the library contains unified feature for parsing and
>>>>>> evaluating configuration files, the hell of managing configurable
>>>>>> options will be reduced. e.g. I often use "strace -e open <command>"
>>>>>> to detect configuration files read by the <command>...
>>>>>>
>>>>>> I thought that if perf bench can be independent from perf with such
>>>>>> efforts, it can be smaller sized and statically linked binary. From my
>>>>>> experience, this will be good for embedded systems people.
>>>>>>
>>>>>> This independence also has risk: less people can find it or is
>>>>>> attracted even if it stays in the kernel tree (e.g. tools/bench/). But
>>>>>> it seems that very few people know about perf bench, so this will not
>>>>>> be a serious problem ;)
>>>>>>
>>>>>> I'd like to hear your opinion.
>>>>>
>>>>> I haven't been involved with perf tools/bench so I haven't
>>>>> really formed an opinion. Ingo and Arnaldo, would either of
>>>>> you care to weigh in on the pros/cons of merging futextest
>>>>> into perf?
>>>>
>>>> No objections from me - 'perf bench futex' seems rather natural
>>>> to type to me and it would certainly make futex performance
>>>> testing easier and more widespread.
>>>>
>>>> So it all depends on whether you'd like to host it upstream and
>>>> within tools/perf/bench/.
>>>>
>>
>> There is another problem. futextest containts code not for benchmark,
>> for functional tests. My understand is: Darren doesn't like the
>> situation that the benchmark part is imported into perf bench and the
>> functional test part remains in futextest. Because this situation is
>> not good for maintenance.
>>
>> I think that the functional tests part is not suitable for perf bench
>> because of its purpose, but suitable for tools/ directly of Linux
>> kernel.
>>
>> If both of benchmark part and functional test part of futextest can be
>> imported into kernel source tree, maintenance problem will be solved.
>> Even if the benchmark part (in perf bench) and functional test part
>> are devided, they will be able to share common header files. For
>> example, these headers can be placed in tools/include directory.
>>
>> Thanks,
>
>
> If tools/testing is an appropriate place for functional and stress tests
> that would make this easier. I really like the idea of more testing for
> futexes and more eyes on the futextest code itself.
>
> Is completely integrating futextest into linux/tools/perf and
> linux/tools/testing the approach everyone would like to see us take here?

At least I think so. That part of futextest would make good documentation,
and integrating it into the kernel tree might establish a good practice.

If the integration is allowed, I'd like to update the patch.
# But this will take a long time because my day job is in a busy phase, very sorry...

--
Hitoshi Mitake
[email protected]