2009-11-03 04:39:28

by Hitoshi Mitake

Subject: [RFC][PATCH 7/7] Adding general performance benchmarking subsystem to perf.


Adding general performance benchmarking subsystem to perf.
This patch adds builtin-bench-pipe.c

builtin-bench-pipe.c is a benchmark program
that measures the performance of the pipe() system call.
This benchmark is based on pipe-test-1m.c by Ingo Molnar.
http://people.redhat.com/mingo/cfs-scheduler/tools/pipe-test-1m.c

Signed-off-by: Hitoshi Mitake <[email protected]>
Cc: Rusty Russell <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Mike Galbraith <[email protected]>
---
tools/perf/builtin-bench-pipe.c | 89 +++++++++++++++++++++++++++++++++++++++
1 files changed, 89 insertions(+), 0 deletions(-)
create mode 100644 tools/perf/builtin-bench-pipe.c

diff --git a/tools/perf/builtin-bench-pipe.c b/tools/perf/builtin-bench-pipe.c
new file mode 100644
index 0000000..081515e
--- /dev/null
+++ b/tools/perf/builtin-bench-pipe.c
@@ -0,0 +1,89 @@
+/*
+ *
+ * builtin-bench-pipe.c
+ *
+ * pipe: Benchmark for pipe()
+ *
+ * Based on pipe-test-1m.c by Ingo Molnar <[email protected]>
+ * http://people.redhat.com/mingo/cfs-scheduler/tools/pipe-test-1m.c
+ * Ported to perf by Hitoshi Mitake <[email protected]>
+ *
+ */
+
+#include "perf.h"
+#include "util/util.h"
+#include "util/parse-options.h"
+#include "builtin.h"
+#include "bench-suite.h"
+
+#include <unistd.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <signal.h>
+#include <sys/wait.h>
+#include <linux/unistd.h>
+#include <string.h>
+#include <errno.h>
+#include <assert.h>
+#include <sys/time.h>
+
+#define LOOPS_DEFAULT 1000000
+static int loops = LOOPS_DEFAULT;
+
+static const struct option options[] = {
+ OPT_INTEGER('l', "loop", &loops,
+ "Specify number of loops"),
+ OPT_END()
+};
+
+static const char * const bench_sched_pipe_usage[] = {
+ "perf bench sched pipe <options>",
+ NULL
+};
+
+int bench_sched_pipe(int argc, const char **argv,
+ const char *prefix __used)
+{
+ int pipe_1[2], pipe_2[2];
+ int m = 0, i;
+ struct timeval start, stop, diff;
+
+ /*
+ * "ret" exists because discarding the return
+ * value of read() and write() triggers a build
+ * error in the perf build environment
+ */
+ int ret;
+ pid_t pid;
+
+ argc = parse_options(argc, argv, options,
+ bench_sched_pipe_usage, 0);
+
+ assert(!pipe(pipe_1));
+ assert(!pipe(pipe_2));
+
+ pid = fork();
+ assert(pid >= 0);
+
+ gettimeofday(&start, NULL);
+
+ if (!pid) {
+ for (i = 0; i < loops; i++) {
+ ret = read(pipe_1[0], &m, sizeof(int));
+ ret = write(pipe_2[1], &m, sizeof(int));
+ }
+ } else if (pid > 0) {
+ for (i = 0; i < loops; i++) {
+ ret = write(pipe_1[1], &m, sizeof(int));
+ ret = read(pipe_2[0], &m, sizeof(int));
+ }
+ }
+
+ gettimeofday(&stop, NULL);
+ timersub(&stop, &start, &diff);
+ if (!pid)
+ printf("%lu.%03lu\n",
+ diff.tv_sec, diff.tv_usec/1000);
+
+ return 0;
+}
--
1.5.6.5


2009-11-03 07:47:02

by Ingo Molnar

Subject: Re: [RFC][PATCH 7/7] Adding general performance benchmarking subsystem to perf.


* Hitoshi Mitake <[email protected]> wrote:

>
> Adding general performance benchmarking subsystem to perf.
> This patch adds builtin-bench-pipe.c
>
> builtin-bench-pipe.c is a benchmark program
> that measures the performance of the pipe() system call.
> This benchmark is based on pipe-test-1m.c by Ingo Molnar.
> http://people.redhat.com/mingo/cfs-scheduler/tools/pipe-test-1m.c
>
> Signed-off-by: Hitoshi Mitake <[email protected]>
> Cc: Rusty Russell <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Mike Galbraith <[email protected]>
> ---
> tools/perf/builtin-bench-pipe.c | 89 +++++++++++++++++++++++++++++++++++++++
> 1 files changed, 89 insertions(+), 0 deletions(-)
> create mode 100644 tools/perf/builtin-bench-pipe.c
>
> diff --git a/tools/perf/builtin-bench-pipe.c b/tools/perf/builtin-bench-pipe.c
> new file mode 100644
> index 0000000..081515e
> --- /dev/null
> +++ b/tools/perf/builtin-bench-pipe.c
> @@ -0,0 +1,89 @@
> +/*
> + *
> + * builtin-bench-pipe.c
> + *
> + * pipe: Benchmark for pipe()
> + *
> + * Based on pipe-test-1m.c by Ingo Molnar <[email protected]>
> + * http://people.redhat.com/mingo/cfs-scheduler/tools/pipe-test-1m.c
> + * Ported to perf by Hitoshi Mitake <[email protected]>
> + *
> + */

Ok, i think there's going to be quite a few of these benchmarks, so i'd
suggest you start a new directory for the benchmark modules:
tools/perf/bench/ for example.

We'll still have tools/perf/builtin-bench.c which represents the highest
level 'perf bench' tool - and new modules can be added by adding them to
bench/.

What do you think?

All in one, i very much like the modular direction you are taking here.

There will be a handful of more details i'm sure but once there's a good
base we can commit it - would you / will you be interested in extending
it further and adding more benchmark modules as well?

There's quite a few useful small benchmarks that people are using to
measure the kernel. Having a good collection of them in one place, with
standardized options and standardized output would be very useful.

Ingo

2009-11-03 10:53:41

by Hitoshi Mitake

Subject: Re: [RFC][PATCH 7/7] Adding general performance benchmarking subsystem to perf.

From: Ingo Molnar <[email protected]>
Subject: Re: [RFC][PATCH 7/7] Adding general performance benchmarking subsystem to perf.
Date: Tue, 3 Nov 2009 08:46:48 +0100

>
> * Hitoshi Mitake <[email protected]> wrote:
>
> >
> > Adding general performance benchmarking subsystem to perf.
> > This patch adds builtin-bench-pipe.c
> >
> > builtin-bench-pipe.c is a benchmark program
> > that measures the performance of the pipe() system call.
> > This benchmark is based on pipe-test-1m.c by Ingo Molnar.
> > http://people.redhat.com/mingo/cfs-scheduler/tools/pipe-test-1m.c
> >
> > Signed-off-by: Hitoshi Mitake <[email protected]>
> > Cc: Rusty Russell <[email protected]>
> > Cc: Thomas Gleixner <[email protected]>
> > Cc: Peter Zijlstra <[email protected]>
> > Cc: Mike Galbraith <[email protected]>
> > ---
> > tools/perf/builtin-bench-pipe.c | 89 +++++++++++++++++++++++++++++++++++++++
> > 1 files changed, 89 insertions(+), 0 deletions(-)
> > create mode 100644 tools/perf/builtin-bench-pipe.c
> >
> > diff --git a/tools/perf/builtin-bench-pipe.c b/tools/perf/builtin-bench-pipe.c
> > new file mode 100644
> > index 0000000..081515e
> > --- /dev/null
> > +++ b/tools/perf/builtin-bench-pipe.c
> > @@ -0,0 +1,89 @@
> > +/*
> > + *
> > + * builtin-bench-pipe.c
> > + *
> > + * pipe: Benchmark for pipe()
> > + *
> > + * Based on pipe-test-1m.c by Ingo Molnar <[email protected]>
> > + * http://people.redhat.com/mingo/cfs-scheduler/tools/pipe-test-1m.c
> > + * Ported to perf by Hitoshi Mitake <[email protected]>
> > + *
> > + */
>

Thanks for your detailed comments, Ingo!
I read your comments and rewrote the patch series.
I'll send the series later as a new thread.

> Ok, i think there's going to be quite a few of these benchmarks, so i'd
> suggest you start a new directory for the benchmark modules:
> tools/perf/bench/ for example.
>
> We'll still have tools/perf/builtin-bench.c which represents the highest
> level 'perf bench' tool - and new modules can be added by adding them to
> bench/.
>
> What do you think?

I agree with your idea of making a new bench/ directory.
I feel that the bench modules should not live at the top of tools/perf/.

>
> All in one, i very much like the modular direction you are taking here.
>

Thanks, I'm glad to hear it.

> There will be a handful of more details i'm sure but once there's a good
> base we can commit it - would you / will you be interested in extending
> it further and adding more benchmark modules as well?
>
> There's quite a few useful small benchmarks that people are using to
> measure the kernel. Having a good collection of them in one place, with
> standardized options and standardized output would be very useful.

Yes, of course! Unified benchmarking utilities will be a big help for
Linux users, including me.

e.g. I think that copybench (http://code.google.com/p/copybench/) would be
a good benchmark for I/O, memory, and file systems.
I'll work on this after the patch series I'll send later is merged.

Do you know any other good candidates to include?

2009-11-03 17:24:22

by Ingo Molnar

Subject: Re: [RFC][PATCH 7/7] Adding general performance benchmarking subsystem to perf.


* Hitoshi Mitake <[email protected]> wrote:

> > There will be a handful of more details i'm sure but once there's a
> > good base we can commit it - would you / will you be interested in
> > extending it further and adding more benchmark modules as well?
> >
> > There's quite a few useful small benchmarks that people are using to
> > measure the kernel. Having a good collection of them in one place,
> > with standardized options and standardized output would be very
> > useful.
>
> Yes, of course! Unified benchmarking utilities will be big help for
> Linux users including me.
>
> e.g. I think that copybench (http://code.google.com/p/copybench/) will
> be good benchmark for I/O, memory and file system. I'll work on this
> after that the patch series I'll send later is merged.

copybench is listed as 'new BSD license'. We might need to ping its
author to ask whether he considers it GPLv2 compatible.

> Do you know any other good candidates to include?

Frederic suggested dbench - although that's quite large as it includes a
complete trace of a benchmark run.

We might want to do similar measurements to lmbench.

One nice thing would be to have a 'system call benchmark' set - one that
measures _all_ system calls, and could thus be used to find regressions
on a 'broad' basis. Syscall usage could be gleaned from the LTP project.

Ingo

2009-11-04 10:33:15

by Hitoshi Mitake

Subject: Re: [RFC][PATCH 7/7] Adding general performance benchmarking subsystem to perf.

From: Ingo Molnar <[email protected]>
Subject: Re: [RFC][PATCH 7/7] Adding general performance benchmarking subsystem to perf.
Date: Tue, 3 Nov 2009 18:24:07 +0100

>
> * Hitoshi Mitake <[email protected]> wrote:
>
> > > There will be a handful of more details i'm sure but once there's a
> > > good base we can commit it - would you / will you be interested in
> > > extending it further and adding more benchmark modules as well?
> > >
> > > There's quite a few useful small benchmarks that people are using to
> > > measure the kernel. Having a good collection of them in one place,
> > > with standardized options and standardized output would be very
> > > useful.
> >
> > Yes, of course! Unified benchmarking utilities will be big help for
> > Linux users including me.
> >
> > e.g. I think that copybench (http://code.google.com/p/copybench/) will
> > be good benchmark for I/O, memory and file system. I'll work on this
> > after that the patch series I'll send later is merged.
>
> copybench is listed as 'new BSD license'. We might need to ping its
> author to ask whether he considers it GPLv2 compatible.
>

Yes. I'll contact the author when I actually try to unify copybench with perf.

> > Do you know any other good candidates to include?
>
> Frederic suggested dbench - although that's quite large as it includes a
> complete trace of a benchmark run.
>
> We might want to do similar measurements to lmbench.
>
> One nice thing would be to have a 'system call benchmark' set - one that
> measures _all_ system calls, and could thus be used to find regressions
> on a 'broad' basis. Syscall usage could be gleaned from the LTP project.
>

These are good candidates.
The system call benchmark in particular is a nice idea.
I'll try these after completing the base part.