Commit-ID: 122e0b947052f6106595fa29d63d514d2ebcdad9
Gitweb: http://git.kernel.org/tip/122e0b947052f6106595fa29d63d514d2ebcdad9
Author: Arnaldo Carvalho de Melo <[email protected]>
AuthorDate: Fri, 4 Aug 2017 12:19:44 -0300
Committer: Arnaldo Carvalho de Melo <[email protected]>
CommitDate: Fri, 11 Aug 2017 16:06:28 -0300
perf test shell: Install shell tests
Now that we have shell tests, install them.
Developers don't need this step, as 'perf test' will look first at the
in-tree scripts at tools/perf/tests/shell/.
Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Michael Petlan <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Thomas Richter <[email protected]>
Cc: Wang Nan <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/Makefile.perf | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
index c1f7884..eb13567 100644
--- a/tools/perf/Makefile.perf
+++ b/tools/perf/Makefile.perf
@@ -760,7 +760,9 @@ install-tests: all install-gtk
$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests'; \
$(INSTALL) tests/attr.py '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests'; \
$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/attr'; \
- $(INSTALL) tests/attr/* '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/attr'
+ $(INSTALL) tests/attr/* '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/attr'; \
+ $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell'; \
+ $(INSTALL) tests/shell/*.sh '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell'
install-bin: install-tools install-tests install-traceevent-plugins
Hi Arnaldo!
Maybe this would be the right time to incorporate the shell-based
perftool-testsuite [1] into perf-test, wouldn't it?
It already contains a bunch of shell-based perf tests that cover
25+ RH bugs...
A little problem might be the different design, since the testsuite
has multiple levels of hierarchy of sub-sub-sub-tests, like:
...
-- [ PASS ] -- perf_probe :: test_probe_syntax :: custom named probe :: add
-- [ PASS ] -- perf_probe :: test_probe_syntax :: custom named probe :: list
-- [ PASS ] -- perf_probe :: test_probe_syntax :: custom named probe :: use
-- [ PASS ] -- perf_probe :: test_probe_syntax :: various syntax forms :: vfs_read@fs/read_write.c
-- [ PASS ] -- perf_probe :: test_probe_syntax :: various syntax forms :: vfs_read:11@fs/read_write.c
-- [ PASS ] -- perf_probe :: test_probe_syntax :: various syntax forms :: vfs_read@fs/read_write.c:11
-- [ PASS ] -- perf_probe :: test_probe_syntax :: various syntax forms :: vfs_read%return
-- [ PASS ] -- perf_probe :: test_probe_syntax :: various syntax forms :: test.c:29
-- [ PASS ] -- perf_probe :: test_probe_syntax :: various syntax forms :: func%return $retval
## [ PASS ] ## perf_probe :: test_probe_syntax SUMMARY
-- [ PASS ] -- perf_probe :: test_sdt :: adding SDT tracepoints as probes
-- [ PASS ] -- perf_probe :: test_sdt :: listing added probes
-- [ PASS ] -- perf_probe :: test_sdt :: using probes :: perf stat (N = 13)
-- [ PASS ] -- perf_probe :: test_sdt :: using probes :: perf stat (N = 128)
-- [ PASS ] -- perf_probe :: test_sdt :: using probes :: perf stat (N = 241)
-- [ PASS ] -- perf_probe :: test_sdt :: using probes :: perf record (N = 37)
-- [ PASS ] -- perf_probe :: test_sdt :: using probes :: perf report (N = 37)
-- [ PASS ] -- perf_probe :: test_sdt :: using probes :: perf script (N = 37)
-- [ PASS ] -- perf_probe :: test_sdt :: using probes :: perf record (N = 97)
-- [ PASS ] -- perf_probe :: test_sdt :: using probes :: perf report (N = 97)
-- [ PASS ] -- perf_probe :: test_sdt :: using probes :: perf script (N = 97)
-- [ PASS ] -- perf_probe :: test_sdt :: using probes :: perf record (N = 237)
-- [ PASS ] -- perf_probe :: test_sdt :: using probes :: perf report (N = 237)
-- [ PASS ] -- perf_probe :: test_sdt :: using probes :: perf script (N = 237)
## [ PASS ] ## perf_probe :: test_sdt SUMMARY
...
... which does not exactly match how perf-test is structured; however,
I think that the multi-level structure of the testsuite is important
for keeping some order in it...
What do you think?
Cheers,
Michael
[1] https://github.com/rfmvh/perftool-testsuite
On Mon, 14 Aug 2017, tip-bot for Arnaldo Carvalho de Melo wrote:
> [SNIP]
Em Mon, Aug 14, 2017 at 08:44:14PM +0200, Michael Petlan escreveu:
> Hi Arnaldo!
>
> Maybe this would be the right time to incorporate the shell-based
> perftool-testsuite [1] into perf-test, wouldn't it?
Perhaps it's time, yes. Some questions:
Do these tests assume that perf was built in some particular way, i.e.
as it is packaged for RHEL?
One thing that came to mind, from a report by Kim Phillips that I
addressed (to some extent) after sending this first patchkit to Ingo,
is that perf can be built in many ways, for instance without
'perf probe', or with 'perf probe' but without DWARF support. That
allows some features to be tested, while a test needing a not-builtin
feature should just return '2', which will make it be marked as
"Skip" in the output.
- Arnaldo
> It already contains bunch of shell-based perf tests that cover
> 25+ RH bugs...
>
> A little problem might be different design, since the testsuite
> has multiple levels of hierarchy of sub-sub-sub-tests, like:
>
> [SNIP]
>
> ... which does not exactly match how perf-test is structured, however,
> I think that the multi-level structure of the testsuite is important
> for keeping some order in it...
>
> What do you think?
>
> Cheers,
> Michael
>
>
>
> [1] https://github.com/rfmvh/perftool-testsuite
>
>
> On Mon, 14 Aug 2017, tip-bot for Arnaldo Carvalho de Melo wrote:
>
> > [SNIP]
On Mon, 14 Aug 2017, Arnaldo Carvalho de Melo wrote:
> Em Mon, Aug 14, 2017 at 08:44:14PM +0200, Michael Petlan escreveu:
> > Hi Arnaldo!
> >
> > Maybe this would be the right time to incorporate the shell-based
> > perftool-testsuite [1] into perf-test, wouldn't it?
>
> Perhaps its time, yes. Some questions:
>
> Do these tests assume that perf was built in some particular way, i.e.
> as it is packaged for RHEL?
Of course I run the testsuite most often on RHEL, but it should be
distro-agnostic; it has worked on Debian with their perf as well as
with a vanilla kernel/perf build from Linus' repo...
It somewhat assumes having kernel-debuginfo available (but this does
not necessarily mean the kernel-debuginfo RHEL package). It runs
against 'perf' from the path, or against $CMD_PERF if that variable is
defined.
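The binary selection Michael describes could be as small as a single
parameter expansion (a sketch; only $CMD_PERF is from the suite, the
rest is illustrative):

```shell
# Use $CMD_PERF when the caller sets it, otherwise fall back to the
# 'perf' found in $PATH.
PERF=${CMD_PERF:-perf}

# Tests would then invoke "$PERF" instead of a hard-coded binary, e.g.:
#   "$PERF" --version
```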
>
> One thing that came to mind from a report from Kim Phillips, that I
> addressed (to some extent) after this sending this first patchkit to
> Ingo was that perf can be built in many ways, for instance, without
> 'perf probe', or with 'perf probe' but without DWARF support, which will
> allow some features to be tested while others should cause the test
> needing not-builtin features to just return '2', that will make it be
> marked as "Skip" in the output.
>
It has mechanisms for skipping things if they aren't supported, but
definitely not *all* of them. E.g. it detects uprobe/kprobe support,
POWER8 hv_24x7 events support, Intel uncore support, HW breakpoint
events availability, etc. But as I said, perf without the perf-probe
subcommand would probably fail, because I wasn't aware of such a
possibility...
Anyway, it is easily fixable... The suite has a mechanism for skipping
particular tests. If there is a way to detect support for a feature,
it is easy to use it as a condition. DWARF support might be more
difficult, because AFAIK there's no way to find out whether DWARF
support just does not work or is disabled on purpose...
Michael
> - Arnaldo
>
> > It already contains bunch of shell-based perf tests that cover
> > 25+ RH bugs...
> >
> > A little problem might be different design, since the testsuite
> > has multiple levels of hierarchy of sub-sub-sub-tests, like:
> >
[SNIP]
Em Mon, Aug 14, 2017 at 11:01:57PM +0200, Michael Petlan escreveu:
> On Mon, 14 Aug 2017, Arnaldo Carvalho de Melo wrote:
> > Em Mon, Aug 14, 2017 at 08:44:14PM +0200, Michael Petlan escreveu:
> > > Maybe this would be the right time to incorporate the shell-based
> > > perftool-testsuite [1] into perf-test, wouldn't it?
> > Perhaps its time, yes. Some questions:
> > Do these tests assume that perf was built in some particular way, i.e.
> > as it is packaged for RHEL?
>
> Of course I run the testsuite most often on RHEL, but it should be
> distro-agnostic, worked on Debian with their perf as well as with
> vanilla kernel/perf build from Linus' repo...
Right, but I mean more generally, i.e. the only report so far of these
new tests failing came from Kim Phillips, and his setup didn't have
the devel packages needed to build 'perf probe', which is enabled,
AFAIK, in all general purpose distro 'perf' (linux-tools, etc.)
packages.
> It somehow assumes having kernel-debuginfo available (but this does
> not necessarily mean kernel-debuginfo RHEL package). It runs against
> 'perf' from path or against $CMD_PERF if this variable is defined.
Right, that is interesting, to be able to use a development version
while having some other perf version installed.
> > One thing that came to mind from a report from Kim Phillips, that I
> > addressed (to some extent) after this sending this first patchkit to
> > Ingo was that perf can be built in many ways, for instance, without
> > 'perf probe', or with 'perf probe' but without DWARF support, which will
> > allow some features to be tested while others should cause the test
> > needing not-builtin features to just return '2', that will make it be
> > marked as "Skip" in the output.
> It has mechanisms for skipping things if they aren't supported, but
> definitely not *all*. E.g. it detects uprobe/kprobe support, POWER8
> hv_24x7 events support, Intel uncore support, HW breakpoint events
> availablitity, etc. But as I said, perf without perf-probe subcommand
> would probably fail, because I wasn't aware of such possibility...
Yeah, what you have seems great for general purpose distros, while we
have to go on adding tests and getting people to try them in more
environments, to see if everything works as expected, or at least that
we detect what is needed for each test and skip it when the
pre-requisites are not in place.
Right now a test returns 2 for Skip; probably we need a way for the
test to state what needs to be built in, or available from the
processor/system, for it to be able to run.
> Anyway, it is easily fixable... The suite has a mechanism for skipping
> particular tests. If there is a way to detect a feature support, it
> is easy to use it as a condition. The dwarf support might be more
> difficult, because afaik, there's no way to find out whether dwarf
> support just does not work or is disabled on purpose...
Well, I'm detecting this for the tests already in place, for instance,
for:
# perf test ping
probe libc's inet_pton & backtrace it with ping: Ok
#
[acme@jouet linux]$ grep ^# tools/perf/tests/shell/trace+probe_libc_inet_pton.sh
# probe libc's inet_pton & backtrace it with ping
# Installs a probe on libc's inet_pton function, that will use uprobes,
# then use 'perf trace' on a ping to localhost asking for just one packet
# with a backtrace 3 levels deep, check that it is what we expect.
# This needs no debuginfo package, all is done using the libc ELF symtab
# and the CFI info in the binaries.
# Arnaldo Carvalho de Melo <[email protected]>, 2017
[acme@jouet linux]$
# rpm -q glibc-debuginfo iputils-debuginfo
package glibc-debuginfo is not installed
package iputils-debuginfo is not installed
#
But tests that require full DWARF support will be skipped because of
this check:
$ cat tools/perf/tests/shell/lib/probe_vfs_getname.sh
<SNIP>
skip_if_no_debuginfo() {
add_probe_vfs_getname -v 2>&1 | egrep -q "^(Failed to find the path for kernel|Debuginfo-analysis is not supported)" && return 2
return 1
}
So there are ways to figure out that a test fails because support for
what it needs is not builtin.
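A caller of such a check only has to propagate the return code (a
sketch; some_probe_check is a stand-in for a helper like the
skip_if_no_debuginfo above, stubbed here to always report missing
debuginfo):

```shell
# Hypothetical caller pattern: a helper returns 2 when the feature is
# not available; the test propagates that code so 2 surfaces as "Skip"
# while any other non-zero value counts as FAILED.
some_probe_check() { return 2; }	# stub: pretend debuginfo is missing

run_test() {
	some_probe_check
	err=$?
	[ $err -ne 0 ] && return $err	# 2 = Skip, 1 = FAILED
	# ... the real test body would run here ...
	return 0
}
```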
> > > A little problem might be different design, since the testsuite
> > > has multiple levels of hierarchy of sub-sub-sub-tests, like:
Right, having subdirectories in the tests dir to group tests per area
should be no problem, and probably we can ask 'perf test' to test just
some sub hierarchy, say, 'perf probe' tests.
So we should try to merge your tests, making them emit test names and
results like the other 'perf test' entries and allowing for substring
test matching, i.e. the first line of each test should carry a
one-line description used for the 'perf test' indexed output, etc.
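That convention is visible in the trace+probe_libc_inet_pton.sh quoted
earlier, whose first line is its description; extracting it is trivial
(a sketch, the real parsing in 'perf test' may differ):

```shell
# Sketch: derive a test's one-line name from the first comment line
# of its script, as the shell tests discussed in this thread do.
test_description() {
	head -n 1 "$1" | sed 's/^#[[:space:]]*//'
}
```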
What we want is to add more tests that will not disrupt people already
using 'perf test' to validate backports in distros, ports to new
architectures, etc. All these people will see is a growing number of
tests that will -help- them make sure 'perf' works well in their
environments.
- Arnaldo
On Tue, 15 Aug 2017, Arnaldo Carvalho de Melo wrote:
[...]
> > > Perhaps its time, yes. Some questions:
>
> > > Do these tests assume that perf was built in some particular way, i.e.
> > > as it is packaged for RHEL?
> >
> > Of course I run the testsuite most often on RHEL, but it should be
> > distro-agnostic, worked on Debian with their perf as well as with
> > vanilla kernel/perf build from Linus' repo...
>
> Right, but I mean more generally, i.e. the only report so far of these
> new tests failing came from Kim Phillips, and his setup didn't had the
> devel packages needed to build 'perf probe', which is enabled, AFAIK in
> all general purpose distro 'perf' (linux-tools, etc) packages.
So basically this point is OK. My suite should be generic enough to
cover basic general purpose distro 'perf' packages and if we find out
that it fails on some specific configuration, we can always fix it.
>
> > It somehow assumes having kernel-debuginfo available (but this does
> > not necessarily mean kernel-debuginfo RHEL package). It runs against
> > 'perf' from path or against $CMD_PERF if this variable is defined.
>
> Right, that is interesting, to be able to use a development version
> while having some other perf version installed.
So this should be also OK then.
>
> Yeah, what you have seems great for general purpose distros, while we
> have to go on adding tests and trying to have people trying it in more
> different environments to see if everything works as expected or at
> least we detect what is needed for each test and skip when the
> pre-requisites are not in place.
I can make it more "conditional" to avoid failures when something is
not supported. However, it has always served RHEL testing, so such
failures have been useful to warn RHEL QE that some always-available
feature is broken. I think this is solvable.
>
> Right now it returns 2 for Skip, probably we need a way for the test to
> state what needs to be built-in or available from the processor/system
> to be able to perform some test.
It would be great to connect it with the way perf is built, so the
tests could detect e.g. '--call-graph=dwarf' availability from the
build... However, my testsuite has been designed to be standalone, so
what I cannot detect from the environment or from perf itself, I
cannot detect at all, at least for now.
>
> > Anyway, it is easily fixable... The suite has a mechanism for skipping
> > particular tests. If there is a way to detect a feature support, it
> > is easy to use it as a condition. The dwarf support might be more
> > difficult, because afaik, there's no way to find out whether dwarf
> > support just does not work or is disabled on purpose...
>
> Well, I'm detecting this for the tests already in place, for instance,
> for:
>
> [SNIP]
>
> But tests that requires full DWARF support will be skipped because of
> this check:
>
> $ cat tools/perf/tests/shell/lib/probe_vfs_getname.sh
> <SNIP>
> skip_if_no_debuginfo() {
> add_probe_vfs_getname -v 2>&1 | egrep -q "^(Failed to find the path for kernel|Debuginfo-analysis is not supported)" && return 2
> return 1
> }
>
> So there are ways to figure out that a test fails because support for
> what it needs is not builtin.
>
I use a similar way to figure that out:
All the related tests share functions like the following:
check_kprobes_available()
{
test -e /sys/kernel/debug/tracing/kprobe_events
}
check_uprobes_available()
{
test -e /sys/kernel/debug/tracing/uprobe_events
}
[...]
And in a particular test, I check e.g. for uprobes support:
check_uprobes_available
if [ $? -ne 0 ]; then
print_overall_skipped
exit 0
fi
> > > > A little problem might be different design, since the testsuite
> > > > has multiple levels of hierarchy of sub-sub-sub-tests, like:
>
> Right, having subdirectories in the tests dir to group tests per area
> should be no problem, and probably we can ask 'perf test' to test just
> some sub hierarchy, say, 'perf probe' tests.
>
It might be `perf test suite probe`, which would run the tests in the
"base_probe" directory. Or even `perf test suite probe listing`, which
would run base_probe/test_listing.sh ...
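Michael's proposed mapping could be a trivial path lookup (entirely
hypothetical, mirroring his example; neither the subcommand nor the
function below exists anywhere yet):

```shell
# Hypothetical: map "suite <area> [<test>]" to the testsuite's layout,
# e.g. "probe listing" -> base_probe/test_listing.sh, as in Michael's
# example above.
suite_path() {
	area=$1; name=$2
	if [ -n "$name" ]; then
		echo "base_${area}/test_${name}.sh"
	else
		echo "base_${area}"
	fi
}
```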
> So we should try to merge your tests, trying to make them emit test
> names and results like the other 'perf test' entries, and allowing for
> substring tests matching, i.e. the first line of the test should have
> a one line description used for the perf test indexed output, etc.
I don't actually understand the purpose of the substring matching
feature... It is a good idea, but the set of current perf-test names
looks a bit chaotic to me. As it is, it seems to me that this feature
groups together tests that aren't actually related to each other
except for having a common word in their names...
# perf test cpu
3: detect openat syscall event on all cpus : Ok
39: Test cpu map synthesize : Ok
46: Test cpu map print : Ok
Also, `perf test list` prints a list of subtests, so 'list' is a
special word that does not go through substring matching, but from the
outside it is not obvious that it is something different...
>
> What we want is to add more tests that will not disrupt people already
> using 'perf test' to validate backports in distros, ports to new
> architectures, etc. All this people will see is a growing number of
> tests that will -help- them to make sure 'perf' works well in their
> environments.
Sure.
>
> - Arnaldo
Cheers,
Michael
>