Date: Tue, 15 Aug 2017 12:23:51 -0300
From: Arnaldo Carvalho de Melo
To: Michael Petlan
Cc: Arnaldo Carvalho de Melo, tmricht@linux.vnet.ibm.com, namhyung@kernel.org,
    dsahern@gmail.com, Jiri Olsa, tglx@linutronix.de, hpa@zytor.com,
    mingo@kernel.org, adrian.hunter@intel.com, wangnan0@huawei.com,
    linux-kernel@vger.kernel.org
Subject: Re: [tip:perf/core] perf test shell: Install shell tests
Message-ID: <20170815152351.GA22988@kernel.org>
References: <20170814202854.GC2641@redhat.com>

Em Mon, Aug 14, 2017 at 11:01:57PM +0200, Michael Petlan escreveu:
> On Mon, 14 Aug 2017, Arnaldo Carvalho de Melo wrote:
> > Em Mon, Aug 14, 2017 at 08:44:14PM +0200, Michael Petlan escreveu:
> > > Maybe this would be the right time to incorporate the shell-based
> > > perftool-testsuite [1] into perf-test, wouldn't it?

> > Perhaps it's time, yes. Some questions:

> > Do these tests assume that perf was built in some particular way, i.e.
> > as it is packaged for RHEL?

> Of course I run the testsuite most often on RHEL, but it should be
> distro-agnostic; it worked on Debian with their perf as well as with a
> vanilla kernel/perf build from Linus' repo...

Right, but I mean more generally, i.e. the only report so far of these new
tests failing came from Kim Phillips, and his setup didn't have the devel
packages needed to build 'perf probe', which is enabled, AFAIK, in all
general purpose distro 'perf' (linux-tools, etc.) packages.

> It somehow assumes having kernel-debuginfo available (but this does
> not necessarily mean the kernel-debuginfo RHEL package). It runs against
> 'perf' from the path or against $CMD_PERF if this variable is defined.

Right, that is interesting: being able to use a development version while
having some other perf version installed.

> > One thing that came to mind from a report from Kim Phillips, that I
> > addressed (to some extent) after sending this first patchkit to Ingo,
> > was that perf can be built in many ways, for instance without
> > 'perf probe', or with 'perf probe' but without DWARF support. That
> > allows some features to be tested, while tests needing not-builtin
> > features should just return '2', which will make them be marked as
> > "Skip" in the output.

> It has mechanisms for skipping things if they aren't supported, but
> definitely not *all*. E.g. it detects uprobe/kprobe support, POWER8
> hv_24x7 events support, Intel uncore support, HW breakpoint events
> availability, etc. But as I said, perf without the perf-probe subcommand
> would probably fail, because I wasn't aware of such a possibility...
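That case should be detectable from the test scripts themselves, something
along these lines (just a sketch: the helper name is made up here, and the
error string perf prints for a not-built-in subcommand is from memory, so
double check it against an actual probe-less build):

  # Illustrative helper: return 2 (Skip) when the 'probe' subcommand is not
  # available in the perf binary being tested, 0 otherwise.
  skip_if_no_perf_probe() {
	# perf says the subcommand "is not a perf-command" when it was not
	# built in (assumption: the exact wording may differ between versions)
	perf probe 2>&1 | grep -q "is not a perf-command" && return 2
	return 0
  }

  # At the top of any test that needs 'perf probe':
  skip_if_no_perf_probe || exit $?

With something like that in tests/shell/lib/, the perf-probe dependent tests
could be skipped instead of failing on such builds.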
Yeah, what you have seems great for general purpose distros, while we have
to keep adding tests and getting people to try them in more diverse
environments, to see if everything works as expected, or at least to detect
what each test needs and skip it when the prerequisites are not in place.
Right now a test returns 2 for Skip; we probably need a way for a test to
state what has to be built in or available from the processor/system for it
to run.

> Anyway, it is easily fixable... The suite has a mechanism for skipping
> particular tests. If there is a way to detect feature support, it is
> easy to use it as a condition. The DWARF support might be more
> difficult, because AFAIK there's no way to find out whether DWARF
> support just does not work or is disabled on purpose...

Well, I'm detecting this for the tests already in place, for instance, for:

# perf test ping
probe libc's inet_pton & backtrace it with ping: Ok
#

[acme@jouet linux]$ grep ^# tools/perf/tests/shell/trace+probe_libc_inet_pton.sh
# probe libc's inet_pton & backtrace it with ping
# Installs a probe on libc's inet_pton function, that will use uprobes,
# then use 'perf trace' on a ping to localhost asking for just one packet
# with a backtrace 3 levels deep, check that it is what we expect.
# This needs no debuginfo package, all is done using the libc ELF symtab
# and the CFI info in the binaries.
# Arnaldo Carvalho de Melo, 2017
[acme@jouet linux]$
# rpm -q glibc-debuginfo iputils-debuginfo
package glibc-debuginfo is not installed
package iputils-debuginfo is not installed
#

But tests that require full DWARF support will be skipped because of this
check:

$ cat tools/perf/tests/shell/lib/probe_vfs_getname.sh
skip_if_no_debuginfo() {
	add_probe_vfs_getname -v 2>&1 | egrep -q "^(Failed to find the path for kernel|Debuginfo-analysis is not supported)" && return 2
	return 1
}

So there are ways to figure out that a test fails because support for what
it needs is not built in.

> > > A little problem might be the different design, since the testsuite
> > > has multiple levels of hierarchy of sub-sub-sub-tests, like:

Right, having subdirectories in the tests dir to group tests per area
should be no problem, and we can probably ask 'perf test' to run just some
sub-hierarchy, say, the 'perf probe' tests.

So we should try to merge your tests, making them emit test names and
results like the other 'perf test' entries, and allowing for substring test
matching, i.e. the first line of each test should have a one-line
description used for the 'perf test' indexed output, etc.

What we want is to add more tests without disrupting people already using
'perf test' to validate backports in distros, ports to new architectures,
etc. All these people will see is a growing number of tests that will
-help- them to make sure 'perf' works well in their environments.

- Arnaldo
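PS: For completeness, a test built around the skip_if_no_debuginfo() helper
shown above ends up shaped roughly like this (a sketch only: the description
line, the cleanup command and the way the lib is sourced are illustrative,
the scripts under tools/perf/tests/shell/ are the reference):

  # add a probe on vfs_getname and check it fires (hypothetical description
  # line, this is what the 'perf test' indexed output would print)
  . $(dirname $0)/lib/probe_vfs_getname.sh

  # Try to add the probe; if that fails, decide whether the failure is a
  # missing-debuginfo Skip (2) or a real failure (1).
  add_probe_vfs_getname || skip_if_no_debuginfo
  err=$?

  # ... when err is 0, run the checks that actually use the probe here ...

  # Remove the probe again; -d deletes the event, -q keeps it quiet
  # (assumption: the event name/glob is illustrative).
  perf probe -q -d 'probe:vfs_getname*'

  exit $err

The exit status is what 'perf test' looks at: 0 is Ok, 2 is Skip, and any
other value is treated as a failure.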