Date: Fri, 5 May 2023 18:52:04 -0300
From: Arnaldo Carvalho de Melo
To: Andrii Nakryiko
Cc: Jiri Olsa, Ian Rogers, Linus Torvalds, Namhyung Kim, Song Liu,
 Andrii Nakryiko, Ingo Molnar, Thomas Gleixner, Clark Williams, Kate Carcia,
 linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
 Adrian Hunter, Changbin Du, Hao Luo, James Clark, Kan Liang, Roman Lozko,
 Stephane Eranian, Thomas Richter, Arnaldo Carvalho de Melo, bpf,
 Alexei Starovoitov, Yang Jihong, Mark Rutland, Paul Clarke
Subject: Re: [PATCH RFC/RFT] perf bpf skels: Stop using vmlinux.h generated
 from BTF, use subset of used structs + CO-RE.
 was Re: BPF skels in perf. Re: [GIT PULL] perf tools changes for v6.4

On Fri, May 05, 2023 at 02:21:56PM -0700, Andrii Nakryiko wrote:
> On Fri, May 5, 2023 at 2:15 PM Jiri Olsa wrote:
> >
> > On Fri, May 05, 2023 at 01:46:30PM -0700, Ian Rogers wrote:
> > > On Fri, May 5, 2023 at 1:43 PM Jiri Olsa wrote:
> > > >
> > > > On Fri, May 05, 2023 at 10:04:47AM -0700, Ian Rogers wrote:
> > > > > On Fri, May 5, 2023 at 9:56 AM Arnaldo Carvalho de Melo wrote:
> > > > > >
> > > > > > On Fri, May 05, 2023 at 10:33:15AM -0300, Arnaldo Carvalho de Melo wrote:
> > > > > > > On Fri, May 05, 2023 at 01:03:14AM +0200, Jiri Olsa wrote:
> > > > > > > That with the preserve_access_index isn't needed, we need just the
> > > > > > > fields that we access in the tools, right?
> > > > > >
> > > > > > I'm now doing build tests of this in many distro containers, without the
> > > > > > two reverts, i.e. BPF skels continue as opt-out as in my pull request, to
> > > > > > test the build and also the functionality tests on the tools using such
> > > > > > BPF skels, see below; no touching of vmlinux nor BTF data during the
> > > > > > build.
> > > > > >
> > > > > > - Arnaldo
> > > > > >
> > > > > > From 882adaee50bc27f85374aeb2fbaa5b76bef60d05 Mon Sep 17 00:00:00 2001
> > > > > > From: Arnaldo Carvalho de Melo
> > > > > > Date: Thu, 4 May 2023 19:03:51 -0300
> > > > > > Subject: [PATCH 1/1] perf bpf skels: Stop using vmlinux.h generated from BTF,
> > > > > >  use subset of used structs + CO-RE
> > > > > >
> > > > > > Linus reported a build break due to using a vmlinux without a BTF ELF
> > > > > > section to generate the vmlinux.h header with bpftool for use in the BPF
> > > > > > tools in tools/perf/util/bpf_skel/*.bpf.c.
> > > > > >
> > > > > > Instead, add a vmlinux.h file with the structs needed, with just the fields
> > > > > > the tools need, marking the structs with __attribute__((preserve_access_index))
> > > > > > so that libbpf's CO-RE code can fix up the struct field offsets.
> > > > > >
> > > > > > In some cases the vmlinux.h file that was being generated by bpftool
> > > > > > from the kernel BTF information was not needed at all; just including
> > > > > > linux/bpf.h, sometimes linux/perf_event.h, was enough, as non-UAPI
> > > > > > types were not being used.
> > > > > >
> > > > > > To keep the patch small, include those UAPI headers from the trimmed-down
> > > > > > vmlinux.h file, which then provides the tools with just the structs and
> > > > > > the subset of their fields needed by them.
> > > > > >
> > > > > > Testing it:
> > > > > >
> > > > > >   # perf lock contention -b find / > /dev/null
> > > >
> > > > I tested 'perf lock con -abv -L rcu_state sleep 1'
> > > > and needed the fix below
> > > >
> > > > jirka
> > >
> > > I thought this was fixed by:
> > > https://lore.kernel.org/lkml/20230427234833.1576130-1-namhyung@kernel.org/
> > > but I think that is just in perf-tools-next.
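
(Coming back to the trimmed-down vmlinux.h described in the patch quoted above:
it boils down to something like the sketch below; the struct and the two fields
shown are just an illustrative subset, not necessarily the exact set the perf
skels end up carrying.)

  /*
   * Minimal, illustrative vmlinux.h replacement: declare only the kernel
   * structs and fields the .bpf.c code actually accesses, marked with
   * preserve_access_index so that libbpf's CO-RE logic relocates the real
   * field offsets against the running kernel's BTF at load time.
   */
  #ifndef __VMLINUX_H
  #define __VMLINUX_H

  #include <linux/bpf.h>           /* UAPI types come from the regular headers */
  #include <linux/perf_event.h>

  struct task_struct {
          int pid;
          int tgid;
  } __attribute__((preserve_access_index));

  #endif /* __VMLINUX_H */

With something like this the BPF programs keep doing plain p->pid style
accesses, and no vmlinux or BTF data is needed at build time; the offsets are
fixed up when the object is loaded.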
> >
> > ah ok, missed that one
>
> Please try validating with veristat to check if all of perf's .bpf.o
> files are successful. Veristat is part of selftests and can be built
> with just `make -C tools/testing/selftests/bpf veristat`. After that:
>
>   sudo ~/bin/veristat tools/perf/util/bpf_skel/.tmp/*.bpf.o
>
> This is a surer way to check that BPF object files are ok, at least on
> your currently running kernel, than trying to exercise each BPF
> program through perf commands.

[acme@quaco perf-tools]$ sudo tools/testing/selftests/bpf/veristat /tmp/build/perf-tools/util/bpf_skel/.tmp/*.bpf.o
Processing 'bperf_cgroup.bpf.o'...
Processing 'bperf_follower.bpf.o'...
Processing 'bperf_leader.bpf.o'...
Processing 'bpf_prog_profiler.bpf.o'...
Processing 'func_latency.bpf.o'...
Processing 'kwork_trace.bpf.o'...
Processing 'lock_contention.bpf.o'...
Processing 'off_cpu.bpf.o'...
Processing 'sample_filter.bpf.o'...
File                     Program                          Verdict  Duration (us)  Insns   States  Peak states
-----------------------  -------------------------------  -------  -------------  ------  ------  -----------
bperf_cgroup.bpf.o       on_cgrp_switch                   success  6479           17025   417     174
bperf_cgroup.bpf.o       trigger_read                     success  6370           17025   417     174
bperf_follower.bpf.o     fexit_XXX                        failure  0              0       0       0
bperf_leader.bpf.o       on_switch                        success  360            49      3       3
bpf_prog_profiler.bpf.o  fentry_XXX                       failure  0              0       0       0
bpf_prog_profiler.bpf.o  fexit_XXX                        failure  0              0       0       0
func_latency.bpf.o       func_begin                       success  351            69      6       6
func_latency.bpf.o       func_end                         success  318            158     15      15
kwork_trace.bpf.o        latency_softirq_entry            success  334            108     10      10
kwork_trace.bpf.o        latency_softirq_raise            success  896            1993    34      34
kwork_trace.bpf.o        latency_workqueue_activate_work  success  333            46      4       4
kwork_trace.bpf.o        latency_workqueue_execute_start  success  1112           2219    41      41
kwork_trace.bpf.o        report_irq_handler_entry         success  1067           2118    34      34
kwork_trace.bpf.o        report_irq_handler_exit          success  334            110     10      10
kwork_trace.bpf.o        report_softirq_entry             success  897            1993    34      34
kwork_trace.bpf.o        report_softirq_exit              success  329            108     10      10
kwork_trace.bpf.o        report_workqueue_execute_end     success  1124           2219    41      41
kwork_trace.bpf.o        report_workqueue_execute_start   success  295            46      4       4
lock_contention.bpf.o    collect_lock_syms                failure  0              0       0       0
lock_contention.bpf.o    contention_begin                 failure  0              0       0       0
lock_contention.bpf.o    contention_end                   failure  0              0       0       0
off_cpu.bpf.o            on_newtask                       success  387            37      3       3
off_cpu.bpf.o            on_switch                        success  536            220     20      20
sample_filter.bpf.o      perf_sample_filter               success  190443         190237  11173   923
-----------------------  -------------------------------  -------  -------------  ------  ------  -----------
Done. Processed 9 files, 0 programs. Skipped 24 files, 0 programs.
[acme@quaco perf-tools]$

What extra info can we get from these "failure" lines?

- Arnaldo
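
PS: for those "failure" lines, I'm assuming veristat's verbose mode (-v) still
dumps the verifier log for the programs it could not load, which should show
why each one was rejected, e.g.:

  sudo tools/testing/selftests/bpf/veristat -v /tmp/build/perf-tools/util/bpf_skel/.tmp/lock_contention.bpf.o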