From: Wei Li
To: Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Andi Kleen, Alexey Budankov, Adrian Hunter
Cc: Peter Zijlstra, Ingo Molnar, linux-kernel@vger.kernel.org
Subject: [PATCH 1/2] perf stat: Fix segfault when counting armv8_pmu events
Date: Tue, 22 Sep 2020 11:13:45 +0800
Message-ID: <20200922031346.15051-2-liwei391@huawei.com>
In-Reply-To: <20200922031346.15051-1-liwei391@huawei.com>
References: <20200922031346.15051-1-liwei391@huawei.com>

When executing perf stat on armv8_pmu events with a workload, it reports a segfault:
(gdb) bt
#0  0x0000000000603fc8 in perf_evsel__close_fd_cpu (evsel=, cpu=)
    at evsel.c:122
#1  perf_evsel__close_cpu (evsel=evsel@entry=0x716e950, cpu=7) at evsel.c:156
#2  0x00000000004d4718 in evlist__close (evlist=0x70a7cb0) at util/evlist.c:1242
#3  0x0000000000453404 in __run_perf_stat (argc=3, argc@entry=1, argv=0x30,
    argv@entry=0xfffffaea2f90, run_idx=119, run_idx@entry=1701998435)
    at builtin-stat.c:929
#4  0x0000000000455058 in run_perf_stat (run_idx=1701998435, argv=0xfffffaea2f90,
    argc=1) at builtin-stat.c:947
#5  cmd_stat (argc=1, argv=0xfffffaea2f90) at builtin-stat.c:2357
#6  0x00000000004bb888 in run_builtin (p=p@entry=0x9764b8, argc=argc@entry=4,
    argv=argv@entry=0xfffffaea2f90) at perf.c:312
#7  0x00000000004bbb54 in handle_internal_command (argc=argc@entry=4,
    argv=argv@entry=0xfffffaea2f90) at perf.c:364
#8  0x0000000000435378 in run_argv (argcp=, argv=) at perf.c:408
#9  main (argc=4, argv=0xfffffaea2f90) at perf.c:538

After debugging, I found the root cause: the xyarray of fds is created by
evsel__open_per_thread(), which ignores the cpu passed to
create_perf_stat_counter(), while the evsel's cpumap is assigned the
corresponding PMU's cpumap in __add_event(). Thus the xyarray of fds is sized
with the ncpus of the dummy cpumap, and an out-of-bounds 'cpu' index is later
used in perf_evsel__close_fd_cpu().

To address this, add a flag to mark this situation and avoid using the
affinity technique when closing/enabling/disabling events.
Fixes: 7736627b865d ("perf stat: Use affinity for closing file descriptors")
Fixes: 704e2f5b700d ("perf stat: Use affinity for enabling/disabling events")
Signed-off-by: Wei Li <liwei391@huawei.com>
---
 tools/lib/perf/include/internal/evlist.h |  1 +
 tools/perf/builtin-stat.c                |  3 +++
 tools/perf/util/evlist.c                 | 23 ++++++++++++++++++++++-
 3 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/tools/lib/perf/include/internal/evlist.h b/tools/lib/perf/include/internal/evlist.h
index 2d0fa02b036f..c02d7e583846 100644
--- a/tools/lib/perf/include/internal/evlist.h
+++ b/tools/lib/perf/include/internal/evlist.h
@@ -17,6 +17,7 @@ struct perf_evlist {
 	struct list_head	 entries;
 	int			 nr_entries;
 	bool			 has_user_cpus;
+	bool			 open_per_thread;
 	struct perf_cpu_map	*cpus;
 	struct perf_cpu_map	*all_cpus;
 	struct perf_thread_map	*threads;
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index fddc97cac984..6e6ceacce634 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -725,6 +725,9 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
 	if (group)
 		perf_evlist__set_leader(evsel_list);
 
+	if (!(target__has_cpu(&target) && !target__has_per_thread(&target)))
+		evsel_list->core.open_per_thread = true;
+
 	if (affinity__setup(&affinity) < 0)
 		return -1;
 
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index e3fa3bf7498a..bf8a3ccc599f 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -383,6 +383,15 @@ void evlist__disable(struct evlist *evlist)
 	int cpu, i, imm = 0;
 	bool has_imm = false;
 
+	if (evlist->core.open_per_thread) {
+		evlist__for_each_entry(evlist, pos) {
+			if (pos->disabled || !evsel__is_group_leader(pos) || !pos->core.fd)
+				continue;
+			evsel__disable(pos);
+		}
+		goto out;
+	}
+
 	if (affinity__setup(&affinity) < 0)
 		return;
 
@@ -414,6 +423,7 @@ void evlist__disable(struct evlist *evlist)
 		pos->disabled = true;
 	}
 
+out:
 	evlist->enabled = false;
 }
 
@@ -423,6 +433,15 @@ void evlist__enable(struct evlist *evlist)
 	struct affinity affinity;
 	int cpu, i;
 
+	if (evlist->core.open_per_thread) {
+		evlist__for_each_entry(evlist, pos) {
+			if (!evsel__is_group_leader(pos) || !pos->core.fd)
+				continue;
+			evsel__enable(pos);
+		}
+		goto out;
+	}
+
 	if (affinity__setup(&affinity) < 0)
 		return;
 
@@ -444,6 +463,7 @@ void evlist__enable(struct evlist *evlist)
 		pos->disabled = false;
 	}
 
+out:
 	evlist->enabled = true;
 }
 
@@ -1223,9 +1243,10 @@ void evlist__close(struct evlist *evlist)
 	/*
 	 * With perf record core.cpus is usually NULL.
+	 * Or perf stat may open events per-thread.
 	 * Use the old method to handle this for now.
 	 */
-	if (!evlist->core.cpus) {
+	if (evlist->core.open_per_thread || !evlist->core.cpus) {
 		evlist__for_each_entry_reverse(evlist, evsel)
 			evsel__close(evsel);
 		return;
-- 
2.17.1