From: Stephane Eranian
Date: Mon, 15 Jul 2019 01:14:59 -0700
Subject: Re: [PATCH] Fix perf stat repeat segfault
To: Jiri Olsa
Cc: Numfor Mbiziwo-Tiapo, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Alexander Shishkin, Namhyung Kim, Song Liu, mbd@fb.com, LKML, Ian Rogers
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Jul 15, 2019 at 12:59 AM Jiri Olsa wrote:
>
> On Sun, Jul 14, 2019 at 02:36:42PM -0700, Stephane Eranian wrote:
> > On Sun, Jul 14, 2019 at 1:55 PM Jiri Olsa wrote:
> > >
> > > On Sun, Jul 14, 2019 at 10:44:36PM +0200, Jiri Olsa wrote:
> > > > On Wed, Jul 10, 2019 at 01:45:40PM -0700, Numfor Mbiziwo-Tiapo wrote:
> > > > > When perf stat is called with event groups and the repeat option,
> > > > > a segfault occurs because the cpu ids are stored on each iteration
> > > > > of the repeat, when they should only be stored on the first iteration,
> > > > > which causes a buffer overflow.
> > > > >
> > > > > This can be replicated by running (from the tip directory):
> > > > >
> > > > > make -C tools/perf
> > > > >
> > > > > then running:
> > > > >
> > > > > tools/perf/perf stat -e '{cycles,instructions}' -r 10 ls
> > > > >
> > > > > Since run_idx keeps track of the current iteration of the repeat,
> > > > > only storing the cpu ids on the first iteration (when run_idx < 1)
> > > > > fixes this issue.
> > > > >
> > > > > Signed-off-by: Numfor Mbiziwo-Tiapo
> > > > > ---
> > > > >  tools/perf/builtin-stat.c | 7 ++++---
> > > > >  1 file changed, 4 insertions(+), 3 deletions(-)
> > > > >
> > > > > diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
> > > > > index 63a3afc7f32b..92d6694367e4 100644
> > > > > --- a/tools/perf/builtin-stat.c
> > > > > +++ b/tools/perf/builtin-stat.c
> > > > > @@ -378,9 +378,10 @@ static void workload_exec_failed_signal(int signo __maybe_unused, siginfo_t *inf
> > > > >  	workload_exec_errno = info->si_value.sival_int;
> > > > >  }
> > > > >
> > > > > -static bool perf_evsel__should_store_id(struct perf_evsel *counter)
> > > > > +static bool perf_evsel__should_store_id(struct perf_evsel *counter, int run_idx)
> > > > >  {
> > > > > -	return STAT_RECORD || counter->attr.read_format & PERF_FORMAT_ID;
> > > > > +	return STAT_RECORD || counter->attr.read_format & PERF_FORMAT_ID
> > > > > +		&& run_idx < 1;
> > > >
> > > > we create counters for every iteration, so this can't be
> > > > based on iteration
> > > >
> > > > I think that's just a workaround for memory corruption,
> > > > that's happening for repeating groupped events stats,
> > > > I'll check on this
> > >
> > > how about something like this? we did not cleanup
> > > ids on evlist close, so it kept on raising and
> > > causing corruption in next iterations
> > >
> > not sure, that would realloc on each iteration of the repeats.
>
> well, we need new ids, because we create new events every iteration
>
If you recreate them, then agreed.
It is not clear to me why you need ids when not running in STAT_RECORD mode.

> jirka
>
> >
> > > jirka
> > >
> > >
> > > ---
> > > diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
> > > index ebb46da4dfe5..52459dd5ad0c 100644
> > > --- a/tools/perf/util/evsel.c
> > > +++ b/tools/perf/util/evsel.c
> > > @@ -1291,6 +1291,7 @@ static void perf_evsel__free_id(struct perf_evsel *evsel)
> > >  	xyarray__delete(evsel->sample_id);
> > >  	evsel->sample_id = NULL;
> > >  	zfree(&evsel->id);
> > > +	evsel->ids = 0;
> > >  }
> > >
> > >  static void perf_evsel__free_config_terms(struct perf_evsel *evsel)
> > > @@ -2077,6 +2078,7 @@ void perf_evsel__close(struct perf_evsel *evsel)
> > >
> > >  	perf_evsel__close_fd(evsel);
> > >  	perf_evsel__free_fd(evsel);
> > > +	perf_evsel__free_id(evsel);
> > >  }
> > >
> > >  int perf_evsel__open_per_cpu(struct perf_evsel *evsel,