From: Ian Rogers
Date: Tue, 4 Jan 2022 22:13:29 -0800
Subject: [PATCH v4 26/48] perf evsel: Derive CPUs and threads in alloc_counts
Message-Id: <20220105061351.120843-27-irogers@google.com>
In-Reply-To: <20220105061351.120843-1-irogers@google.com>
References: <20220105061351.120843-1-irogers@google.com>
To: Andi Kleen, Jiri Olsa, Namhyung Kim, John Garry, Kajol Jain,
	"Paul A. Clarke", Arnaldo Carvalho de Melo, Riccardo Mancini,
	Kan Liang, Peter Zijlstra, Ingo Molnar, Mark Rutland,
	Alexander Shishkin, linux-perf-users@vger.kernel.org,
	linux-kernel@vger.kernel.org, Vineet Singh, James Clark,
	Mathieu Poirier, Suzuki K Poulose, Mike Leach, Leo Yan,
	coresight@lists.linaro.org, linux-arm-kernel@lists.infradead.org,
	zhengjun.xing@intel.com
Cc: eranian@google.com, Ian Rogers

Passing the number of CPUs and threads explicitly allows an evsel's
counts to become mismatched with its cpu map. To avoid this, always
derive the counts size from the evsel's cpu map. Change the
openat-syscall-all-cpus test to set the evsel's cpus so that this
derivation works.

Signed-off-by: Ian Rogers
---
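Reviewer note (not part of the commit message): a minimal sketch of the
intended usage after this change. The evsel and the 'cpus'/'threads'
maps, and the surrounding error handling, are assumed here for
illustration only; the point is that the counts are now sized from the
evsel's own maps rather than from caller-supplied ncpus/nthreads:

	/* attach the maps the evsel will be counted on */
	evsel->core.cpus = perf_cpu_map__get(cpus);
	evsel->core.threads = perf_thread_map__get(threads);

	/* counts are sized from those maps; no size arguments needed */
	if (evsel->counts == NULL && evsel__alloc_counts(evsel) < 0)
		return -ENOMEM;

With this, callers no longer need to track ncpus/nthreads just to size
the counts.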
 tools/perf/tests/openat-syscall-all-cpus.c | 10 +---------
 tools/perf/util/counts.c                   |  8 ++++++--
 tools/perf/util/counts.h                   |  2 +-
 tools/perf/util/evsel.c                    |  2 +-
 tools/perf/util/stat.c                     | 13 ++++++-------
 5 files changed, 15 insertions(+), 20 deletions(-)

diff --git a/tools/perf/tests/openat-syscall-all-cpus.c b/tools/perf/tests/openat-syscall-all-cpus.c
index cd3dd463783f..544db0839b3b 100644
--- a/tools/perf/tests/openat-syscall-all-cpus.c
+++ b/tools/perf/tests/openat-syscall-all-cpus.c
@@ -85,15 +85,7 @@ static int test__openat_syscall_event_on_all_cpus(struct test_suite *test __mayb
 		CPU_CLR(cpus->map[cpu], &cpu_set);
 	}
 
-	/*
-	 * Here we need to explicitly preallocate the counts, as if
-	 * we use the auto allocation it will allocate just for 1 cpu,
-	 * as we start by cpu 0.
-	 */
-	if (evsel__alloc_counts(evsel, cpus->nr, 1) < 0) {
-		pr_debug("evsel__alloc_counts(ncpus=%d)\n", cpus->nr);
-		goto out_close_fd;
-	}
+	evsel->core.cpus = perf_cpu_map__get(cpus);
 
 	err = 0;
 
diff --git a/tools/perf/util/counts.c b/tools/perf/util/counts.c
index 582f3aeaf5e4..2b81707b9dba 100644
--- a/tools/perf/util/counts.c
+++ b/tools/perf/util/counts.c
@@ -4,6 +4,7 @@
 #include <string.h>
 #include "evsel.h"
 #include "counts.h"
+#include <perf/threadmap.h>
 #include <linux/zalloc.h>
 
 struct perf_counts *perf_counts__new(int ncpus, int nthreads)
@@ -55,9 +56,12 @@ void evsel__reset_counts(struct evsel *evsel)
 	perf_counts__reset(evsel->counts);
 }
 
-int evsel__alloc_counts(struct evsel *evsel, int ncpus, int nthreads)
+int evsel__alloc_counts(struct evsel *evsel)
 {
-	evsel->counts = perf_counts__new(ncpus, nthreads);
+	struct perf_cpu_map *cpus = evsel__cpus(evsel);
+	int nthreads = perf_thread_map__nr(evsel->core.threads);
+
+	evsel->counts = perf_counts__new(cpus ? cpus->nr : 1, nthreads);
 	return evsel->counts != NULL ? 0 : -ENOMEM;
 }
 
diff --git a/tools/perf/util/counts.h b/tools/perf/util/counts.h
index 7ff36bf6d644..3e275e9c60d1 100644
--- a/tools/perf/util/counts.h
+++ b/tools/perf/util/counts.h
@@ -40,7 +40,7 @@ void perf_counts__delete(struct perf_counts *counts);
 void perf_counts__reset(struct perf_counts *counts);
 
 void evsel__reset_counts(struct evsel *evsel);
-int evsel__alloc_counts(struct evsel *evsel, int ncpus, int nthreads);
+int evsel__alloc_counts(struct evsel *evsel);
 void evsel__free_counts(struct evsel *evsel);
 
 #endif /* __PERF_COUNTS_H */
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index 656c30b988ce..6c9af21776e6 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -1578,7 +1578,7 @@ int __evsel__read_on_cpu(struct evsel *evsel, int cpu, int thread, bool scale)
 	if (FD(evsel, cpu, thread) < 0)
 		return -EINVAL;
 
-	if (evsel->counts == NULL && evsel__alloc_counts(evsel, cpu + 1, thread + 1) < 0)
+	if (evsel->counts == NULL && evsel__alloc_counts(evsel) < 0)
 		return -ENOMEM;
 
 	if (readn(FD(evsel, cpu, thread), &count, nv * sizeof(u64)) <= 0)
diff --git a/tools/perf/util/stat.c b/tools/perf/util/stat.c
index c69b221f5e3e..995cb5003133 100644
--- a/tools/perf/util/stat.c
+++ b/tools/perf/util/stat.c
@@ -152,11 +152,13 @@ static void evsel__free_stat_priv(struct evsel *evsel)
 	zfree(&evsel->stats);
 }
 
-static int evsel__alloc_prev_raw_counts(struct evsel *evsel, int ncpus, int nthreads)
+static int evsel__alloc_prev_raw_counts(struct evsel *evsel)
 {
+	int cpu_map_nr = evsel__nr_cpus(evsel);
+	int nthreads = perf_thread_map__nr(evsel->core.threads);
 	struct perf_counts *counts;
 
-	counts = perf_counts__new(ncpus, nthreads);
+	counts = perf_counts__new(cpu_map_nr, nthreads);
 	if (counts)
 		evsel->prev_raw_counts = counts;
 
@@ -177,12 +179,9 @@ static void evsel__reset_prev_raw_counts(struct evsel *evsel)
 
 static int evsel__alloc_stats(struct evsel *evsel, bool alloc_raw)
 {
-	int ncpus = evsel__nr_cpus(evsel);
-	int nthreads = perf_thread_map__nr(evsel->core.threads);
-
 	if (evsel__alloc_stat_priv(evsel) < 0 ||
-	    evsel__alloc_counts(evsel, ncpus, nthreads) < 0 ||
-	    (alloc_raw && evsel__alloc_prev_raw_counts(evsel, ncpus, nthreads) < 0))
+	    evsel__alloc_counts(evsel) < 0 ||
+	    (alloc_raw && evsel__alloc_prev_raw_counts(evsel) < 0))
 		return -ENOMEM;
 
 	return 0;
-- 
2.34.1.448.ga2b2bfdf31-goog