From: Alexey Bayduraev
To: Arnaldo Carvalho de Melo
Cc: Jiri Olsa, Namhyung Kim, Alexander Shishkin, Peter Zijlstra,
    Ingo Molnar, linux-kernel, Andi Kleen, Adrian Hunter,
    Alexander Antonov, Alexei Budankov, Riccardo Mancini
Subject: [PATCH v10 13/24] perf record: Extend --threads command line option
Date: Mon, 12 Jul 2021 09:46:13 +0300
Message-Id: <6e7d03c2f90f7721a7c8cf8d2952b628e8e7e0de.1626072009.git.alexey.v.bayduraev@linux.intel.com>

Extend the --threads option of the perf record command line interface.
The option value can take the form of masks that specify the cpus to be
monitored with data streaming threads and the layout of those threads
in the system topology. The masks can be filtered using the cpu mask
provided via the -C option.

The specification value can be a user-defined list of masks. Masks
separated by a colon define the cpus to be monitored by one thread, and
the affinity mask of that thread is separated by a slash. For example,
<cpus mask 1>/<affinity mask 1>:<cpus mask 2>/<affinity mask 2>
specifies a parallel threads layout that consists of two threads with
the corresponding assigned cpus to be monitored.
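For illustration, here is a minimal standalone sketch of that two-level
parsing scheme: a colon separates per-thread specs and a slash separates
the monitored cpus mask from the affinity mask. The sketch is not part
of the patch; it only mirrors the strtok_r() scheme that
record__init_thread_user_masks() in the diff below applies to the
option value:

    /*
     * Standalone illustration only: tokenize a user-defined --threads
     * specification into per-thread maps/affinity mask strings.
     */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            char spec[] = "0,2-4/2-4:1,5-7/5-7";
            char *spec_ptr, *tok;
            int t = 0;

            for (tok = strtok_r(spec, ":", &spec_ptr); tok != NULL;
                 tok = strtok_r(NULL, ":", &spec_ptr), t++) {
                    char *mask_ptr;
                    char *maps = strtok_r(tok, "/", &mask_ptr);
                    char *affinity = strtok_r(NULL, "/", &mask_ptr);

                    if (!maps || !affinity)
                            return 1; /* malformed specification */
                    printf("thread %d: maps=%s affinity=%s\n",
                           t, maps, affinity);
            }
            return 0;
    }

Compiled standalone, it prints "thread 0: maps=0,2-4 affinity=2-4" and
"thread 1: maps=1,5-7 affinity=5-7" for the example specification above.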
The specification value can also be a string, e.g. "cpu", "core" or
"socket", meaning the creation of a data streaming thread for every cpu
or core or socket to monitor distinct cpus or cpus grouped by core or
socket; the sketch below illustrates where such groupings come from.

The option provided with no or empty value defaults to the per-cpu
parallel threads layout, creating a data streaming thread for every cpu
being monitored.
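The predefined layouts are derived from the system topology, which perf
reads through the cpu_topology__new() and numa_topology__new() helpers
used in the diff below. As background, the following standalone sketch,
an illustration only and not part of the patch, shows one way per-core
groupings can be obtained from the legacy sysfs thread_siblings_list
files that the perf cpu_topology helpers also read; each distinct value
becomes one streaming thread's cpus/affinity mask under the core layout:

    /*
     * Standalone illustration only: group cpus by core using the
     * legacy thread_siblings_list sysfs files (assumed present).
     */
    #include <stdio.h>
    #include <string.h>

    #define MAX_GROUPS 64

    int main(void)
    {
            char path[128], line[256], seen[MAX_GROUPS][256];
            int cpu, i, nr_seen = 0;

            for (cpu = 0; ; cpu++) {
                    FILE *f;
                    int dup = 0;

                    snprintf(path, sizeof(path),
                             "/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list",
                             cpu);
                    f = fopen(path, "r");
                    if (!f)
                            break; /* no more cpus */
                    if (!fgets(line, sizeof(line), f))
                            line[0] = '\0';
                    fclose(f);
                    line[strcspn(line, "\n")] = '\0';
                    for (i = 0; i < nr_seen; i++)
                            if (!strcmp(seen[i], line))
                                    dup = 1;
                    if (!dup && nr_seen < MAX_GROUPS)
                            strcpy(seen[nr_seen++], line);
            }
            for (i = 0; i < nr_seen; i++)
                    printf("thread %d: cpus/affinity mask %s\n", i, seen[i]);
            return 0;
    }

On a hypothetical machine with two cores of two SMT threads each, this
would print masks like 0,2 and 1,3, matching the per-core grouping that
--threads=core requests.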
Document the --threads option syntax and the parallel data streaming
modes in Documentation/perf-record.txt.

Feature design and implementation are based on prototypes [1], [2].

[1] git clone https://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git -b perf/record_threads
[2] https://lore.kernel.org/lkml/20180913125450.21342-1-jolsa@kernel.org/

Suggested-by: Jiri Olsa
Suggested-by: Namhyung Kim
Acked-by: Andi Kleen
Acked-by: Namhyung Kim
Signed-off-by: Alexey Bayduraev
---
 tools/perf/Documentation/perf-record.txt |  30 ++-
 tools/perf/builtin-record.c              | 314 ++++++++++++++++++++++-
 tools/perf/util/record.h                 |   1 +
 3 files changed, 340 insertions(+), 5 deletions(-)

diff --git a/tools/perf/Documentation/perf-record.txt b/tools/perf/Documentation/perf-record.txt
index ca2771b80fd5..2046b28d9822 100644
--- a/tools/perf/Documentation/perf-record.txt
+++ b/tools/perf/Documentation/perf-record.txt
@@ -695,9 +695,35 @@ measurements:
  wait -n ${perf_pid}
  exit $?
 
---threads::
+--threads=<spec>::
 Write collected trace data into several data files using parallel threads.
-The option creates a data streaming thread for each cpu in the system.
+<spec> value can be a user-defined list of masks. Masks separated by a
+colon define cpus to be monitored by one thread, and the affinity mask
+of that thread is separated by a slash:
+
+    <cpus mask 1>/<affinity mask 1>:<cpus mask 2>/<affinity mask 2>:...
+
+For example, a user specification like the following:
+
+    0,2-4/2-4:1,5-7/5-7
+
+specifies a parallel threads layout that consists of two threads,
+the first thread monitors cpus 0 and 2-4 with the affinity mask 2-4,
+the second monitors cpus 1 and 5-7 with the affinity mask 5-7.
+
+<spec> value can also be a string meaning a predefined parallel threads
+layout:
+
+    cpu    - create new data streaming thread for every monitored cpu
+    core   - create new thread to monitor cpus grouped by a core
+    socket - create new thread to monitor cpus grouped by a socket
+    numa   - create new thread to monitor cpus grouped by a numa domain
+
+Predefined layouts can be used on systems with a large number of cpus
+in order not to spawn one streaming thread per cpu while still avoiding
+LOST events in data directory files. The option provided with no or
+empty value defaults to the cpu layout. Masks defined or provided by
+the option value are filtered through the mask provided by the -C option.
 
 include::intel-hybrid.txt[]
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 65e2d1837bec..4aacba490ca3 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -51,6 +51,7 @@
 #include "util/evlist-hybrid.h"
 #include "asm/bug.h"
 #include "perf.h"
+#include "cputopo.h"
 
 #include <errno.h>
 #include <inttypes.h>
@@ -125,6 +126,15 @@ static const char *thread_msg_tags[THREAD_MSG__MAX] = {
 enum thread_spec {
         THREAD_SPEC__UNDEFINED = 0,
         THREAD_SPEC__CPU,
+        THREAD_SPEC__CORE,
+        THREAD_SPEC__SOCKET,
+        THREAD_SPEC__NUMA,
+        THREAD_SPEC__USER,
+        THREAD_SPEC__MAX,
+};
+
+static const char *thread_spec_tags[THREAD_SPEC__MAX] = {
+        "undefined", "cpu", "core", "socket", "numa", "user"
 };
 
 struct record {
@@ -2816,12 +2826,66 @@ static void record__thread_mask_free(struct thread_mask *mask)
         record__mmap_cpu_mask_free(&mask->affinity);
 }
 
+static int record__thread_mask_or(struct thread_mask *dest, struct thread_mask *src1,
+                                  struct thread_mask *src2)
+{
+        if (src1->maps.nbits != src2->maps.nbits ||
+            dest->maps.nbits != src1->maps.nbits ||
+            src1->affinity.nbits != src2->affinity.nbits ||
+            dest->affinity.nbits != src1->affinity.nbits)
+                return -EINVAL;
+
+        bitmap_or(dest->maps.bits, src1->maps.bits,
+                  src2->maps.bits, src1->maps.nbits);
+        bitmap_or(dest->affinity.bits, src1->affinity.bits,
+                  src2->affinity.bits, src1->affinity.nbits);
+
+        return 0;
+}
+
+static int record__thread_mask_intersects(struct thread_mask *mask_1, struct thread_mask *mask_2)
+{
+        int res1, res2;
+
+        if (mask_1->maps.nbits != mask_2->maps.nbits ||
+            mask_1->affinity.nbits != mask_2->affinity.nbits)
+                return -EINVAL;
+
+        res1 = bitmap_intersects(mask_1->maps.bits, mask_2->maps.bits,
+                                 mask_1->maps.nbits);
+        res2 = bitmap_intersects(mask_1->affinity.bits, mask_2->affinity.bits,
+                                 mask_1->affinity.nbits);
+        if (res1 || res2)
+                return 1;
+
+        return 0;
+}
+
 static int record__parse_threads(const struct option *opt, const char *str, int unset)
 {
+        int s;
         struct record_opts *opts = opt->value;
 
-        if (unset || !str || !strlen(str))
+        if (unset || !str || !strlen(str)) {
                 opts->threads_spec = THREAD_SPEC__CPU;
+        } else {
+                for (s = 1; s < THREAD_SPEC__MAX; s++) {
+                        if (s == THREAD_SPEC__USER) {
+                                opts->threads_user_spec = strdup(str);
+                                opts->threads_spec = THREAD_SPEC__USER;
+                                break;
+                        }
+                        if (!strncasecmp(str, thread_spec_tags[s], strlen(thread_spec_tags[s]))) {
+                                opts->threads_spec = s;
+                                break;
+                        }
+                }
+        }
+
+        pr_debug("threads_spec: %s", thread_spec_tags[opts->threads_spec]);
+        if (opts->threads_spec == THREAD_SPEC__USER)
+                pr_debug("=[%s]", opts->threads_user_spec);
+        pr_debug("\n");
 
         return 0;
 }
@@ -3285,6 +3349,17 @@ static void record__mmap_cpu_mask_init(struct mmap_cpu_mask *mask, struct perf_cpu_map *cpus)
                 set_bit(cpus->map[c], mask->bits);
 }
 
+static void record__mmap_cpu_mask_init_spec(struct mmap_cpu_mask *mask, char *mask_spec)
+{
+        struct perf_cpu_map *cpus;
+
+        cpus = perf_cpu_map__new(mask_spec);
+        if (cpus) {
+                record__mmap_cpu_mask_init(mask, cpus);
+                perf_cpu_map__put(cpus);
+        }
+}
+
 static void record__free_thread_masks(struct record *rec, int nr_threads)
 {
         int t;
@@ -3342,6 +3417,213 @@ static int record__init_thread_cpu_masks(struct record *rec, struct perf_cpu_map *cpus)
         return 0;
 }
 
+static int record__init_thread_masks_spec(struct record *rec, struct perf_cpu_map *cpus,
+                                          char **maps_spec, char **affinity_spec, u32 nr_spec)
+{
+        u32 s;
+        int ret = 0, nr_threads = 0;
+        struct mmap_cpu_mask cpus_mask;
+        struct thread_mask thread_mask, full_mask, *prev_masks;
+
+        ret = record__mmap_cpu_mask_alloc(&cpus_mask, cpu__max_cpu());
+        if (ret)
+                goto out;
+        record__mmap_cpu_mask_init(&cpus_mask, cpus);
+        ret = record__thread_mask_alloc(&thread_mask, cpu__max_cpu());
+        if (ret)
+                goto out_free_cpu_mask;
+        ret = record__thread_mask_alloc(&full_mask, cpu__max_cpu());
+        if (ret)
+                goto out_free_thread_mask;
+        record__thread_mask_clear(&full_mask);
+
+        for (s = 0; s < nr_spec; s++) {
+                record__thread_mask_clear(&thread_mask);
+
+                record__mmap_cpu_mask_init_spec(&thread_mask.maps, maps_spec[s]);
+                record__mmap_cpu_mask_init_spec(&thread_mask.affinity, affinity_spec[s]);
+
+                if (!bitmap_and(thread_mask.maps.bits, thread_mask.maps.bits,
+                                cpus_mask.bits, thread_mask.maps.nbits) ||
+                    !bitmap_and(thread_mask.affinity.bits, thread_mask.affinity.bits,
+                                cpus_mask.bits, thread_mask.affinity.nbits))
+                        continue;
+
+                ret = record__thread_mask_intersects(&thread_mask, &full_mask);
+                if (ret)
+                        goto out_free_full_mask;
+                record__thread_mask_or(&full_mask, &full_mask, &thread_mask);
+
+                prev_masks = rec->thread_masks;
+                rec->thread_masks = realloc(rec->thread_masks,
+                                            (nr_threads + 1) * sizeof(struct thread_mask));
+                if (!rec->thread_masks) {
+                        pr_err("Failed to allocate thread masks\n");
+                        rec->thread_masks = prev_masks;
+                        ret = -ENOMEM;
+                        goto out_free_full_mask;
+                }
+                rec->thread_masks[nr_threads] = thread_mask;
+                if (verbose) {
+                        pr_debug("thread_masks[%d]: addr=", nr_threads);
+                        mmap_cpu_mask__scnprintf(&rec->thread_masks[nr_threads].maps, "maps");
+                        pr_debug("thread_masks[%d]: addr=", nr_threads);
+                        mmap_cpu_mask__scnprintf(&rec->thread_masks[nr_threads].affinity,
+                                                 "affinity");
+                }
+                nr_threads++;
+                ret = record__thread_mask_alloc(&thread_mask, cpu__max_cpu());
+                if (ret)
+                        goto out_free_full_mask;
+        }
+
+        rec->nr_threads = nr_threads;
+        pr_debug("threads: nr_threads=%d\n", rec->nr_threads);
+
+        if (rec->nr_threads <= 0)
+                ret = -EINVAL;
+
+out_free_full_mask:
+        record__thread_mask_free(&full_mask);
+out_free_thread_mask:
+        record__thread_mask_free(&thread_mask);
+out_free_cpu_mask:
+        record__mmap_cpu_mask_free(&cpus_mask);
+out:
+        return ret;
+}
+
+static int record__init_thread_core_masks(struct record *rec, struct perf_cpu_map *cpus)
+{
+        int ret;
+        struct cpu_topology *topo;
+
+        topo = cpu_topology__new();
+        if (!topo)
+                return -EINVAL;
+
+        ret = record__init_thread_masks_spec(rec, cpus, topo->thread_siblings,
+                                             topo->thread_siblings, topo->thread_sib);
+        cpu_topology__delete(topo);
+
+        return ret;
+}
+
+static int record__init_thread_socket_masks(struct record *rec, struct perf_cpu_map *cpus)
+{
+        int ret;
+        struct cpu_topology *topo;
+
+        topo = cpu_topology__new();
+        if (!topo)
+                return -EINVAL;
+
+        ret = record__init_thread_masks_spec(rec, cpus, topo->core_siblings,
+                                             topo->core_siblings, topo->core_sib);
+        cpu_topology__delete(topo);
+
+        return ret;
+}
+
+static int record__init_thread_numa_masks(struct record *rec, struct perf_cpu_map *cpus)
+{
+        u32 s;
+        int ret;
+        char **spec;
+        struct numa_topology *topo;
+
+        topo = numa_topology__new();
+        if (!topo)
+                return -EINVAL;
+        spec = zalloc(topo->nr * sizeof(char *));
+        if (!spec) {
+                ret = -ENOMEM;
+                goto out_delete_topo;
+        }
+        for (s = 0; s < topo->nr; s++)
+                spec[s] = topo->nodes[s].cpus;
+
+        ret = record__init_thread_masks_spec(rec, cpus, spec, spec, topo->nr);
+
+        zfree(&spec);
+
+out_delete_topo:
+        numa_topology__delete(topo);
+
+        return ret;
+}
+
+static int record__init_thread_user_masks(struct record *rec, struct perf_cpu_map *cpus)
+{
+        int t, ret;
+        u32 s, nr_spec = 0;
+        char **maps_spec = NULL, **affinity_spec = NULL, **prev_spec;
+        char *spec, *spec_ptr, *user_spec, *mask, *mask_ptr;
+
+        for (t = 0, user_spec = (char *)rec->opts.threads_user_spec; ; t++, user_spec = NULL) {
+                spec = strtok_r(user_spec, ":", &spec_ptr);
+                if (spec == NULL)
+                        break;
+                pr_debug(" spec[%d]: %s\n", t, spec);
+                mask = strtok_r(spec, "/", &mask_ptr);
+                if (mask == NULL)
+                        break;
+                pr_debug("  maps mask: %s\n", mask);
+                prev_spec = maps_spec;
+                maps_spec = realloc(maps_spec, (nr_spec + 1) * sizeof(char *));
+                if (!maps_spec) {
+                        pr_err("Failed to realloc maps_spec\n");
+                        maps_spec = prev_spec;
+                        ret = -ENOMEM;
+                        goto out_free_all_specs;
+                }
+                maps_spec[nr_spec] = strdup(mask);
+                if (!maps_spec[nr_spec]) {
+                        pr_err("Failed to alloc maps_spec[%d]\n", nr_spec);
+                        ret = -ENOMEM;
+                        goto out_free_all_specs;
+                }
+                mask = strtok_r(NULL, "/", &mask_ptr);
+                if (mask == NULL) {
+                        free(maps_spec[nr_spec]);
+                        ret = -EINVAL;
+                        goto out_free_all_specs;
+                }
+                pr_debug("  affinity mask: %s\n", mask);
+                prev_spec = affinity_spec;
+                affinity_spec = realloc(affinity_spec, (nr_spec + 1) * sizeof(char *));
+                if (!affinity_spec) {
+                        pr_err("Failed to realloc affinity_spec\n");
+                        affinity_spec = prev_spec;
+                        free(maps_spec[nr_spec]);
+                        ret = -ENOMEM;
+                        goto out_free_all_specs;
+                }
+                affinity_spec[nr_spec] = strdup(mask);
+                if (!affinity_spec[nr_spec]) {
+                        pr_err("Failed to alloc affinity_spec[%d]\n", nr_spec);
+                        free(maps_spec[nr_spec]);
+                        ret = -ENOMEM;
+                        goto out_free_all_specs;
+                }
+                nr_spec++;
+        }
+
+        ret = record__init_thread_masks_spec(rec, cpus, maps_spec, affinity_spec, nr_spec);
+
+out_free_all_specs:
+        for (s = 0; s < nr_spec; s++) {
+                if (maps_spec)
+                        free(maps_spec[s]);
+                if (affinity_spec)
+                        free(affinity_spec[s]);
+        }
+        free(affinity_spec);
+        free(maps_spec);
+
+        return ret;
+}
+
 static int record__init_thread_default_masks(struct record *rec, struct perf_cpu_map *cpus)
 {
         int ret;
@@ -3359,12 +3641,33 @@ static int record__init_thread_default_masks(struct record *rec, struct perf_cpu_map *cpus)
 
 static int record__init_thread_masks(struct record *rec)
 {
+        int ret = 0;
         struct perf_cpu_map *cpus = rec->evlist->core.cpus;
 
         if (!record__threads_enabled(rec))
                 return record__init_thread_default_masks(rec, cpus);
 
-        return record__init_thread_cpu_masks(rec, cpus);
+        switch (rec->opts.threads_spec) {
+        case THREAD_SPEC__CPU:
+                ret = record__init_thread_cpu_masks(rec, cpus);
+                break;
+        case THREAD_SPEC__CORE:
+                ret = record__init_thread_core_masks(rec, cpus);
+                break;
+        case THREAD_SPEC__SOCKET:
+                ret = record__init_thread_socket_masks(rec, cpus);
+                break;
+        case THREAD_SPEC__NUMA:
+                ret = record__init_thread_numa_masks(rec, cpus);
+                break;
+        case THREAD_SPEC__USER:
+                ret = record__init_thread_user_masks(rec, cpus);
+                break;
+        default:
+                break;
+        }
+
+        return ret;
 }
 
 static void record__fini_thread_masks(struct record *rec)
@@ -3613,7 +3916,12 @@ int cmd_record(int argc, const char **argv)
 
         err = record__init_thread_masks(rec);
         if (err) {
-                pr_err("record__init_thread_masks failed, error %d\n", err);
+                if (err > 0)
+                        pr_err("ERROR: parallel data streaming masks (--threads) intersect\n");
+                else if (err == -EINVAL)
+                        pr_err("ERROR: invalid parallel data streaming masks (--threads)\n");
+                else
+                        pr_err("record__init_thread_masks failed, error %d\n", err);
                 goto out;
         }
 
diff --git a/tools/perf/util/record.h b/tools/perf/util/record.h
index 4d68b7e27272..3da156498f47 100644
--- a/tools/perf/util/record.h
+++ b/tools/perf/util/record.h
@@ -78,6 +78,7 @@ struct record_opts {
         int ctl_fd_ack;
         bool ctl_fd_close;
         int threads_spec;
+        const char *threads_user_spec;
 };
 
 extern const char * const *record_usage;
-- 
2.19.0