Date: Wed, 30 Jun 2021 13:26:09 -0300
From: Arnaldo Carvalho de Melo
To: Alexey Bayduraev
Cc: Jiri Olsa, Namhyung Kim, Alexander Shishkin, Peter Zijlstra,
    Ingo Molnar, linux-kernel, Andi Kleen, Adrian Hunter,
    Alexander Antonov, Alexei Budankov, Riccardo Mancini
Subject: Re: [PATCH v8 02/22] perf record: Introduce thread specific data array
In-Reply-To: <54085f942fb8deedc617732b4716cb85a5c6ebfb.1625065643.git.alexey.v.bayduraev@linux.intel.com>
X-Url: http://acmel.wordpress.com
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Jun 30, 2021 at 06:54:41PM +0300, Alexey Bayduraev wrote:
> Introduce thread specific data object and array of such objects
> to store and manage thread local data. Implement functions to
> allocate, initialize, finalize and release thread specific data.
>
> Thread local maps and overwrite_maps arrays keep pointers to
> mmap buffer objects to serve according to maps thread mask.
> Thread local pollfd array keeps event fds connected to mmaps
> buffers according to maps thread mask.
>
> Thread control commands are delivered via thread local comm pipes
> and ctlfd_pos fd. External control commands (--control option)
> are delivered via evlist ctlfd_pos fd and handled by the main
> tool thread.
>
> Acked-by: Namhyung Kim
> Signed-off-by: Alexey Bayduraev
> ---
>  tools/lib/api/fd/array.c    |  17 ++++
>  tools/lib/api/fd/array.h    |   1 +
>  tools/perf/builtin-record.c | 196 +++++++++++++++++++++++++++++++++++-
>  3 files changed, 211 insertions(+), 3 deletions(-)
>
> diff --git a/tools/lib/api/fd/array.c b/tools/lib/api/fd/array.c
> index 5e6cb9debe37..de8bcbaea3f1 100644
> --- a/tools/lib/api/fd/array.c
> +++ b/tools/lib/api/fd/array.c
> @@ -88,6 +88,23 @@ int fdarray__add(struct fdarray *fda, int fd, short revents, enum fdarray_flags
>  	return pos;
>  }
>
> +int fdarray__clone(struct fdarray *fda, int pos, struct fdarray *base)
> +{
> +	struct pollfd *entry;
> +	int npos;
> +
> +	if (pos >= base->nr)
> +		return -EINVAL;
> +
> +	entry = &base->entries[pos];
> +
> +	npos = fdarray__add(fda, entry->fd, entry->events, base->priv[pos].flags);
> +	if (npos >= 0)
> +		fda->priv[npos] = base->priv[pos];
> +
> +	return npos;
> +}
> +
>  int fdarray__filter(struct fdarray *fda, short revents,
>  		    void (*entry_destructor)(struct fdarray *fda, int fd, void *arg),
>  		    void *arg)
> diff --git a/tools/lib/api/fd/array.h b/tools/lib/api/fd/array.h
> index 7fcf21a33c0c..4a03da7f1fc1 100644
> --- a/tools/lib/api/fd/array.h
> +++ b/tools/lib/api/fd/array.h
> @@ -42,6 +42,7 @@ struct fdarray *fdarray__new(int nr_alloc, int nr_autogrow);
>  void fdarray__delete(struct fdarray *fda);
>
>  int fdarray__add(struct fdarray *fda, int fd, short revents, enum fdarray_flags flags);
> +int fdarray__clone(struct fdarray *fda, int pos, struct fdarray *base);
>  int fdarray__poll(struct fdarray *fda, int timeout);
>  int fdarray__filter(struct fdarray *fda, short revents,
>  		    void (*entry_destructor)(struct fdarray *fda, int fd, void *arg),

Please split the fdarray.[ch] parts into a separate patch; the rest
then uses it in a second patch.
If, theoretically, we needed to revert the builtin-record.c change, we
could do it with 'git revert' instead of a patch that removes it while
leaving the fdarray__clone part, which at the time of the revert could
already be in use in other parts of the code.

> diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
> index 31b3a515abc1..11ce64b23db4 100644
> --- a/tools/perf/builtin-record.c
> +++ b/tools/perf/builtin-record.c
> @@ -58,6 +58,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #ifdef HAVE_EVENTFD_SUPPORT
> @@ -92,6 +93,23 @@ struct thread_mask {
>  	struct mmap_cpu_mask affinity;
>  };
>
> +struct thread_data {
> +	pid_t tid;
> +	struct thread_mask *mask;
> +	struct {
> +		int msg[2];
> +		int ack[2];
> +	} pipes;
> +	struct fdarray pollfd;
> +	int ctlfd_pos;
> +	struct mmap **maps;
> +	struct mmap **overwrite_maps;
> +	int nr_mmaps;

Move nr_mmaps to after ctlfd_pos.

> +	struct record *rec;
> +	unsigned long long samples;
> +	unsigned long waking;
> +};
> +
>  struct record {
>  	struct perf_tool tool;
>  	struct record_opts opts;
> @@ -117,6 +135,7 @@ struct record {
>  	struct mmap_cpu_mask affinity_mask;
>  	unsigned long output_max_size;      /* = 0: unlimited */
>  	struct thread_mask *thread_masks;
> +	struct thread_data *thread_data;
>  	int nr_threads;
>  };
>
> @@ -847,9 +866,174 @@ static int record__kcore_copy(struct machine *machine, struct perf_data *data)
>  	return kcore_copy(from_dir, kcore_dir);
>  }
>
> +static int record__thread_data_init_pipes(struct thread_data *thread_data)
> +{
> +	if (pipe(thread_data->pipes.msg) || pipe(thread_data->pipes.ack)) {
> +		pr_err("Failed to create thread communication pipes: %s\n", strerror(errno));
> +		return -ENOMEM;
> +	}

If one fails you should clean up the other.

> +
> +	pr_debug2("thread_data[%p]: msg=[%d,%d], ack=[%d,%d]\n", thread_data,
> +		  thread_data->pipes.msg[0], thread_data->pipes.msg[1],
> +		  thread_data->pipes.ack[0], thread_data->pipes.ack[1]);
> +
> +	return 0;
> +}
> +
> +static int record__thread_data_init_maps(struct thread_data *thread_data, struct evlist *evlist)
> +{
> +	int m, tm, nr_mmaps = evlist->core.nr_mmaps;
> +	struct mmap *mmap = evlist->mmap;
> +	struct mmap *overwrite_mmap = evlist->overwrite_mmap;
> +	struct perf_cpu_map *cpus = evlist->core.cpus;
> +
> +	thread_data->nr_mmaps = bitmap_weight(thread_data->mask->maps.bits,
> +					      thread_data->mask->maps.nbits);
> +	if (mmap) {
> +		thread_data->maps = zalloc(thread_data->nr_mmaps * sizeof(struct mmap *));
> +		if (!thread_data->maps) {
> +			pr_err("Failed to allocate maps thread data\n");
> +			return -ENOMEM;
> +		}
> +	}
> +	if (overwrite_mmap) {
> +		thread_data->overwrite_maps = zalloc(thread_data->nr_mmaps * sizeof(struct mmap *));
> +		if (!thread_data->overwrite_maps) {
> +			pr_err("Failed to allocate overwrite maps thread data\n");
> +			return -ENOMEM;
> +		}

Ditto, release the allocated resources on error exit.

> +	}
> +	pr_debug2("thread_data[%p]: nr_mmaps=%d, maps=%p, ow_maps=%p\n", thread_data,
> +		  thread_data->nr_mmaps, thread_data->maps, thread_data->overwrite_maps);
> +
> +	for (m = 0, tm = 0; m < nr_mmaps && tm < thread_data->nr_mmaps; m++) {
> +		if (test_bit(cpus->map[m], thread_data->mask->maps.bits)) {
> +			if (thread_data->maps) {
> +				thread_data->maps[tm] = &mmap[m];
> +				pr_debug2("thread_data[%p]: maps[%d] -> mmap[%d], cpus[%d]\n",
> +					  thread_data, tm, m, cpus->map[m]);
> +			}
> +			if (thread_data->overwrite_maps) {
> +				thread_data->overwrite_maps[tm] = &overwrite_mmap[m];
> +				pr_debug2("thread_data[%p]: ow_maps[%d] -> ow_mmap[%d], cpus[%d]\n",
> +					  thread_data, tm, m, cpus->map[m]);
> +			}
> +			tm++;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static int record__thread_data_init_pollfd(struct thread_data *thread_data, struct evlist *evlist)
> +{
> +	int f, tm, pos;
> +	struct mmap *map, *overwrite_map;
> +
> +	fdarray__init(&thread_data->pollfd, 64);
> +
> +	for (tm = 0; tm < thread_data->nr_mmaps; tm++) {
> +		map = thread_data->maps ? thread_data->maps[tm] : NULL;
> +		overwrite_map = thread_data->overwrite_maps ?
> +				thread_data->overwrite_maps[tm] : NULL;
> +
> +		for (f = 0; f < evlist->core.pollfd.nr; f++) {
> +			void *ptr = evlist->core.pollfd.priv[f].ptr;
> +
> +			if ((map && ptr == map) || (overwrite_map && ptr == overwrite_map)) {
> +				pos = fdarray__clone(&thread_data->pollfd, f, &evlist->core.pollfd);
> +				if (pos < 0)
> +					return pos;
> +				pr_debug2("thread_data[%p]: pollfd[%d] <- event_fd=%d\n",
> +					  thread_data, pos, evlist->core.pollfd.entries[f].fd);
> +			}
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static int record__alloc_thread_data(struct record *rec, struct evlist *evlist)
> +{
> +	int t, ret;
> +	struct thread_data *thread_data;
> +
> +	rec->thread_data = zalloc(rec->nr_threads * sizeof(*(rec->thread_data)));
> +	if (!rec->thread_data) {
> +		pr_err("Failed to allocate thread data\n");
> +		return -ENOMEM;
> +	}
> +	thread_data = rec->thread_data;
> +
> +	for (t = 0; t < rec->nr_threads; t++) {
> +		thread_data[t].rec = rec;
> +		thread_data[t].mask = &rec->thread_masks[t];
> +		ret = record__thread_data_init_maps(&thread_data[t], evlist);
> +		if (ret)
> +			return ret;

Also release allocated resources on exit.

> +		ret = record__thread_data_init_pollfd(&thread_data[t], evlist);

So record__thread_data_init_pollfd() can fail. You emitted a warning
above for the zalloc() failure, and record__thread_data_init_maps()
emits error messages as well; for consistency please emit one here or
in record__thread_data_init_pollfd() when fdarray__clone() fails.

> +		if (ret)
> +			return ret;

Release resources on exit.

> +		if (t) {
> +			thread_data[t].tid = -1;
> +			ret = record__thread_data_init_pipes(&thread_data[t]);
> +			if (ret)
> +				return ret;

Release resources on exit.

> +			thread_data[t].ctlfd_pos = fdarray__add(&thread_data[t].pollfd,
> +								thread_data[t].pipes.msg[0],
> +								POLLIN | POLLERR | POLLHUP,
> +								fdarray_flag__nonfilterable);
> +			if (thread_data[t].ctlfd_pos < 0)
> +				return -ENOMEM;

pr_err() and release resources.

> +			pr_debug2("thread_data[%p]: pollfd[%d] <- ctl_fd=%d\n",
> +				  thread_data, thread_data[t].ctlfd_pos,
> +				  thread_data[t].pipes.msg[0]);
> +		} else {
> +			thread_data[t].tid = syscall(SYS_gettid);
> +			if (evlist->ctl_fd.pos == -1)
> +				continue;
> +			thread_data[t].ctlfd_pos = fdarray__clone(&thread_data[t].pollfd,
> +								  evlist->ctl_fd.pos,
> +								  &evlist->core.pollfd);
> +			if (thread_data[t].ctlfd_pos < 0)
> +				return -ENOMEM;

Ditto.

> +			pr_debug2("thread_data[%p]: pollfd[%d] <- ctl_fd=%d\n",
> +				  thread_data, thread_data[t].ctlfd_pos,
> +				  evlist->core.pollfd.entries[evlist->ctl_fd.pos].fd);
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static void record__free_thread_data(struct record *rec)
> +{
> +	int t;
> +
> +	if (rec->thread_data == NULL)
> +		return;
> +
> +	for (t = 0; t < rec->nr_threads; t++) {
> +		if (rec->thread_data[t].pipes.msg[0])
> +			close(rec->thread_data[t].pipes.msg[0]);

Just to make this consistent with the zfree() use below, please init
rec->thread_data[t].pipes.msg[0] to zero (probably best would be to -1).

> +		if (rec->thread_data[t].pipes.msg[1])
> +			close(rec->thread_data[t].pipes.msg[1]);
> +		if (rec->thread_data[t].pipes.ack[0])
> +			close(rec->thread_data[t].pipes.ack[0]);
> +		if (rec->thread_data[t].pipes.ack[1])
> +			close(rec->thread_data[t].pipes.ack[1]);
> +		zfree(&rec->thread_data[t].maps);
> +		zfree(&rec->thread_data[t].overwrite_maps);
> +		fdarray__exit(&rec->thread_data[t].pollfd);
> +	}
> +
> +	zfree(&rec->thread_data);
> +}
> +
>  static int record__mmap_evlist(struct record *rec,
>  			       struct evlist *evlist)
>  {
> +	int ret;
>  	struct record_opts *opts = &rec->opts;
>  	bool auxtrace_overwrite = opts->auxtrace_snapshot_mode ||
>  				  opts->auxtrace_sample_mode;
> @@ -880,6 +1064,14 @@ static int record__mmap_evlist(struct record *rec,
>  			return -EINVAL;
>  		}
>  	}
> +
> +	if (evlist__initialize_ctlfd(evlist, opts->ctl_fd, opts->ctl_fd_ack))
> +		return -1;
> +
> +	ret = record__alloc_thread_data(rec, evlist);
> +	if (ret)
> +		return ret;
> +
>  	return 0;
>  }
>
> @@ -1880,9 +2072,6 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
>  		evlist__start_workload(rec->evlist);
>  	}
>
> -	if (evlist__initialize_ctlfd(rec->evlist, opts->ctl_fd, opts->ctl_fd_ack))
> -		goto out_child;
> -
>  	if (opts->initial_delay) {
>  		pr_info(EVLIST_DISABLED_MSG);
>  		if (opts->initial_delay > 0) {
> @@ -2040,6 +2229,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
>  out_child:
>  	evlist__finalize_ctlfd(rec->evlist);
>  	record__mmap_read_all(rec, true);
> +	record__free_thread_data(rec);
>  	record__aio_mmap_read_sync(rec);
>
>  	if (rec->session->bytes_transferred && rec->session->bytes_compressed) {
> --
> 2.19.0
>

-- 

- Arnaldo
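P.S.: the recurring ask above ("if one fails you should clean up the other",
"init pipes.msg[0] to -1 for consistency") boils down to one pattern. Here is
a minimal standalone sketch of it, with hypothetical names (struct
thread_pipes, thread_pipes_init/exit are illustration only, not the perf
tree's API): initialize the fds to -1 so the release path can test them
unconditionally, and unwind the pipe that did succeed when the second
pipe() fails.

```c
#include <errno.h>
#include <unistd.h>

/* Hypothetical stand-in for the pipes member of struct thread_data. */
struct thread_pipes {
	int msg[2];
	int ack[2];
};

/* Safe to call on a fully, partially, or never initialized object. */
static void thread_pipes_exit(struct thread_pipes *p)
{
	int i;

	for (i = 0; i < 2; i++) {
		if (p->msg[i] >= 0)
			close(p->msg[i]);
		if (p->ack[i] >= 0)
			close(p->ack[i]);
		p->msg[i] = p->ack[i] = -1;
	}
}

static int thread_pipes_init(struct thread_pipes *p)
{
	/* -1 marks "not open", so the exit path needs no bookkeeping. */
	p->msg[0] = p->msg[1] = p->ack[0] = p->ack[1] = -1;

	if (pipe(p->msg))
		return -errno;
	if (pipe(p->ack)) {
		int err = -errno;	/* save errno before close() can clobber it */

		thread_pipes_exit(p);	/* release the msg pipe that did succeed */
		return err;
	}
	return 0;
}
```

With the -1 sentinel, thread_pipes_exit() doubles as both the error-unwind
path and the normal teardown, mirroring how zfree() leaves a NULL behind so
a later free is harmless.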