Date: Wed, 30 Jun 2021 14:18:37 -0300
From: Arnaldo Carvalho de Melo
To: Alexey Bayduraev
Cc: Jiri Olsa, Namhyung Kim, Alexander Shishkin, Peter Zijlstra,
    Ingo Molnar, linux-kernel, Andi Kleen, Adrian Hunter,
    Alexander Antonov, Alexei Budankov, Riccardo Mancini
Subject: Re: [PATCH v8 02/22] perf record: Introduce thread specific data array
References: <54085f942fb8deedc617732b4716cb85a5c6ebfb.1625065643.git.alexey.v.bayduraev@linux.intel.com>
In-Reply-To: <54085f942fb8deedc617732b4716cb85a5c6ebfb.1625065643.git.alexey.v.bayduraev@linux.intel.com>

On Wed, Jun 30, 2021 at 06:54:41PM +0300, Alexey Bayduraev wrote:
> Introduce thread specific data object and array of such objects
> to store and manage thread local data. Implement functions to
> allocate, initialize, finalize and release thread specific data.
>
> Thread local maps and overwrite_maps arrays keep pointers to
> mmap buffer objects to serve according to maps thread mask.
> Thread local pollfd array keeps event fds connected to mmaps
> buffers according to maps thread mask.
>
> Thread control commands are delivered via thread local comm pipes
> and ctlfd_pos fd. External control commands (--control option)
> are delivered via evlist ctlfd_pos fd and handled by the main
> tool thread.
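
Side note on the control-command flow described above: the msg/ack pipes
are, in isolation, a plain command/acknowledge handshake over pipe(2),
with each worker poll()ing the read end of its msg pipe. A minimal
standalone sketch of that idea, independent of the perf code (the names
and the 'q' command byte below are purely illustrative, not from this
patch):

/* Toy msg/ack handshake between a main thread and one worker thread. */
#include <poll.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int msg_pipe[2], ack_pipe[2];    /* [0] = read end, [1] = write end */

static void *worker(void *arg)
{
        struct pollfd pfd = { .fd = msg_pipe[0], .events = POLLIN };
        char cmd;

        (void)arg;
        /* Wait for a command byte, acknowledge it, stop on 'q'. */
        while (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN)) {
                if (read(msg_pipe[0], &cmd, 1) != 1)
                        break;
                if (write(ack_pipe[1], "a", 1) != 1)
                        break;
                if (cmd == 'q')
                        break;
        }
        return NULL;
}

int main(void)
{
        pthread_t tid;
        char ack;

        if (pipe(msg_pipe) || pipe(ack_pipe))
                return 1;
        if (pthread_create(&tid, NULL, worker, NULL))
                return 1;

        /* Send a command and block until the worker acknowledges it. */
        if (write(msg_pipe[1], "q", 1) == 1 && read(ack_pipe[0], &ack, 1) == 1)
                printf("worker acked with '%c'\n", ack);

        pthread_join(tid, NULL);
        return 0;
}
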
>
> Acked-by: Namhyung Kim
> Signed-off-by: Alexey Bayduraev
> ---
>  tools/lib/api/fd/array.c    |  17 ++++
>  tools/lib/api/fd/array.h    |   1 +
>  tools/perf/builtin-record.c | 196 +++++++++++++++++++++++++++++++++++-
>  3 files changed, 211 insertions(+), 3 deletions(-)
>
> diff --git a/tools/lib/api/fd/array.c b/tools/lib/api/fd/array.c
> index 5e6cb9debe37..de8bcbaea3f1 100644
> --- a/tools/lib/api/fd/array.c
> +++ b/tools/lib/api/fd/array.c
> @@ -88,6 +88,23 @@ int fdarray__add(struct fdarray *fda, int fd, short revents, enum fdarray_flags
>          return pos;
>  }
>
> +int fdarray__clone(struct fdarray *fda, int pos, struct fdarray *base)
> +{
> +        struct pollfd *entry;
> +        int npos;
> +
> +        if (pos >= base->nr)
> +                return -EINVAL;
> +
> +        entry = &base->entries[pos];
> +
> +        npos = fdarray__add(fda, entry->fd, entry->events, base->priv[pos].flags);
> +        if (npos >= 0)
> +                fda->priv[npos] = base->priv[pos];
> +
> +        return npos;
> +}
> +
>  int fdarray__filter(struct fdarray *fda, short revents,
>                      void (*entry_destructor)(struct fdarray *fda, int fd, void *arg),
>                      void *arg)
> diff --git a/tools/lib/api/fd/array.h b/tools/lib/api/fd/array.h
> index 7fcf21a33c0c..4a03da7f1fc1 100644
> --- a/tools/lib/api/fd/array.h
> +++ b/tools/lib/api/fd/array.h
> @@ -42,6 +42,7 @@ struct fdarray *fdarray__new(int nr_alloc, int nr_autogrow);
>  void fdarray__delete(struct fdarray *fda);
>
>  int fdarray__add(struct fdarray *fda, int fd, short revents, enum fdarray_flags flags);
> +int fdarray__clone(struct fdarray *fda, int pos, struct fdarray *base);
>  int fdarray__poll(struct fdarray *fda, int timeout);
>  int fdarray__filter(struct fdarray *fda, short revents,
>                      void (*entry_destructor)(struct fdarray *fda, int fd, void *arg),
> diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
> index 31b3a515abc1..11ce64b23db4 100644
> --- a/tools/perf/builtin-record.c
> +++ b/tools/perf/builtin-record.c
> @@ -58,6 +58,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #ifdef HAVE_EVENTFD_SUPPORT
> @@ -92,6 +93,23 @@ struct thread_mask {
>          struct mmap_cpu_mask affinity;
>  };
>
> +struct thread_data {

Please rename this to 'struct record_thread', 'data' is way too generic.
> +        pid_t tid;
> +        struct thread_mask *mask;
> +        struct {
> +                int msg[2];
> +                int ack[2];
> +        } pipes;
> +        struct fdarray pollfd;
> +        int ctlfd_pos;
> +        struct mmap **maps;
> +        struct mmap **overwrite_maps;
> +        int nr_mmaps;
> +        struct record *rec;
> +        unsigned long long samples;
> +        unsigned long waking;
> +};
> +
>  struct record {
>          struct perf_tool tool;
>          struct record_opts opts;
> @@ -117,6 +135,7 @@ struct record {
>          struct mmap_cpu_mask affinity_mask;
>          unsigned long output_max_size;  /* = 0: unlimited */
>          struct thread_mask *thread_masks;
> +        struct thread_data *thread_data;
>          int nr_threads;
>  };
>
> @@ -847,9 +866,174 @@ static int record__kcore_copy(struct machine *machine, struct perf_data *data)
>          return kcore_copy(from_dir, kcore_dir);
>  }
>
> +static int record__thread_data_init_pipes(struct thread_data *thread_data)
> +{
> +        if (pipe(thread_data->pipes.msg) || pipe(thread_data->pipes.ack)) {
> +                pr_err("Failed to create thread communication pipes: %s\n", strerror(errno));
> +                return -ENOMEM;
> +        }
> +
> +        pr_debug2("thread_data[%p]: msg=[%d,%d], ack=[%d,%d]\n", thread_data,
> +                  thread_data->pipes.msg[0], thread_data->pipes.msg[1],
> +                  thread_data->pipes.ack[0], thread_data->pipes.ack[1]);
> +
> +        return 0;
> +}
> +
> +static int record__thread_data_init_maps(struct thread_data *thread_data, struct evlist *evlist)
> +{
> +        int m, tm, nr_mmaps = evlist->core.nr_mmaps;
> +        struct mmap *mmap = evlist->mmap;
> +        struct mmap *overwrite_mmap = evlist->overwrite_mmap;
> +        struct perf_cpu_map *cpus = evlist->core.cpus;
> +
> +        thread_data->nr_mmaps = bitmap_weight(thread_data->mask->maps.bits,
> +                                              thread_data->mask->maps.nbits);
> +        if (mmap) {
> +                thread_data->maps = zalloc(thread_data->nr_mmaps * sizeof(struct mmap *));
> +                if (!thread_data->maps) {
> +                        pr_err("Failed to allocate maps thread data\n");
> +                        return -ENOMEM;
> +                }
> +        }
> +        if (overwrite_mmap) {
> +                thread_data->overwrite_maps = zalloc(thread_data->nr_mmaps * sizeof(struct mmap *));
> +                if (!thread_data->overwrite_maps) {
> +                        pr_err("Failed to allocate overwrite maps thread data\n");
> +                        return -ENOMEM;
> +                }
> +        }
> +        pr_debug2("thread_data[%p]: nr_mmaps=%d, maps=%p, ow_maps=%p\n", thread_data,
> +                  thread_data->nr_mmaps, thread_data->maps, thread_data->overwrite_maps);
> +
> +        for (m = 0, tm = 0; m < nr_mmaps && tm < thread_data->nr_mmaps; m++) {
> +                if (test_bit(cpus->map[m], thread_data->mask->maps.bits)) {
> +                        if (thread_data->maps) {
> +                                thread_data->maps[tm] = &mmap[m];
> +                                pr_debug2("thread_data[%p]: maps[%d] -> mmap[%d], cpus[%d]\n",
> +                                          thread_data, tm, m, cpus->map[m]);
> +                        }
> +                        if (thread_data->overwrite_maps) {
> +                                thread_data->overwrite_maps[tm] = &overwrite_mmap[m];
> +                                pr_debug2("thread_data[%p]: ow_maps[%d] -> ow_mmap[%d], cpus[%d]\n",
> +                                          thread_data, tm, m, cpus->map[m]);
> +                        }
> +                        tm++;
> +                }
> +        }
> +
> +        return 0;
> +}
> +
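
The split above is driven purely by the maps mask: nr_mmaps is the
popcount of the mask and the loop then keeps only the mmaps whose CPU
bit is set (via cpus->map[m], so mmap index and CPU number need not
match). Reduced to a standalone toy with a plain unsigned long in place
of struct mmap_cpu_mask (illustrative only, not from this patch):

#include <stdio.h>

/* Toy: a per-thread bitmask selects a subset of the global items. */
int main(void)
{
        const char *items[] = { "mmap0", "mmap1", "mmap2", "mmap3" };
        unsigned long mask = 0x5;                /* this thread owns bits 0 and 2 */
        int nr = __builtin_popcountl(mask);      /* bitmap_weight() equivalent */
        int i, t = 0;

        printf("this thread owns %d item(s):\n", nr);
        for (i = 0; i < 4 && t < nr; i++) {
                if (mask & (1UL << i)) {         /* test_bit() equivalent */
                        printf("  slot %d -> %s\n", t, items[i]);
                        t++;
                }
        }
        return 0;
}
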
> +static int record__thread_data_init_pollfd(struct thread_data *thread_data, struct evlist *evlist)
> +{
> +        int f, tm, pos;
> +        struct mmap *map, *overwrite_map;
> +
> +        fdarray__init(&thread_data->pollfd, 64);
> +
> +        for (tm = 0; tm < thread_data->nr_mmaps; tm++) {
> +                map = thread_data->maps ? thread_data->maps[tm] : NULL;
> +                overwrite_map = thread_data->overwrite_maps ?
> +                                thread_data->overwrite_maps[tm] : NULL;
> +
> +                for (f = 0; f < evlist->core.pollfd.nr; f++) {
> +                        void *ptr = evlist->core.pollfd.priv[f].ptr;
> +
> +                        if ((map && ptr == map) || (overwrite_map && ptr == overwrite_map)) {
> +                                pos = fdarray__clone(&thread_data->pollfd, f, &evlist->core.pollfd);
> +                                if (pos < 0)
> +                                        return pos;
> +                                pr_debug2("thread_data[%p]: pollfd[%d] <- event_fd=%d\n",
> +                                          thread_data, pos, evlist->core.pollfd.entries[f].fd);
> +                        }
> +                }
> +        }
> +
> +        return 0;
> +}
> +
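
So each thread ends up with a private pollfd array holding only the fds
of the mmaps it owns (plus its control fd, added below), and can then
sit in its own poll() loop without touching the shared evlist array.
Outside of the fdarray wrappers, the cloning is simply a struct pollfd
copy; a standalone toy (illustrative only, not from this patch):

#include <poll.h>
#include <stdio.h>

#define BASE_NR 4

/* Toy: build a per-thread pollfd array by cloning selected base entries. */
int main(void)
{
        struct pollfd base[BASE_NR] = {
                { .fd = 10, .events = POLLIN }, { .fd = 11, .events = POLLIN },
                { .fd = 12, .events = POLLIN }, { .fd = 13, .events = POLLIN },
        };
        int owned[] = { 1, 3 };                 /* entries this thread is responsible for */
        struct pollfd mine[BASE_NR];
        int i, nr = 0;

        for (i = 0; i < 2; i++)
                mine[nr++] = base[owned[i]];    /* the clone boils down to this copy */

        for (i = 0; i < nr; i++)
                printf("this thread polls fd %d\n", mine[i].fd);

        /* A real worker would now loop on poll(mine, nr, timeout). */
        return 0;
}
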
> +static int record__alloc_thread_data(struct record *rec, struct evlist *evlist)
> +{
> +        int t, ret;
> +        struct thread_data *thread_data;
> +
> +        rec->thread_data = zalloc(rec->nr_threads * sizeof(*(rec->thread_data)));
> +        if (!rec->thread_data) {
> +                pr_err("Failed to allocate thread data\n");
> +                return -ENOMEM;
> +        }
> +        thread_data = rec->thread_data;
> +
> +        for (t = 0; t < rec->nr_threads; t++) {
> +                thread_data[t].rec = rec;
> +                thread_data[t].mask = &rec->thread_masks[t];
> +                ret = record__thread_data_init_maps(&thread_data[t], evlist);
> +                if (ret)
> +                        return ret;
> +                ret = record__thread_data_init_pollfd(&thread_data[t], evlist);
> +                if (ret)
> +                        return ret;
> +                if (t) {
> +                        thread_data[t].tid = -1;
> +                        ret = record__thread_data_init_pipes(&thread_data[t]);
> +                        if (ret)
> +                                return ret;
> +                        thread_data[t].ctlfd_pos = fdarray__add(&thread_data[t].pollfd,
> +                                                                thread_data[t].pipes.msg[0],
> +                                                                POLLIN | POLLERR | POLLHUP,
> +                                                                fdarray_flag__nonfilterable);
> +                        if (thread_data[t].ctlfd_pos < 0)
> +                                return -ENOMEM;
> +                        pr_debug2("thread_data[%p]: pollfd[%d] <- ctl_fd=%d\n",
> +                                  thread_data, thread_data[t].ctlfd_pos,
> +                                  thread_data[t].pipes.msg[0]);
> +                } else {
> +                        thread_data[t].tid = syscall(SYS_gettid);
> +                        if (evlist->ctl_fd.pos == -1)
> +                                continue;
> +                        thread_data[t].ctlfd_pos = fdarray__clone(&thread_data[t].pollfd,
> +                                                                  evlist->ctl_fd.pos,
> +                                                                  &evlist->core.pollfd);
> +                        if (thread_data[t].ctlfd_pos < 0)
> +                                return -ENOMEM;
> +                        pr_debug2("thread_data[%p]: pollfd[%d] <- ctl_fd=%d\n",
> +                                  thread_data, thread_data[t].ctlfd_pos,
> +                                  evlist->core.pollfd.entries[evlist->ctl_fd.pos].fd);
> +                }
> +        }
> +
> +        return 0;
> +}
> +
> +static void record__free_thread_data(struct record *rec)
> +{
> +        int t;
> +
> +        if (rec->thread_data == NULL)
> +                return;
> +
> +        for (t = 0; t < rec->nr_threads; t++) {
> +                if (rec->thread_data[t].pipes.msg[0])
> +                        close(rec->thread_data[t].pipes.msg[0]);
> +                if (rec->thread_data[t].pipes.msg[1])
> +                        close(rec->thread_data[t].pipes.msg[1]);
> +                if (rec->thread_data[t].pipes.ack[0])
> +                        close(rec->thread_data[t].pipes.ack[0]);
> +                if (rec->thread_data[t].pipes.ack[1])
> +                        close(rec->thread_data[t].pipes.ack[1]);
> +                zfree(&rec->thread_data[t].maps);
> +                zfree(&rec->thread_data[t].overwrite_maps);
> +                fdarray__exit(&rec->thread_data[t].pollfd);
> +        }
> +
> +        zfree(&rec->thread_data);
> +}
> +
>  static int record__mmap_evlist(struct record *rec,
>                                 struct evlist *evlist)
>  {
> +        int ret;
>          struct record_opts *opts = &rec->opts;
>          bool auxtrace_overwrite = opts->auxtrace_snapshot_mode ||
>                                    opts->auxtrace_sample_mode;
> @@ -880,6 +1064,14 @@ static int record__mmap_evlist(struct record *rec,
>                          return -EINVAL;
>                  }
>          }
> +
> +        if (evlist__initialize_ctlfd(evlist, opts->ctl_fd, opts->ctl_fd_ack))
> +                return -1;
> +
> +        ret = record__alloc_thread_data(rec, evlist);
> +        if (ret)
> +                return ret;
> +
>          return 0;
>  }
>
> @@ -1880,9 +2072,6 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
>                  evlist__start_workload(rec->evlist);
>          }
>
> -        if (evlist__initialize_ctlfd(rec->evlist, opts->ctl_fd, opts->ctl_fd_ack))
> -                goto out_child;
> -
>          if (opts->initial_delay) {
>                  pr_info(EVLIST_DISABLED_MSG);
>                  if (opts->initial_delay > 0) {
> @@ -2040,6 +2229,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
>  out_child:
>          evlist__finalize_ctlfd(rec->evlist);
>          record__mmap_read_all(rec, true);
> +        record__free_thread_data(rec);
>          record__aio_mmap_read_sync(rec);
>
>          if (rec->session->bytes_transferred && rec->session->bytes_compressed) {
> --
> 2.19.0
>

--

- Arnaldo