Date: Tue, 31 Aug 2021 16:54:44 -0300
From: Arnaldo Carvalho de Melo
To: Riccardo Mancini
Cc: Ian Rogers, Namhyung Kim, Peter Zijlstra, Ingo Molnar, Mark Rutland,
	Jiri Olsa, linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org
Subject: Re: [RFC PATCH v1 25/37] perf evsel:
 move event open in evsel__open_cpu to separate function
Message-ID:
References: <74ac2eea14f45b2cbecffb509dd5f3cd523d4a9b.1629490974.git.rickyman7@gmail.com>
In-Reply-To: <74ac2eea14f45b2cbecffb509dd5f3cd523d4a9b.1629490974.git.rickyman7@gmail.com>

Em Sat, Aug 21, 2021 at 11:19:31AM +0200, Riccardo Mancini escreveu:
> This is the final patch splitting evsel__open_cpu.
> This patch moves the entire loop code to a separate function, to be
> reused for the multithreaded code.

Are you going to use that 'enum perf_event_open_err' somewhere else?
I.e. is there a need to expose it in evsel.h?

I'm stopping at this patch to give the ones I merged so far some
testing, will now push it to tmp.perf/core.

- Arnaldo

> Signed-off-by: Riccardo Mancini
> ---
>  tools/perf/util/evsel.c | 142 ++++++++++++++++++++++++----------------
>  tools/perf/util/evsel.h |  12 ++++
>  2 files changed, 99 insertions(+), 55 deletions(-)
> 
> diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
> index 2e95416b8320c6b9..e41f55a7a70ea630 100644
> --- a/tools/perf/util/evsel.c
> +++ b/tools/perf/util/evsel.c
> @@ -1945,6 +1945,82 @@ bool evsel__increase_rlimit(enum rlimit_action *set_rlimit)
>  	return false;
>  }
>  
> +static struct perf_event_open_result perf_event_open(struct evsel *evsel,
> +			pid_t pid, int cpu, int thread,
> +			struct perf_cpu_map *cpus,
> +			struct perf_thread_map *threads)
> +{
> +	int fd, group_fd, rc;
> +	struct perf_event_open_result res;
> +
> +	if (!evsel->cgrp && !evsel->core.system_wide)
> +		pid = perf_thread_map__pid(threads, thread);
> +
> +	group_fd = get_group_fd(evsel, cpu, thread);
> +
> +	test_attr__ready();
> +
> +	pr_debug2_peo("sys_perf_event_open: pid %d cpu %d group_fd %d flags %#lx",
> +		pid, cpus->map[cpu], group_fd, evsel->open_flags);
> +
> +	fd = sys_perf_event_open(&evsel->core.attr, pid, cpus->map[cpu],
> +				group_fd, evsel->open_flags);
> +
> +	FD(evsel, cpu, thread) = fd;
> +	res.fd = fd;
> +
> +	if (fd < 0) {
> +		rc = -errno;
> +
> +		pr_debug2_peo("\nsys_perf_event_open failed, error %d\n",
> +			      rc);
> +		res.rc = rc;
> +		res.err = PEO_FALLBACK;
> +		return res;
> +	}
> +
> +	bpf_counter__install_pe(evsel, cpu, fd);
> +
> +	if (unlikely(test_attr__enabled)) {
> +		test_attr__open(&evsel->core.attr, pid,
> +				cpus->map[cpu], fd,
> +				group_fd, evsel->open_flags);
> +	}
> +
> +	pr_debug2_peo(" = %d\n", fd);
> +
> +	if (evsel->bpf_fd >= 0) {
> +		int evt_fd = fd;
> +		int bpf_fd = evsel->bpf_fd;
> +
> +		rc = ioctl(evt_fd,
> +			   PERF_EVENT_IOC_SET_BPF,
> +			   bpf_fd);
> +		if (rc && errno != EEXIST) {
> +			pr_err("failed to attach bpf fd %d: %s\n",
> +			       bpf_fd, strerror(errno));
> +			res.rc = -EINVAL;
> +			res.err = PEO_ERROR;
> +			return res;
> +		}
> +	}
> +
> +	/*
> +	 * If we succeeded but had to kill clockid, fail and
> +	 * have evsel__open_strerror() print us a nice error.
> +	 */
> +	if (perf_missing_features.clockid ||
> +	    perf_missing_features.clockid_wrong) {
> +		res.rc = -EINVAL;
> +		res.err = PEO_ERROR;
> +		return res;
> +	}
> +
> +	res.rc = 0;
> +	res.err = PEO_SUCCESS;
> +	return res;
> +}
> +
>  static int evsel__open_cpu(struct evsel *evsel, struct perf_cpu_map *cpus,
>  			   struct perf_thread_map *threads,
>  			   int start_cpu, int end_cpu)
> @@ -1952,6 +2028,7 @@ static int evsel__open_cpu(struct evsel *evsel, struct perf_cpu_map *cpus,
>  	int cpu, thread, nthreads;
>  	int pid = -1, err, old_errno;
>  	enum rlimit_action set_rlimit = NO_CHANGE;
> +	struct perf_event_open_result peo_res;
>  
>  	err = __evsel__prepare_open(evsel, cpus, threads);
>  	if (err)
> @@ -1979,67 +2056,22 @@ static int evsel__open_cpu(struct evsel *evsel, struct perf_cpu_map *cpus,
>  	for (cpu = start_cpu; cpu < end_cpu; cpu++) {
>  
>  		for (thread = 0; thread < nthreads; thread++) {
> -			int fd, group_fd;
>  retry_open:
>  			if (thread >= nthreads)
>  				break;
>  
> -			if (!evsel->cgrp && !evsel->core.system_wide)
> -				pid = perf_thread_map__pid(threads, thread);
> -
> -			group_fd = get_group_fd(evsel, cpu, thread);
> -
> -			test_attr__ready();
> -
> -			pr_debug2_peo("sys_perf_event_open: pid %d cpu %d group_fd %d flags %#lx",
> -				      pid, cpus->map[cpu], group_fd, evsel->open_flags);
> +			peo_res = perf_event_open(evsel, pid, cpu, thread, cpus,
> +						  threads);
>  
> -			fd = sys_perf_event_open(&evsel->core.attr, pid, cpus->map[cpu],
> -						 group_fd, evsel->open_flags);
> -
> -			FD(evsel, cpu, thread) = fd;
> -
> -			if (fd < 0) {
> -				err = -errno;
> -
> -				pr_debug2_peo("\nsys_perf_event_open failed, error %d\n",
> -					      err);
> +			err = peo_res.rc;
> +			switch (peo_res.err) {
> +			case PEO_SUCCESS:
> +				set_rlimit = NO_CHANGE;
> +				continue;
> +			case PEO_FALLBACK:
>  				goto try_fallback;
> -			}
> -
> -			bpf_counter__install_pe(evsel, cpu, fd);
> -
> -			if (unlikely(test_attr__enabled)) {
> -				test_attr__open(&evsel->core.attr, pid, cpus->map[cpu],
> -						fd, group_fd, evsel->open_flags);
> -			}
> -
> -			pr_debug2_peo(" = %d\n", fd);
> -
> -			if (evsel->bpf_fd >= 0) {
> -				int evt_fd = fd;
> -				int bpf_fd = evsel->bpf_fd;
> -
> -				err = ioctl(evt_fd,
> -					    PERF_EVENT_IOC_SET_BPF,
> -					    bpf_fd);
> -				if (err && errno != EEXIST) {
> -					pr_err("failed to attach bpf fd %d: %s\n",
> -					       bpf_fd, strerror(errno));
> -					err = -EINVAL;
> -					goto out_close;
> -				}
> -			}
> -
> -			set_rlimit = NO_CHANGE;
> -
> -			/*
> -			 * If we succeeded but had to kill clockid, fail and
> -			 * have evsel__open_strerror() print us a nice error.
> -			 */
> -			if (perf_missing_features.clockid ||
> -			    perf_missing_features.clockid_wrong) {
> -				err = -EINVAL;
> +			default:
> +			case PEO_ERROR:
>  				goto out_close;
>  			}
>  		}
> diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
> index 0a245afab2d87d74..8c9827a93ac001a7 100644
> --- a/tools/perf/util/evsel.h
> +++ b/tools/perf/util/evsel.h
> @@ -282,6 +282,18 @@ int evsel__enable(struct evsel *evsel);
>  int evsel__disable(struct evsel *evsel);
>  int evsel__disable_cpu(struct evsel *evsel, int cpu);
>  
> +enum perf_event_open_err {
> +	PEO_SUCCESS,
> +	PEO_FALLBACK,
> +	PEO_ERROR
> +};
> +
> +struct perf_event_open_result {
> +	enum perf_event_open_err err;
> +	int rc;
> +	int fd;
> +};
> +
>  int evsel__open_per_cpu(struct evsel *evsel, struct perf_cpu_map *cpus, int cpu);
>  int evsel__open_per_thread(struct evsel *evsel, struct perf_thread_map *threads);
>  int evsel__open(struct evsel *evsel, struct perf_cpu_map *cpus,
> -- 
> 2.31.1

-- 

- Arnaldo