From: Andi Kleen
To: acme@kernel.org
Cc: jolsa@kernel.org, eranian@google.com, linux-kernel@vger.kernel.org, Andi Kleen
Subject: [PATCH v3 4/7] perf stat: Use affinity for closing file descriptors
Date: Fri, 25 Oct 2019 11:14:14 -0700
Message-Id: <20191025181417.10670-5-andi@firstfloor.org>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20191025181417.10670-1-andi@firstfloor.org>
References: <20191025181417.10670-1-andi@firstfloor.org>
From: Andi Kleen

Closing a perf fd can also trigger an IPI to the target CPU.
Use the same affinity technique as we use for reading/enabling events
when closing, to optimize the CPU transitions.

Before, on a large test case with 94 CPUs:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 32.56    3.085463          50     61483           close

After:

 10.54    0.735704          11     61485           close

Signed-off-by: Andi Kleen
---
v2: Use new iterator macros
---
 tools/perf/lib/evsel.c              | 27 +++++++++++++++++++++------
 tools/perf/lib/include/perf/evsel.h |  1 +
 tools/perf/util/evlist.c            | 29 +++++++++++++++++++++++++++--
 tools/perf/util/evsel.h             |  1 +
 4 files changed, 50 insertions(+), 8 deletions(-)

diff --git a/tools/perf/lib/evsel.c b/tools/perf/lib/evsel.c
index 5a89857b0381..ea775dacbd2d 100644
--- a/tools/perf/lib/evsel.c
+++ b/tools/perf/lib/evsel.c
@@ -114,16 +114,23 @@ int perf_evsel__open(struct perf_evsel *evsel, struct perf_cpu_map *cpus,
 	return err;
 }
 
+static void perf_evsel__close_fd_cpu(struct perf_evsel *evsel, int cpu)
+{
+	int thread;
+
+	for (thread = 0; thread < xyarray__max_y(evsel->fd); ++thread) {
+		if (FD(evsel, cpu, thread) >= 0)
+			close(FD(evsel, cpu, thread));
+		FD(evsel, cpu, thread) = -1;
+	}
+}
+
 void perf_evsel__close_fd(struct perf_evsel *evsel)
 {
-	int cpu, thread;
+	int cpu;
 
 	for (cpu = 0; cpu < xyarray__max_x(evsel->fd); cpu++)
-		for (thread = 0; thread < xyarray__max_y(evsel->fd); ++thread) {
-			if (FD(evsel, cpu, thread) >= 0)
-				close(FD(evsel, cpu, thread));
-			FD(evsel, cpu, thread) = -1;
-		}
+		perf_evsel__close_fd_cpu(evsel, cpu);
 }
 
 void perf_evsel__free_fd(struct perf_evsel *evsel)
@@ -141,6 +148,14 @@ void perf_evsel__close(struct perf_evsel *evsel)
 	perf_evsel__free_fd(evsel);
 }
 
+void perf_evsel__close_cpu(struct perf_evsel *evsel, int cpu)
+{
+	if (evsel->fd == NULL)
+		return;
+
+	perf_evsel__close_fd_cpu(evsel, cpu);
+}
+
 int perf_evsel__read_size(struct perf_evsel *evsel)
 {
 	u64 read_format = evsel->attr.read_format;
diff --git a/tools/perf/lib/include/perf/evsel.h b/tools/perf/lib/include/perf/evsel.h
index 557f5815a9c9..e7add554f861 100644
--- a/tools/perf/lib/include/perf/evsel.h
+++ b/tools/perf/lib/include/perf/evsel.h
@@ -26,6 +26,7 @@ LIBPERF_API void perf_evsel__delete(struct perf_evsel *evsel);
 LIBPERF_API int perf_evsel__open(struct perf_evsel *evsel, struct perf_cpu_map *cpus,
 				 struct perf_thread_map *threads);
 LIBPERF_API void perf_evsel__close(struct perf_evsel *evsel);
+LIBPERF_API void perf_evsel__close_cpu(struct perf_evsel *evsel, int cpu);
 LIBPERF_API int perf_evsel__read(struct perf_evsel *evsel, int cpu, int thread,
 				 struct perf_counts_values *count);
 LIBPERF_API int perf_evsel__enable(struct perf_evsel *evsel);
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index da3c8f8ef68e..aeb82de36043 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -18,6 +18,7 @@
 #include "debug.h"
 #include "units.h"
 #include <internal/lib.h> // page_size
+#include "affinity.h"
 #include "../perf.h"
 #include "asm/bug.h"
 #include "bpf-event.h"
@@ -1170,9 +1171,33 @@ void perf_evlist__set_selected(struct evlist *evlist,
 void evlist__close(struct evlist *evlist)
 {
 	struct evsel *evsel;
+	struct affinity affinity;
+	struct perf_cpu_map *cpus;
+	int i, cpu;
+
+	if (!evlist->core.cpus) {
+		evlist__for_each_entry_reverse(evlist, evsel)
+			evsel__close(evsel);
+		return;
+	}
 
-	evlist__for_each_entry_reverse(evlist, evsel)
-		evsel__close(evsel);
+	if (affinity__setup(&affinity) < 0)
+		return;
+	cpus = evlist__cpu_iter_start(evlist);
+	cpumap__for_each_cpu (cpus, i, cpu) {
+		affinity__set(&affinity, cpu);
+
+		evlist__for_each_entry_reverse(evlist, evsel) {
+			if (evlist__cpu_iter_skip(evsel, cpu))
+				continue;
+			perf_evsel__close_cpu(&evsel->core, evsel->cpu_index);
+			evlist__cpu_iter_next(evsel);
+		}
+	}
+	evlist__for_each_entry_reverse(evlist, evsel) {
+		perf_evsel__free_fd(&evsel->core);
+		perf_evsel__free_id(&evsel->core);
+	}
 }
 
 static int perf_evlist__create_syswide_maps(struct evlist *evlist)
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index cf90019ae744..2e3b011ed09e 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -391,4 +391,5 @@ static inline bool evsel__has_callchain(const struct evsel *evsel)
 struct perf_env *perf_evsel__env(struct evsel *evsel);
 
 int perf_evsel__store_ids(struct evsel *evsel, struct evlist *evlist);
+
 #endif /* __PERF_EVSEL_H */
-- 
2.21.0
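
The affinity trick in isolation, for readers not familiar with it: the sketch
below is not part of the patch. It uses raw sched_getaffinity()/sched_setaffinity()
with a fixed-size cpu_set_t instead of perf's affinity__setup()/affinity__set()
helpers, and close_fds_per_cpu() with its fds[cpu][thread] layout is a made-up
stand-in for the evsel fd arrays. The idea is the same: migrate the calling
thread onto each CPU before closing that CPU's fds, so the kernel side of
close() runs locally instead of sending an IPI to a remote CPU.

#define _GNU_SOURCE
#include <sched.h>
#include <unistd.h>

/*
 * Illustrative only: close one fd per (cpu, thread) while pinned to
 * that CPU.  fds[cpu][thread] holds the open fds; -1 means "not open".
 */
static void close_fds_per_cpu(int **fds, int nr_cpus, int nr_threads)
{
	cpu_set_t saved, target;
	int cpu, thread;

	/* Remember the caller's affinity so it can be restored at the end. */
	if (sched_getaffinity(0, sizeof(saved), &saved))
		return;

	for (cpu = 0; cpu < nr_cpus; cpu++) {
		/* Pin this thread to the CPU whose fds are about to be closed. */
		CPU_ZERO(&target);
		CPU_SET(cpu, &target);
		if (sched_setaffinity(0, sizeof(target), &target))
			continue;

		for (thread = 0; thread < nr_threads; thread++) {
			if (fds[cpu][thread] >= 0) {
				close(fds[cpu][thread]);
				fds[cpu][thread] = -1;
			}
		}
	}

	/* Put the caller back on its original CPUs. */
	sched_setaffinity(0, sizeof(saved), &saved);
}

perf's affinity helpers follow the same save/pin/restore pattern but size the
CPU mask dynamically, so the real code is not limited to CPU_SETSIZE CPUs the
way this fixed cpu_set_t sketch is.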