From: Andi Kleen
To: acme@kernel.org
Cc: jolsa@kernel.org, linux-kernel@vger.kernel.org
Subject: Optimize perf stat for large number of events/cpus
Date: Fri, 15 Nov 2019 21:52:17 -0800
Message-Id: <20191116055229.62002-1-andi@firstfloor.org>

[v7: Address review feedback. Fix python script problem reported by 0day. Drop merged patches.]
This patch kit optimizes perf stat for a large number of events on systems with many CPUs and PMUs.

Some profiling shows that most of the overhead is in doing IPIs to all the target CPUs. We can optimize this by using sched_setaffinity to set the affinity to a target CPU once, and then doing the perf operations for all events on that CPU. This requires some restructuring, but cuts the setup time quite a bit.

In theory we could go further by parallelizing these setups too, but that would be much more complicated, and for now just batching per CPU seems to be sufficient. At some point, with many more cores, parallelization or a better bulk perf setup API might be needed though.

In addition, perf does a lot of redundant /sys accesses with many PMUs, which can also be expensive. This is optimized as well.

On a large test case (>700 events with many weak groups) on a 94 CPU system I go from

real    0m8.607s
user    0m0.550s
sys     0m8.041s

to

real    0m3.269s
user    0m0.760s
sys     0m1.694s

shaving ~6 seconds off the system time, at slightly more cost in perf stat itself.

On a 4 socket system the savings are more dramatic:

real    0m15.641s
user    0m0.873s
sys     0m14.729s

to

real    0m4.493s
user    0m1.578s
sys     0m2.444s

an 11s difference in the user-visible setup time.

Also available in

git://git.kernel.org/pub/scm/linux/kernel/git/ak/linux-misc perf/stat-scale-10

v1: Initial post.
v2: Rebase. Fix some minor issues.
v3: Rebase. Address review feedback. Fix one minor issue.
v4: Modified based on review feedback. Now it maintains all_cpus per evlist. There is still a need for cpu_index iteration to get the correct index for indexing the file descriptors. Fix bug with unsorted cpu maps; now they are always sorted. Some cleanups and refactoring.
v5: Split patches. Redo loop iteration again. Fix cpu map merging for uncore. Remove duplicates from cpumaps. Add unit tests.
v6: Address review feedback. Fix some bugs. Add more comments. Merge one invalid patch split.
v7: Address review feedback. Fix python scripting (thanks 0day). Minor updates.

-Andi