Date: Tue, 17 Oct 2023 15:31:30 -0300
From: Arnaldo Carvalho de Melo
To: Ingo Molnar
Cc: Namhyung Kim, Jiri Olsa, Ian Rogers, Adrian Hunter, Peter Zijlstra,
    LKML, linux-perf-users@vger.kernel.org
Subject: Re: [perf stat] Extend --cpu to non-system-wide runs too?
 was Re: [PATCH v3] perf bench sched pipe: Add -G/--cgroups option
References: <20231016044225.1125674-1-namhyung@kernel.org>

Em Tue, Oct 17, 2023 at 02:43:45PM +0200, Ingo Molnar escreveu:
> * Arnaldo Carvalho de Melo wrote:
> > Em Tue, Oct 17, 2023 at 01:40:07PM +0200, Ingo Molnar escreveu:
> > > Side note: it might make sense to add a sane cpumask/affinity setting
> > > option to perf stat itself:

> > >   perf stat --cpumask <mask>

> > > ... or so?

> > > We do have -C:

> > >   -C, --cpu <cpu>    list of cpus to monitor in system-wide

> > > ... but that's limited to --all-cpus, right?

> > > Perhaps we could extend --cpu to non-system-wide runs too?

> > Maybe I misunderstood your question, but it's a list of cpus to limit the
> > counting:

> Ok.

> So I thought that "--cpumask mask/list/etc" should simply do what 'taskset'
> is doing: use the sched_setaffinity() syscall to restrict the current
> workload and all its children.

> There's no impact on perf stat itself: it could just call
> sched_setaffinity() early on, and not bother about it?

> Having it built-in into perf would simply make it easier to not forget
> running 'taskset'. :-)

Would that be the only advantage? I think using taskset isn't that much of
a burden and keeps with the Unix tradition, no? :-\

See, using 'perf record -C', i.e. sampling, will use sched_setaffinity, and
in that case there is a clear advantage... wait, this train of thought made
me remember something, but it's just about counter setup, not about the
workload:

[acme@five perf-tools-next]$ grep affinity__set tools/perf/*.c
tools/perf/builtin-stat.c:      else if (affinity__setup(&saved_affinity) < 0)
tools/perf/builtin-stat.c:      if (affinity__setup(&saved_affinity) < 0)
[acme@five perf-tools-next]$

/*
 * perf_event_open does an IPI internally to the target CPU.
 * It is more efficient to change perf's affinity to the target
 * CPU and then set up all events on that CPU, so we amortize
 * CPU communication.
 */
void affinity__set(struct affinity *a, int cpu)
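That comment is the whole trick: pay the cross-CPU cost once by moving perf
itself to the target CPU, then issue all the perf_event_open() calls
locally. As a rough sketch, not the actual tools/perf code (the real code
also saves and restores the original mask, hence the saved_affinity in the
grep above), the pinning side boils down to:

  #define _GNU_SOURCE
  #include <sched.h>

  /*
   * Sketch only: pin the calling thread to 'cpu' before issuing the
   * perf_event_open() calls for that CPU, so the syscall's internal
   * IPI becomes a local call instead of a cross-CPU one.
   */
  static int pin_to_cpu(int cpu)
  {
          cpu_set_t set;

          CPU_ZERO(&set);
          CPU_SET(cpu, &set);
          return sched_setaffinity(0, sizeof(set), &set); /* 0 == self */
  }

And the traces below show how often that path ends up being taken: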
[root@five ~]# perf trace --summary -e sched_setaffinity perf stat -e cycles -a sleep 1

 Performance counter stats for 'system wide':

     6,319,186,681      cycles

       1.002665795 seconds time elapsed

 Summary of events:

 perf (24307), 396 events, 87.4%

   syscall             calls  errors   total     min     avg     max  stddev
                                      (msec)  (msec)  (msec)  (msec)     (%)
   ----------------- ------- ------- ------- ------- ------- ------- ------
   sched_setaffinity     198       0   4.544   0.006   0.023   0.042   2.30%

[root@five ~]# perf trace --summary -e sched_setaffinity perf stat -C 1 -e cycles -a sleep 1

 Performance counter stats for 'system wide':

       105,311,506      cycles

       1.001203282 seconds time elapsed

 Summary of events:

 perf (24633), 24 events, 29.6%

   syscall             calls  errors   total     min     avg     max  stddev
                                      (msec)  (msec)  (msec)  (msec)     (%)
   ----------------- ------- ------- ------- ------- ------- ------- ------
   sched_setaffinity      12       0   0.105   0.005   0.009   0.039  32.07%

[root@five ~]# perf trace --summary -e sched_setaffinity perf stat -C 1,2 -e cycles -a sleep 1

 Performance counter stats for 'system wide':

       131,474,375      cycles

       1.001324346 seconds time elapsed

 Summary of events:

 perf (24636), 36 events, 38.7%

   syscall             calls  errors   total     min     avg     max  stddev
                                      (msec)  (msec)  (msec)  (msec)     (%)
   ----------------- ------- ------- ------- ------- ------- ------- ------
   sched_setaffinity      18       0   0.442   0.000   0.025   0.093  24.75%

[root@five ~]# perf trace --summary -e sched_setaffinity perf stat -C 1,2,30 -e cycles -a sleep 1

 Performance counter stats for 'system wide':

       191,674,889      cycles

       1.001280015 seconds time elapsed

 Summary of events:

 perf (24639), 48 events, 45.7%

   syscall             calls  errors   total     min     avg     max  stddev
                                      (msec)  (msec)  (msec)  (msec)     (%)
   ----------------- ------- ------- ------- ------- ------- ------- ------
   sched_setaffinity      24       0   0.835   0.000   0.035   0.144  24.40%

[root@five ~]#

Too much affinity setting :-)

- Arnaldo
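P.S.: For completeness, the "built-in taskset" idea discussed above boils
down to setting the affinity of the tool itself before spawning the
workload, since the mask is inherited across fork()/exec(). A hypothetical
sketch (there is no --cpumask option nor a run_pinned() helper in perf
today):

  #define _GNU_SOURCE
  #include <sched.h>
  #include <unistd.h>

  /*
   * Hypothetical sketch of a built-in --cpumask: set our own affinity
   * first, then spawn the workload; the child inherits the mask across
   * fork()/exec(), which is exactly what taskset(1) relies on.
   */
  static int run_pinned(const cpu_set_t *mask, char *const argv[])
  {
          if (sched_setaffinity(0, sizeof(*mask), mask)) /* 0 == self */
                  return -1;

          if (fork() == 0)
                  execvp(argv[0], argv); /* child: run the workload */

          return 0; /* parent: set up counters, wait for the child, ... */
  }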