Date: Thu, 3 Dec 2020 08:45:22 +0100
From: Peter Zijlstra
To: Namhyung Kim
Cc: Arnaldo Carvalho de Melo, Jiri Olsa, Ingo Molnar, Mark Rutland,
	Alexander Shishkin, LKML, Stephane Eranian, Ian Rogers, Andi Kleen
Subject: Re: [RFC 1/2] perf core: Add PERF_COUNT_SW_CGROUP_SWITCHES event
Message-ID: <20201203074522.GD2414@hirez.programming.kicks-ass.net>
References: <20201202150205.35750-1-namhyung@kernel.org>
	<20201202161934.GL3021@hirez.programming.kicks-ass.net>

On Thu, Dec 03, 2020 at 11:10:30AM +0900, Namhyung Kim wrote:
> On Thu, Dec 3, 2020 at 1:19 AM Peter Zijlstra wrote:
>
> > diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> > index 9a38f579bc76..5eb284819ee5 100644
> > --- a/include/linux/perf_event.h
> > +++ b/include/linux/perf_event.h
> > @@ -1174,25 +1174,19 @@ DECLARE_PER_CPU(struct pt_regs, __perf_regs[4]);
> >   * which is guaranteed by us not actually scheduling inside other swevents
> >   * because those disable preemption.
> >   */
> > -static __always_inline void
> > -perf_sw_event_sched(u32 event_id, u64 nr, u64 addr)
> > +static __always_inline void __perf_sw_event_sched(u32 event_id, u64 nr, u64 addr)
>
> It'd be nice to avoid the __ prefix if possible.

Not having __ would seem to suggest it's a function of generic utility.
Still, *shrug* ;-)

> >  {
> > -	if (static_key_false(&perf_swevent_enabled[PERF_COUNT_SW_CPU_MIGRATIONS]))
> > -		return true;
> > -	return false;
> > +	return static_key_false(&perf_swevent_enabled[swevt]);
> >  }
> >
> >  static inline void perf_event_task_migrate(struct task_struct *task)
> > @@ -1207,11 +1201,9 @@ static inline void perf_event_task_sched_in(struct task_struct *prev,
> >  	if (static_branch_unlikely(&perf_sched_events))
> >  		__perf_event_task_sched_in(prev, task);
> >
> > -	if (perf_sw_migrate_enabled() && task->sched_migrated) {
> > -		struct pt_regs *regs = this_cpu_ptr(&__perf_regs[0]);
> > -
> > -		perf_fetch_caller_regs(regs);
> > -		___perf_sw_event(PERF_COUNT_SW_CPU_MIGRATIONS, 1, regs, 0);
> > +	if (__perf_sw_enabled(PERF_COUNT_SW_CPU_MIGRATIONS) &&
> > +	    task->sched_migrated) {
>
> It seems task->sched_migrated is set only if the event is enabled,
> then can we just check the value here?

Why suffer the unconditional load and test? Your L1 too big?
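To make the trimmed quote easier to follow, here is a rough sketch of how
the two helpers under discussion presumably fit together. The declaration
of __perf_sw_enabled() and the body of __perf_sw_event_sched() are not in
the quoted hunks, so their exact shape below is an assumption, not the
patch as posted:

	/*
	 * Sketch only: reconstructed from the quoted fragments above;
	 * signatures/bodies not shown in the quote are assumed.
	 */
	static __always_inline bool __perf_sw_enabled(int swevt)
	{
		/* Static key: a patched-out jump when nobody counts swevt. */
		return static_key_false(&perf_swevent_enabled[swevt]);
	}

	static __always_inline void __perf_sw_event_sched(u32 event_id, u64 nr, u64 addr)
	{
		struct pt_regs *regs = this_cpu_ptr(&__perf_regs[0]);

		perf_fetch_caller_regs(regs);
		___perf_sw_event(event_id, nr, regs, addr);
	}

At the call sites in the quoted hunks this then reads as
"if (__perf_sw_enabled(EV) && ...) __perf_sw_event_sched(EV, 1, 0);",
so per-task state such as task->sched_migrated is only ever loaded when
the static key says someone is actually counting that event.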
> > +		__perf_sw_event_sched(PERF_COUNT_SW_CPU_MIGRATIONS, 1, 0);
> >  		task->sched_migrated = 0;
> >  	}
> >  }
> > @@ -1219,7 +1211,13 @@ static inline void perf_event_task_sched_out(struct task_struct *prev,
> >  					     struct task_struct *next)
> >  {
> > -	perf_sw_event_sched(PERF_COUNT_SW_CONTEXT_SWITCHES, 1, 0);
> > +	if (__perf_sw_enabled(PERF_COUNT_SW_CONTEXT_SWITCHES))
> > +		__perf_sw_event_sched(PERF_COUNT_SW_CONTEXT_SWITCHES, 1, 0);
> > +
> > +	if (__perf_sw_enabled(PERF_COUNT_SW_CGROUP_SWITCHES) &&
> > +	    (task_css_check(prev, perf_event_cgrp_id, 1)->cgroup !=
> > +	     task_css_check(next, perf_event_cgrp_id, 1)->cgroup))
> > +		__perf_sw_event_sched(PERF_COUNT_SW_CGROUP_SWITCHES, 1, 0);
>
> I was not clear about the RCU protection here. Is it ok to access
> the task's css_set directly?

We're here with preemption and IRQs disabled, good luck trying to get
RCU to consider that not a critical section and spirit things away
under us.
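For completeness, a minimal userspace sketch (not part of this thread) of
how such a software counter is consumed through perf_event_open(2). It
counts the existing PERF_COUNT_SW_CONTEXT_SWITCHES for the calling thread;
with the RFC applied, setting attr.config to the new
PERF_COUNT_SW_CGROUP_SWITCHES value would instead count only switches that
cross a perf-cgroup boundary. Depending on perf_event_paranoid, opening an
event that counts in kernel context may need elevated privileges.

	/* Count context switches of the calling thread for ~100ms. */
	#include <linux/perf_event.h>
	#include <sys/ioctl.h>
	#include <sys/syscall.h>
	#include <unistd.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
				    int cpu, int group_fd, unsigned long flags)
	{
		return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
	}

	int main(void)
	{
		struct perf_event_attr attr;
		uint64_t count;
		int fd;

		memset(&attr, 0, sizeof(attr));
		attr.size = sizeof(attr);
		attr.type = PERF_TYPE_SOFTWARE;
		attr.config = PERF_COUNT_SW_CONTEXT_SWITCHES;
		attr.disabled = 1;

		/* pid = 0, cpu = -1: this task, on any CPU */
		fd = perf_event_open(&attr, 0, -1, -1, 0);
		if (fd < 0) {
			perror("perf_event_open");
			return 1;
		}

		ioctl(fd, PERF_EVENT_IOC_RESET, 0);
		ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
		usleep(100 * 1000);	/* sleeping forces at least one switch */
		ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

		if (read(fd, &count, sizeof(count)) == sizeof(count))
			printf("context switches: %llu\n", (unsigned long long)count);

		close(fd);
		return 0;
	}

Note that the PERF_COUNT_SW_CGROUP_SWITCHES enum value only appears in the
uapi headers once this RFC (or a later version of it) is merged.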