From: Namhyung Kim
Date: Fri, 4 Dec 2020 16:25:39 +0900
Subject: Re: [RFC 1/2] perf core: Add PERF_COUNT_SW_CGROUP_SWITCHES event
In-Reply-To: <20201203074522.GD2414@hirez.programming.kicks-ass.net>
To: Peter Zijlstra
Cc: Arnaldo Carvalho de Melo, Jiri Olsa, Ingo Molnar, Mark Rutland,
    Alexander Shishkin, LKML, Stephane Eranian, Ian Rogers, Andi Kleen
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Dec 3, 2020 at 4:45 PM Peter Zijlstra wrote:
>
> On Thu, Dec 03, 2020 at 11:10:30AM +0900, Namhyung Kim wrote:
> > On Thu, Dec 3, 2020 at 1:19 AM Peter Zijlstra wrote:
> >
> > > diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> > > index 9a38f579bc76..5eb284819ee5 100644
> > > --- a/include/linux/perf_event.h
> > > +++ b/include/linux/perf_event.h
> > > @@ -1174,25 +1174,19 @@ DECLARE_PER_CPU(struct pt_regs, __perf_regs[4]);
> > >   * which is guaranteed by us not actually scheduling inside other swevents
> > >   * because those disable preemption.
> > >   */
> > > -static __always_inline void
> > > -perf_sw_event_sched(u32 event_id, u64 nr, u64 addr)
> > > +static __always_inline void __perf_sw_event_sched(u32 event_id, u64 nr, u64 addr)
> >
> > It'd be nice to avoid the __ prefix if possible.
>
> Not having __ would seem to suggest it's a function of generic utility.
> Still, *shrug* ;-)

Ok, noted.
>
> > > {
> > > -	if (static_key_false(&perf_swevent_enabled[PERF_COUNT_SW_CPU_MIGRATIONS]))
> > > -		return true;
> > > -	return false;
> > > +	return static_key_false(&perf_swevent_enabled[swevt]);
> > > }
> > >
> > > static inline void perf_event_task_migrate(struct task_struct *task)
> > > @@ -1207,11 +1201,9 @@ static inline void perf_event_task_sched_in(struct task_struct *prev,
> > > 	if (static_branch_unlikely(&perf_sched_events))
> > > 		__perf_event_task_sched_in(prev, task);
> > >
> > > -	if (perf_sw_migrate_enabled() && task->sched_migrated) {
> > > -		struct pt_regs *regs = this_cpu_ptr(&__perf_regs[0]);
> > > -
> > > -		perf_fetch_caller_regs(regs);
> > > -		___perf_sw_event(PERF_COUNT_SW_CPU_MIGRATIONS, 1, regs, 0);
> > > +	if (__perf_sw_enabled(PERF_COUNT_SW_CPU_MIGRATIONS) &&
> > > +	    task->sched_migrated) {
> >
> > It seems task->sched_migrated is set only if the event is enabled,
> > then can we just check the value here?
>
> Why suffer the unconditional load and test? Your L1 too big?

I just wanted to avoid typing long lines.. ;-p

>
> > > +		__perf_sw_event_sched(PERF_COUNT_SW_CPU_MIGRATIONS, 1, 0);
> > > 		task->sched_migrated = 0;
> > > 	}
> > > }
> > > @@ -1219,7 +1211,13 @@ static inline void perf_event_task_sched_in(struct task_struct *prev,
> > > static inline void perf_event_task_sched_out(struct task_struct *prev,
> > > 					     struct task_struct *next)
> > > {
> > > -	perf_sw_event_sched(PERF_COUNT_SW_CONTEXT_SWITCHES, 1, 0);
> > > +	if (__perf_sw_enabled(PERF_COUNT_SW_CONTEXT_SWITCHES))
> > > +		__perf_sw_event_sched(PERF_COUNT_SW_CONTEXT_SWITCHES, 1, 0);
> > > +
> > > +	if (__perf_sw_enabled(PERF_COUNT_SW_CGROUP_SWITCHES) &&
> > > +	    (task_css_check(prev, perf_event_cgrp_id, 1)->cgroup !=
> > > +	     task_css_check(next, perf_event_cgrp_id, 1)->cgroup))
> > > +		__perf_sw_event_sched(PERF_COUNT_SW_CGROUP_SWITCHES, 1, 0);
> >
> > I was not clear about the RCU protection here. Is it ok to access
> > the task's css_set directly?
>
> We're here with preemption and IRQs disabled, good luck trying to get
> RCU to consider that not a critical section and spirit things away under
> us.

Ok, someday I'll go reading the RCU code.. :)

Thanks,
Namhyung