Date: Thu, 25 Aug 2022 11:17:29 +0200
From: Peter Zijlstra
To: Ravi Bangoria
Cc: acme@kernel.org, alexander.shishkin@linux.intel.com, jolsa@redhat.com,
 namhyung@kernel.org, songliubraving@fb.com, eranian@google.com,
 alexey.budankov@linux.intel.com, ak@linux.intel.com, mark.rutland@arm.com,
 megha.dey@intel.com, frederic@kernel.org, maddy@linux.ibm.com,
 irogers@google.com, kim.phillips@amd.com, linux-kernel@vger.kernel.org,
 santosh.shukla@amd.com
Subject: Re: [RFC v2] perf: Rewrite core context handling
References: <20220113134743.1292-1-ravi.bangoria@amd.com>
 <10d8889e-4ca9-7e4e-a3e4-d769da79d047@amd.com>
In-Reply-To: <10d8889e-4ca9-7e4e-a3e4-d769da79d047@amd.com>

On Thu, Aug 25, 2022 at 11:09:05AM +0530, Ravi Bangoria wrote:
> > -static inline int __pmu_filter_match(struct perf_event *event)
> > -{
> > -	struct pmu *pmu = event->pmu;
> > -	return pmu->filter_match ? pmu->filter_match(event) : 1;
> > -}
> > -
> > -/*
> > - * Check whether we should attempt to schedule an event group based on
> > - * PMU-specific filtering. An event group can consist of HW and SW events,
> > - * potentially with a SW leader, so we must check all the filters, to
> > - * determine whether a group is schedulable:
> > - */
> > -static inline int pmu_filter_match(struct perf_event *event)
> > -{
> > -	struct perf_event *sibling;
> > -
> > -	if (!__pmu_filter_match(event))
> > -		return 0;
> > -
> > -	for_each_sibling_event(sibling, event) {
> > -		if (!__pmu_filter_match(sibling))
> > -			return 0;
> > -	}
> > -
> > -	return 1;
> > -}
> > -
> >  static inline int
> >  event_filter_match(struct perf_event *event)
> >  {
> >  	return (event->cpu == -1 || event->cpu == smp_processor_id()) &&
> > -	       perf_cgroup_match(event) && pmu_filter_match(event);
> > +	       perf_cgroup_match(event);
> 
> There are many callers of event_filter_match() which might not end up
> calling visit_groups_merge(). I hope this is an intentional change?

I thought I did, but let's go through them again.

event_filter_match() is called from:

 - __perf_event_enable(); here we'll end up in ctx_sched_in() which will
   dutifully skip the pmu in question. (fwiw, this is one of those sites
   where ctx_sched_{out,in}() could do with a @pmu argument.)

 - merge_sched_in(); this is after the new callsite in
   visit_groups_merge().

 - perf_adjust_freq_unthr_context(); if the pmu was skipped in
   visit_groups_merge() then ->state != ACTIVE and we'll bail out (a rough
   sketch of that early-out is at the end of this mail).

 - perf_iterate_ctx() / perf_iterate_sb_cpu(); these are for generating
   side-band events, and arguably not delivering them when running on the
   'wrong' CPU wasn't right to begin with.

So I think we're good. Hmm?
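
For reference, a rough sketch of the perf_adjust_freq_unthr_context()
early-out mentioned above (an illustration of the pattern only, not the
exact upstream code):

	struct perf_event *event;

	list_for_each_entry(event, &ctx->event_list, event_entry) {
		/*
		 * An event whose pmu was skipped by visit_groups_merge()
		 * was never scheduled in, so it is not ACTIVE and we bail
		 * here before event_filter_match() is ever consulted.
		 */
		if (event->state != PERF_EVENT_STATE_ACTIVE)
			continue;

		if (!event_filter_match(event))
			continue;

		/* ... frequency adjustment / unthrottling ... */
	}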