From: Rob Herring
Date: Mon, 10 May 2021 15:29:21 -0500
Subject: Re: [PATCH V6] perf: Reset the dirty counter to prevent the leak for an RDPMC task
To: Peter Zijlstra
Cc: "Liang, Kan", Ingo Molnar, linux-kernel@vger.kernel.org, Andi Kleen,
    Arnaldo Carvalho de Melo, Mark Rutland, Andy Lutomirski,
    Stephane Eranian, Namhyung Kim
References: <1619115952-155809-1-git-send-email-kan.liang@linux.intel.com>
    <20210510191811.GA21560@worktop.programming.kicks-ass.net>
In-Reply-To: <20210510191811.GA21560@worktop.programming.kicks-ass.net>

On Mon, May 10, 2021 at 2:18 PM Peter Zijlstra wrote:
>
> On Thu, Apr 22, 2021 at 11:25:52AM -0700, kan.liang@linux.intel.com wrote:
>
> > - Add a new method check_leakage() to check and clear dirty counters
> >   to prevent potential leakage.
>
> I really dislike adding spurious callbacks, also because indirect calls
> are teh suck, but also because it pollutes the interface so.
>
> That said, I'm not sure I actually like the below any better :/
>
> ---
>
>  arch/x86/events/core.c       | 58 +++++++++++++++++++++++++++++++++++++++++---
>  arch/x86/events/perf_event.h |  1 +
>  include/linux/perf_event.h   |  2 ++
>  kernel/events/core.c         |  7 +++++-
>  4 files changed, 63 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> index 8e509325c2c3..e650c4ab603a 100644
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -740,21 +740,26 @@ void x86_pmu_enable_all(int added)
>  	}
>  }
>
> -static inline int is_x86_event(struct perf_event *event)
> +static inline bool is_x86_pmu(struct pmu *_pmu)
>  {
>  	int i;
>
>  	if (!is_hybrid())
> -		return event->pmu == &pmu;
> +		return _pmu == &pmu;
>
>  	for (i = 0; i < x86_pmu.num_hybrid_pmus; i++) {
> -		if (event->pmu == &x86_pmu.hybrid_pmu[i].pmu)
> +		if (_pmu == &x86_pmu.hybrid_pmu[i].pmu)
>  			return true;
>  	}
>
>  	return false;
>  }

[...]

> +bool arch_perf_needs_sched_in(struct pmu *pmu)
> +{
> +	if (!READ_ONCE(x86_pmu.attr_rdpmc))
> +		return false;
> +
> +	if (!is_x86_pmu(pmu))
> +		return false;
> +
> +	return current->mm && atomic_read(&current->mm->context.perf_rdpmc_allowed);
> +}

Why add an arch hook for something that clearly looks to be per PMU? Couldn't we add another atomic/flag for calling sched_task() that is per PMU rather than per CPU? With that, I think I can avoid a hook in switch_mm() and keep everything self-contained in the Arm PMU driver.

Rob
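[Editor's illustration] The leak the patch addresses — a freed counter keeping its last value until an RDPMC-capable task reads it — can be modeled outside the kernel. The following userspace C sketch is illustrative only; the names (`counters`, `dirty_mask`, `counter_release`, `sched_in_clear_dirty`) are hypothetical and do not match the kernel's, but the bookkeeping mirrors the patch's idea: releasing a counter only marks it dirty, and a clearing pass at sched-in zeroes every dirty counter before the incoming task can read it.

```c
#include <stdint.h>

#define NUM_COUNTERS 8

/* Model state: counter values, plus one "dirty" bit per counter that
 * was used and released without being cleared. */
static uint64_t counters[NUM_COUNTERS];
static uint64_t dirty_mask;

/* Freeing a counter leaves its stale value in place; it is merely
 * flagged dirty, which is where the potential leak comes from. */
static void counter_release(int idx)
{
	dirty_mask |= 1ULL << idx;
}

/* At sched-in of an RDPMC task, walk the set bits (akin to a
 * for_each_set_bit() loop) and zero every dirty counter. */
static void sched_in_clear_dirty(void)
{
	while (dirty_mask) {
		int idx = __builtin_ctzll(dirty_mask); /* lowest set bit */
		counters[idx] = 0;
		dirty_mask &= dirty_mask - 1;          /* clear that bit */
	}
}
```

After `sched_in_clear_dirty()` runs, no stale value from a previous user of the counter is observable.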