Date: Fri, 18 Jan 2019 16:58:14 +0100
From: Peter Zijlstra
To: Vince Weaver
Cc: linux-kernel@vger.kernel.org, Ingo Molnar, Arnaldo Carvalho de Melo
Subject: Re: perf: rdpmc bug when viewing all procs on remote cpu
Message-ID: <20190118155814.GC14054@worktop.programming.kicks-ass.net>
References: <20190118120149.GC27931@hirez.programming.kicks-ass.net>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Jan 18, 2019 at 09:09:04AM -0500, Vince Weaver wrote:
> On Fri, 18 Jan 2019, Peter Zijlstra wrote:
>
> > On Fri, Jan 11, 2019 at 04:52:22PM -0500, Vince Weaver wrote:
> > > On Thu, 10 Jan 2019, Vince Weaver wrote:
> > > >
> > > > On Thu, 10 Jan 2019, Vince Weaver wrote:
> > > > >
> > > > > On Thu, 10 Jan 2019, Vince Weaver wrote:
> > > > > >
> > > > > > However if you create an all-process attached to CPU event:
> > > > > >     perf_event_open(attr, -1, X, -1, 0);
> > > > > > the mmap event index is set as if this were a valid event and so
> > > > > > the rdpmc succeeds even though it shouldn't (we're trying to read
> > > > > > an event value on a remote cpu with a local rdpmc).
> > >
> > > so on further looking at the code, it doesn't appear that rdpmc events
> > > are explicitly marked as unavailable in the attach-cpu or attach-pid
> > > case; it's just by luck that the check for PERF_EVENT_STATE_ACTIVE
> > > catches most of the cases?
> > >
> > > should an explicit check be added to zero out userpg->index in cases
> > > where the event being measured is running on a different core?
> >
> > And how would we know? We don't know what CPU will be observing the
> > mmap().
> >
> > RDPMC fundamentally only makes sense on 'self' (either task or CPU).
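[For context: the userspace read sequence documented in the perf_event_open(2)
man page already gates rdpmc on the index field of the mmap'ed
perf_event_mmap_page, which is why zeroing userpg->index would make such
events fall back cleanly. A minimal C sketch of that check follows;
read_self_counter() and rdpmc_x86() are illustrative names, not kernel or
man-page identifiers, and the asm is x86-only:]

```c
#include <linux/perf_event.h>
#include <stdint.h>

/* Illustrative wrapper around the x86 rdpmc instruction. */
static inline uint64_t rdpmc_x86(uint32_t counter)
{
	uint32_t lo, hi;

	__asm__ volatile("rdpmc" : "=a" (lo), "=d" (hi) : "c" (counter));
	return ((uint64_t)hi << 32) | lo;
}

/*
 * Lock-free counter read following the seqlock-style sequence from
 * perf_event_open(2).  Returns 0 and stores the count in *val, or -1
 * when pc->index is zero, i.e. rdpmc is not usable for this event
 * (event not scheduled on this CPU, or rdpmc disabled).
 */
static int read_self_counter(volatile struct perf_event_mmap_page *pc,
			     uint64_t *val)
{
	uint32_t seq, idx;
	uint64_t count;

	do {
		seq = pc->lock;
		__sync_synchronize();
		idx = pc->index;
		count = pc->offset;
		if (idx)
			count += rdpmc_x86(idx - 1);
		__sync_synchronize();
	} while (pc->lock != seq);

	if (!idx)
		return -1;
	*val = count;
	return 0;
}
```

The index == 0 path never executes rdpmc, so a kernel that zeroed
userpg->index for remote-CPU events would make this sequence fail safely.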
> so is this a "don't do that then" thing and I should have PAPI
> userspace avoid using rdpmc() whenever a proc/cpu was attached to?

You can actually use rdpmc when you attach to a CPU, but you have to
ensure that the userspace component is guaranteed to run on that very
CPU (sched_setaffinity(2) comes to mind).

> Or is there a way to have the kernel indicate this? Does the kernel
> track current CPU and original CPU of the mmap and could zero out the
> index field in this case? Or would that add too much overhead?

Impossible, I'm afraid. Memory is not associated with a CPU; its
contents are the same whichever CPU reads from it.

The best we could possibly do is put the (target, not current) cpu
number in the mmap page; but userspace should already know this, for it
created the event and therefore knows it already.