Date: Mon, 25 Apr 2016 09:26:35 -0700 (PDT)
From: Vikas Shivappa
To: Peter Zijlstra
Cc: Vikas Shivappa, tony.luck@intel.com, ravi.v.shankar@intel.com, fenghua.yu@intel.com, vikas.shivappa@intel.com, x86@kernel.org, linux-kernel@vger.kernel.org, hpa@zytor.com, tglx@linutronix.de, mingo@kernel.org, h.peter.anvin@intel.com
Subject: Re: [PATCH 1/4] perf/x86/cqm,mbm: Store cqm,mbm count for all events when RMID is recycled
In-Reply-To: <20160425092035.GH3430@twins.programming.kicks-ass.net>
References: <1461371241-4258-1-git-send-email-vikas.shivappa@linux.intel.com> <1461371241-4258-2-git-send-email-vikas.shivappa@linux.intel.com> <20160425092035.GH3430@twins.programming.kicks-ass.net>

On Mon, 25 Apr 2016, Peter Zijlstra wrote:

> On Fri, Apr 22, 2016 at 05:27:18PM -0700, Vikas Shivappa wrote:
>> During RMID recycling, when an event loses its RMID, the counter was
>> saved for the group leader but not for the other events in the event
>> group. This could lead to a situation where, if two perf instances are
>> counting the same PID, one of them would not see the updated count that
>> the other instance sees. This patch fixes the issue by saving the count
>> for all events in the event group.
>
>> @@ -486,14 +495,21 @@ static u32 intel_cqm_xchg_rmid(struct perf_event *group, u32 rmid)
>>  	 * If our RMID is being deallocated, perform a read now.
>>  	 */
>>  	if (__rmid_valid(old_rmid) && !__rmid_valid(rmid)) {
>>
>> +		rr = __init_rr(old_rmid, group->attr.config, 0);
>>  		cqm_mask_call(&rr);
>>  		local64_set(&group->count, atomic64_read(&rr.value));
>> +		list_for_each_entry(event, head, hw.cqm_group_entry) {
>> +			if (event->hw.is_group_event) {
>> +
>> +				evttype = event->attr.config;
>> +				rr = __init_rr(old_rmid, evttype, 0);
>> +
>> +				cqm_mask_call(&rr);
>> +				local64_set(&event->count,
>> +					    atomic64_read(&rr.value));
>
> Randomly indent much?

Will fix. The stray blank lines were added by mistake, in advance of the next patch.

Thanks,
Vikas

>> +			}
>> +		}
>> 	}
>