Date: Mon, 25 Apr 2016 11:20:35 +0200
From: Peter Zijlstra
To: Vikas Shivappa
Cc: tony.luck@intel.com, ravi.v.shankar@intel.com, fenghua.yu@intel.com,
	vikas.shivappa@intel.com, x86@kernel.org, linux-kernel@vger.kernel.org,
	hpa@zytor.com, tglx@linutronix.de, mingo@kernel.org,
	h.peter.anvin@intel.com
Subject: Re: [PATCH 1/4] perf/x86/cqm,mbm: Store cqm,mbm count for all events when RMID is recycled
Message-ID: <20160425092035.GH3430@twins.programming.kicks-ass.net>
References: <1461371241-4258-1-git-send-email-vikas.shivappa@linux.intel.com>
	<1461371241-4258-2-git-send-email-vikas.shivappa@linux.intel.com>
In-Reply-To: <1461371241-4258-2-git-send-email-vikas.shivappa@linux.intel.com>

On Fri, Apr 22, 2016 at 05:27:18PM -0700, Vikas Shivappa wrote:
> During RMID recycling, when an event loses the RMID we saved the counter
> for group leader but it was not being saved for all the events in an
> event group. This would lead to a situation where if 2 perf instances
> are counting the same PID one of them would not see the updated count
> which other perf instance is seeing. This patch tries to fix the issue
> by saving the count for all the events in the same event group.

> @@ -486,14 +495,21 @@ static u32 intel_cqm_xchg_rmid(struct perf_event *group, u32 rmid)
>  	 * If our RMID is being deallocated, perform a read now.
>  	 */
>  	if (__rmid_valid(old_rmid) && !__rmid_valid(rmid)) {
>
> +		rr = __init_rr(old_rmid, group->attr.config, 0);
>  		cqm_mask_call(&rr);
>  		local64_set(&group->count, atomic64_read(&rr.value));
> +		list_for_each_entry(event, head, hw.cqm_group_entry) {
> +			if (event->hw.is_group_event) {
> +
> +				evttype = event->attr.config;
> +				rr = __init_rr(old_rmid, evttype, 0);
> +
> +				cqm_mask_call(&rr);
> +				local64_set(&event->count,
> +						atomic64_read(&rr.value));

Randomly indent much?

> +			}
> +		}
>  	}
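
[Editorial note: a whitespace-only cleanup of the quoted hunk, shown as a sketch; the code is unchanged from the patch, only the stray blank lines are dropped and the `atomic64_read()` continuation is aligned to the open parenthesis per the usual kernel style.]

```
	if (__rmid_valid(old_rmid) && !__rmid_valid(rmid)) {
		rr = __init_rr(old_rmid, group->attr.config, 0);
		cqm_mask_call(&rr);
		local64_set(&group->count, atomic64_read(&rr.value));
		list_for_each_entry(event, head, hw.cqm_group_entry) {
			if (event->hw.is_group_event) {
				rr = __init_rr(old_rmid, event->attr.config, 0);
				cqm_mask_call(&rr);
				local64_set(&event->count,
					    atomic64_read(&rr.value));
			}
		}
	}
```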