On Mon, 24 Mar 2014 15:34:36 -0400, Don Zickus wrote:
> The cache contention tool needs to keep all the perf records unique in order
> to properly parse all the data. Currently add_hist_entry() will combine
> duplicate records, adding the weight/period to the existing record.
>
> This throws away the unique data the cache contention tool needs (mainly
> the data source). Create a flag to force the records to stay unique.
No. This is why I said you need to add 'mem' and 'snoop' sort keys into
the c2c tool. This is not how sort works IMHO - if you need to make
samples unique, let the sort key(s) distinguish them somehow, or you can
combine identical samples (in terms of sort keys) and use the combined
entry's stat.nr_events and stat.period or weight.
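Something along these lines would keep the entries distinct - an untested
sketch, assuming the struct sort_entry callbacks and the
hist_entry::mem_info / perf_mem_data_src layout in tools/perf/util/sort.[ch];
hist_entry__snoop_snprintf and HISTC_MEM_SNOOP here are placeholder names
for the formatting helper and the column width index:

	/* Compare two entries by the snoop mode recorded in data_src. */
	static int64_t
	sort__snoop_cmp(struct hist_entry *left, struct hist_entry *right)
	{
		union perf_mem_data_src data_src_l;
		union perf_mem_data_src data_src_r;

		if (left->mem_info)
			data_src_l = left->mem_info->data_src;
		else
			data_src_l.mem_snoop = PERF_MEM_SNOOP_NA;

		if (right->mem_info)
			data_src_r = right->mem_info->data_src;
		else
			data_src_r.mem_snoop = PERF_MEM_SNOOP_NA;

		return (int64_t)(data_src_r.mem_snoop - data_src_l.mem_snoop);
	}

	struct sort_entry sort_mem_snoop = {
		.se_header	= "Snoop",
		.se_cmp		= sort__snoop_cmp,
		.se_snprintf	= hist_entry__snoop_snprintf,
		.se_width_idx	= HISTC_MEM_SNOOP,
	};

With 'snoop' (and similarly 'mem') in the sort key list, add_hist_entry()
only collapses entries whose data source really is the same.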
Thanks,
Namhyung
On Wed, Apr 09, 2014 at 02:31:00PM +0900, Namhyung Kim wrote:
> On Mon, 24 Mar 2014 15:34:36 -0400, Don Zickus wrote:
> > The cache contention tool needs to keep all the perf records unique in order
> > to properly parse all the data. Currently add_hist_entry() will combine
> > duplicate records, adding the weight/period to the existing record.
> >
> > This throws away the unique data the cache contention tool needs (mainly
> > the data source). Create a flag to force the records to stay unique.
>
> No. This is why I said you need to add 'mem' and 'snoop' sort keys into
> the c2c tool. This is not how sort works IMHO - if you need to make
> samples unique, let the sort key(s) distinguish them somehow, or you can
> combine identical samples (in terms of sort keys) and use the combined
> entry's stat.nr_events and stat.period or weight.
Ok. I understand your point. Perhaps this was due to my not fully
understanding the sorting algorithm when I did this. I can look into
adding the 'mem' and 'snoop' sort keys.

One concern I do have is that we were calculating statistics based on the
weight (mean, median, stddev). I was afraid that combining the entries
would throw off those calculations, as we could no longer accurately
determine them. Is that true?
Cheers,
Don
Hi Don,
On Wed, 9 Apr 2014 09:57:06 -0400, Don Zickus wrote:
> On Wed, Apr 09, 2014 at 02:31:00PM +0900, Namhyung Kim wrote:
>> On Mon, 24 Mar 2014 15:34:36 -0400, Don Zickus wrote:
>> > The cache contention tool needs to keep all the perf records unique in order
>> > to properly parse all the data. Currently add_hist_entry() will combine
>> > duplicate records, adding the weight/period to the existing record.
>> >
>> > This throws away the unique data the cache contention tool needs (mainly
>> > the data source). Create a flag to force the records to stay unique.
>>
>> No. This is why I said you need to add 'mem' and 'snoop' sort keys into
>> the c2c tool. This is not how sort works IMHO - if you need to make
>> samples unique, let the sort key(s) distinguish them somehow, or you can
>> combine identical samples (in terms of sort keys) and use the combined
>> entry's stat.nr_events and stat.period or weight.
>
> Ok. I understand your point. Perhaps this was due to my not fully
> understanding the sorting algorithm when I did this. I can look into
> adding the 'mem' and 'snoop' sort keys.
>
> One concern I do have is that we were calculating statistics based on the
> weight (mean, median, stddev). I was afraid that combining the entries
> would throw off those calculations, as we could no longer accurately
> determine them. Is that true?
Yes - if you want to calculate the stats accurately, it needs to be done
when processing samples, not hist_entries, IMHO.
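For example, a per-sample accumulator along these lines (a rough sketch,
not existing perf code - weight_stats and its helpers are made-up names)
keeps the mean and stddev exact no matter how the hist entries are
collapsed later; only the median genuinely needs the raw weights (or a
sorted copy of them) kept per sample:

	#include <math.h>
	#include <stdint.h>

	/* Hypothetical accumulator, updated once per processed sample. */
	struct weight_stats {
		uint64_t	nr;	/* number of samples seen	*/
		double		mean;	/* running mean of the weights	*/
		double		m2;	/* sum of squared deviations	*/
	};

	/* Welford's online algorithm: numerically stable mean/variance. */
	static void weight_stats__update(struct weight_stats *ws, uint64_t weight)
	{
		double delta = (double)weight - ws->mean;

		ws->nr++;
		ws->mean += delta / ws->nr;
		ws->m2 += delta * ((double)weight - ws->mean);
	}

	static double weight_stats__stddev(struct weight_stats *ws)
	{
		return ws->nr > 1 ? sqrt(ws->m2 / (ws->nr - 1)) : 0.0;
	}

Calling weight_stats__update() from the sample-processing path means the
stats no longer depend on what the sort keys do to the hist entries.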
Thanks,
Namhyung