Here are links to some 2.5.70 nightly regression comparisons:
Each -bk snapshot compared to the base:
http://www-124.ibm.com/developerworks/oss/linuxperf/regression/2.5.70-bk1/2.5.70-vs-2.5.70-bk1/
http://www-124.ibm.com/developerworks/oss/linuxperf/regression/2.5.70-bk2/2.5.70-vs-2.5.70-bk2/
http://www-124.ibm.com/developerworks/oss/linuxperf/regression/2.5.70-bk3/2.5.70-vs-2.5.70-bk3/
http://www-124.ibm.com/developerworks/oss/linuxperf/regression/2.5.70-bk4/2.5.70-vs-2.5.70-bk4/
http://www-124.ibm.com/developerworks/oss/linuxperf/regression/2.5.70-bk5/2.5.70-vs-2.5.70-bk5/
http://www-124.ibm.com/developerworks/oss/linuxperf/regression/2.5.70-bk6/2.5.70-vs-2.5.70-bk6/
http://www-124.ibm.com/developerworks/oss/linuxperf/regression/2.5.70-bk7/2.5.70-vs-2.5.70-bk7/
http://www-124.ibm.com/developerworks/oss/linuxperf/regression/2.5.70-bk8/2.5.70-vs-2.5.70-bk8/
http://www-124.ibm.com/developerworks/oss/linuxperf/regression/2.5.70-bk9/2.5.70-vs-2.5.70-bk9/
Each -bk snapshot compared to the previous:
http://www-124.ibm.com/developerworks/oss/linuxperf/regression/2.5.70-bk1/2.5.70-bk1-vs-2.5.70-bk2/
http://www-124.ibm.com/developerworks/oss/linuxperf/regression/2.5.70-bk2/2.5.70-bk2-vs-2.5.70-bk3/
http://www-124.ibm.com/developerworks/oss/linuxperf/regression/2.5.70-bk3/2.5.70-bk3-vs-2.5.70-bk4/
http://www-124.ibm.com/developerworks/oss/linuxperf/regression/2.5.70-bk4/2.5.70-bk4-vs-2.5.70-bk5/
http://www-124.ibm.com/developerworks/oss/linuxperf/regression/2.5.70-bk5/2.5.70-bk5-vs-2.5.70-bk6/
http://www-124.ibm.com/developerworks/oss/linuxperf/regression/2.5.70-bk6/2.5.70-bk6-vs-2.5.70-bk7/
http://www-124.ibm.com/developerworks/oss/linuxperf/regression/2.5.70-bk7/2.5.70-bk7-vs-2.5.70-bk8/
http://www-124.ibm.com/developerworks/oss/linuxperf/regression/2.5.70-bk8/2.5.70-bk8-vs-2.5.70-bk9/
Each -mm patch compared to the base:
http://www-124.ibm.com/developerworks/oss/linuxperf/regression/2.5.70-mm1/2.5.70-vs-2.5.70-mm1/
http://www-124.ibm.com/developerworks/oss/linuxperf/regression/2.5.70-mm2/2.5.70-vs-2.5.70-mm2/
http://www-124.ibm.com/developerworks/oss/linuxperf/regression/2.5.70-mm3/2.5.70-vs-2.5.70-mm3/
Each -mm patch compared to the previous:
http://www-124.ibm.com/developerworks/oss/linuxperf/regression/2.5.70-mm2/2.5.70-mm1-vs-2.5.70-mm2/
http://www-124.ibm.com/developerworks/oss/linuxperf/regression/2.5.70-mm3/2.5.70-mm2-vs-2.5.70-mm3/
Each -mjb patch compared to the base:
http://www-124.ibm.com/developerworks/oss/linuxperf/regression/2.5.70-mjb1/2.5.70-vs-2.5.70-mjb1/
Mark
>
http://www-124.ibm.com/developerworks/oss/linuxperf/regression/2.5.70-mm1/2.5.70-vs-2.5.70-mm1/
This shows a significant (20%) degradation for SpecSDET in the mm1 tree.
The regression carries forward into the mm2 and mm3 trees. I see many
more calls to page_remove_rmap and page_add_rmap in the profile for mm1.
I'm not sure whether this is the cause, but it probably needs to be
looked at.
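The kind of per-function comparison described above can be sketched as a
small awk diff over two flat profiles. The two-column "symbol count"
format and the sample numbers below are illustrative assumptions, not
the actual profiler output from these runs:

```shell
# Build two toy profiles (symbol hit-count pairs); the counts here are
# made up to illustrate the comparison, not measured data.
cat > base.prof <<'EOF'
page_add_rmap 120
page_remove_rmap 110
do_page_fault 300
EOF
cat > mm1.prof <<'EOF'
page_add_rmap 480
page_remove_rmap 450
do_page_fault 310
EOF
# First pass (NR==FNR) loads the base counts; second pass prints every
# symbol whose hit count grew, with the delta.
awk 'NR==FNR { base[$1]=$2; next }
     { delta = $2 - base[$1];
       if (delta > 0) printf "%s +%d\n", $1, delta }' base.prof mm1.prof
```

Functions such as page_add_rmap then stand out by the size of their
delta rather than by eyeballing two long listings.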
Steve
Mark Peloquin wrote:
>
>
> Here are links to some 2.5.70 nightly regression comparisons:
>
It appears your tiobench reads are coming out of cache.
Would you be able to add some runs with the size >= 2*ram
please? I don't know if anyone would still find the
current type useful - maybe for scalability work?
Thanks
Nick
Hi Nick,
Yes, the read tests do currently run primarily out of cache. This does
have some value for measuring the relative overhead of the I/O APIs. To
avoid cache effects and measure the real throughput, we need to bump
the size up and, as you suggest, 2*ram is a good value to use. We've
tried to keep the overall test-suite time down by maintaining shorter
runs wherever possible. So rather than increasing the run size to
2*ram, we will reduce the memory available at boot and then use runs of
twice that smaller size. This will keep the runs from taking too long
and still avoid the cache benefits. We will have to tweak this to find
the appropriate balance of memory size vs. run time. This should be
available in a few days.
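The sizing rule above can be sketched as a small script that derives a
run size of at least 2*ram from /proc/meminfo. The tiobench invocation
at the end is commented out and its options are assumptions; the
fallback RAM value is only there so the sketch runs anywhere:

```shell
# Read total RAM in kB from /proc/meminfo; fall back to 1 GB if the
# file is unavailable (e.g. on a non-Linux box), purely for illustration.
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo 2>/dev/null || echo 1048576)
# Target file size of 2*RAM, in MB, so reads cannot be served from the
# page cache.
size_mb=$(( ram_kb * 2 / 1024 ))
echo "tiobench file size: ${size_mb} MB"
# Hypothetical invocation (option names are assumptions):
# ./tiobench.pl --size "$size_mb" --threads 8
```

Booting with a mem= kernel parameter, as Mark proposes, shrinks ram_kb
and therefore the run size, keeping total run time manageable.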
Thanks for the feedback.
Mark
Nick Piggin wrote:
>
>
> Mark Peloquin wrote:
>
>>
>>
>> Here are links to some 2.5.70 nightly regression comparisons:
>>
> It appears your tiobench reads are coming out of cache.
> Would you be able to add some runs with the size >= 2*ram
> please? I don't know if anyone would still find the
> current type useful - maybe for scalability work?
>
> Thanks
> Nick