2012-10-11 14:57:46

by Andrea Arcangeli

Subject: Re: [PATCH 00/33] AutoNUMA27

Hi Mel,

On Thu, Oct 11, 2012 at 11:19:30AM +0100, Mel Gorman wrote:
> As a basic sniff test I added a test to MMtests for the AutoNUMA
> Benchmark on a 4-node machine and the following fell out.
>
> 3.6.0 3.6.0
> vanilla autonuma-v33r6
> User SMT 82851.82 ( 0.00%) 33084.03 ( 60.07%)
> User THREAD_ALLOC 142723.90 ( 0.00%) 47707.38 ( 66.57%)
> System SMT 396.68 ( 0.00%) 621.46 (-56.67%)
> System THREAD_ALLOC 675.22 ( 0.00%) 836.96 (-23.95%)
> Elapsed SMT 1987.08 ( 0.00%) 828.57 ( 58.30%)
> Elapsed THREAD_ALLOC 3222.99 ( 0.00%) 1101.31 ( 65.83%)
> CPU SMT 4189.00 ( 0.00%) 4067.00 ( 2.91%)
> CPU THREAD_ALLOC 4449.00 ( 0.00%) 4407.00 ( 0.94%)

Thanks a lot for the help and for looking into it!

Just curious, why are you running only numa02_SMT and
numa01_THREAD_ALLOC? And not numa01 and numa02? (the standard versions
without a suffix)

>
> The performance improvements are certainly there for this basic test but
> I note the System CPU usage is very high.

Yes, migration is expensive, but after convergence is reached the
system time should be the same as upstream.

btw, I improved things further in autonuma28 (new branch in aa.git).

>
> The vmstats showed this up
>
> THP fault alloc 81376 86070
> THP collapse alloc 14 40423
> THP splits 8 41792
>
> So we're doing a lot of splits and collapses for THP there. There is a
> possibility that khugepaged and the autonuma kernel thread are doing some
> busy work. Not a show-stopper, just interesting.
>
> I've done no analysis at all and this was just to have something to look
> at before looking at the code closer.

Sure, the idea is to have THP native migration; then we'll do zero
collapses/splits.

> > The objective of AutoNUMA is to provide out-of-the-box performance as
> > close as possible to (and potentially faster than) manual NUMA hard
> > bindings.
> >
> > It is not very intrusive into the kernel core and is well structured
> > into separate source modules.
> >
> > AutoNUMA was extensively tested against 3.x upstream kernels and other
> > NUMA placement algorithms such as numad (in userland through cpusets)
> > and schednuma (in kernel too) and was found superior in all cases.
> >
> > Most important: not a single benchmark showed a regression yet when
> > compared to vanilla kernels. Not even on the 2 node systems where the
> > NUMA effects are less significant.
> >
>
> Ok, I have not run a general regression test and won't get the chance to
> do so soon but hopefully others will. One thing they might want to watch out
> for is System CPU time. It's possible that your AutoNUMA benchmark
> triggers a worst-case but it's worth keeping an eye on because any cost
> from that has to be offset by gains from better NUMA placements.

Good idea to monitor it indeed.

> Is STREAM really a good benchmark in this case? Unless you also ran it in
> parallel mode, it basically operates against three arrays and is not really
> NUMA friendly once the total size is greater than a NUMA node. I guess
> it makes sense to run it just to see whether autonuma breaks it :)

The way this is run is that there is 1 stream instance, then 4, then 8,
until we max out all the CPUs.

I think we could run "memhog" instead of "stream" and it'd be the
same; stream probably just resembles real-life computations better.
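
(For clarity, the multi-instance setup amounts to something like the
sketch below. This is not the real start_bench.sh driver; the ./stream
path and the per-round instance counts are placeholders.)

/*
 * Rough illustration of the multi-instance run: start 1, then 4, then 8
 * concurrent copies of the (assumed single-threaded) stream binary and
 * time each round.  Not the real driver, just a sketch.
 */
#include <stdio.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	int rounds[] = { 1, 4, 8 };	/* grows until all CPUs are busy */
	unsigned int i;
	int j;

	for (i = 0; i < sizeof(rounds) / sizeof(rounds[0]); i++) {
		time_t start = time(NULL);

		for (j = 0; j < rounds[i]; j++) {
			if (fork() == 0) {
				execl("./stream", "stream", (char *)NULL);
				_exit(1);
			}
		}
		while (wait(NULL) > 0)
			;	/* wait for every instance of this round */

		printf("%d instances: %ld seconds\n", rounds[i],
		       (long)(time(NULL) - start));
	}
	return 0;
}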

The upstream scheduler lacks any notion of NUMA affinity, so at some
point during the 5 min run a process changes node; it doesn't notice
that its memory was allocated elsewhere, so it stays on the new node,
and the memory can't follow the CPU either. From then on it runs much
slower.

So it's the simplest test of all to get right: all it requires is some
notion of node affinity.

It's also the only workload that the home-node design of schednuma in
tip.git can get right (versions of schednuma newer than current tip.git
introduced the CPU-follows-memory design of AutoNUMA, so schednuma will
have a chance to get more right than just the multi-instance stream
benchmark).

So it's just a verification that the simple stuff (single-threaded
process computation) is OK and that the upstream regression vs hard
NUMA bindings is fixed.

stream is also one case where we have to perform identically to the
hard NUMA bindings. No CPU or memory migration must ever happen with
AutoNUMA in the stream benchmark. AutoNUMA will just monitor it, find
that it is already in the best place, and leave it alone.

With the autonuma-benchmark it's impossible to reach performance
identical to the _HARD_BIND case because _HARD_BIND doesn't need to
do any memory migration (I'm 3 seconds away from hard bindings in a
198 sec run though, just the 3 seconds it takes to migrate 3g of ram ;).
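
To make concrete what the _HARD_BIND comparison does differently, a
manual hard binding looks roughly like this with libnuma (just a
sketch; node 0 and the 1G working set are arbitrary choices, not the
benchmark's actual parameters):

/*
 * Sketch of a manual "hard NUMA binding": pin the thread to one node and
 * allocate the working set on that same node, so no migration is ever
 * needed.  Build with -lnuma.  Node 0 and the 1G size are arbitrary.
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	size_t size = 1UL << 30;	/* 1G working set */
	char *buf;

	if (numa_available() < 0) {
		fprintf(stderr, "no NUMA support\n");
		return 1;
	}

	numa_run_on_node(0);			/* CPU bound to node 0 */
	buf = numa_alloc_onnode(size, 0);	/* memory bound to node 0 */
	if (!buf)
		return 1;

	memset(buf, 0, size);	/* fault the memory in locally */
	/* ... the computation then runs with purely local accesses ... */
	numa_free(buf, size);
	return 0;
}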

>
> >
> > == iozone ==
> >
> > ALL INIT RE RE RANDOM RANDOM BACKWD RECRE STRIDE F FRE F FRE
> > FILE TYPE (KB) IOS WRITE WRITE READ READ READ WRITE READ WRITE READ WRITE WRITE READ READ
> > ====--------------------------------------------------------------------------------------------------------------
> > noautonuma ALL 2492 1224 1874 2699 3669 3724 2327 2638 4091 3525 1142 1692 2668 3696
> > autonuma ALL 2531 1221 1886 2732 3757 3760 2380 2650 4192 3599 1150 1731 2712 3825
> >
> > AutoNUMA can't help much for I/O loads but you can see there seems to
> > be a small improvement there too. The important thing for I/O loads is
> > to verify that there is no regression.
> >
>
> It probably is unreasonable to expect autonuma to handle the case where
> a file-based workload has not been tuned for NUMA. In too many cases
> it's going to be read/write based so you're not going to get the
> statistics you need.

Agreed. Some statistics may still accumulate, which is better than
nothing, but unless the workload is CPU and memory bound we can't
expect to see any difference.

This is meant as a verification that we're not introducing regressions
to I/O bound loads.

Andrea


2012-10-11 15:35:15

by Mel Gorman

Subject: Re: [PATCH 00/33] AutoNUMA27

On Thu, Oct 11, 2012 at 04:56:11PM +0200, Andrea Arcangeli wrote:
> Hi Mel,
>
> On Thu, Oct 11, 2012 at 11:19:30AM +0100, Mel Gorman wrote:
> > As a basic sniff test I added a test to MMtests for the AutoNUMA
> > Benchmark on a 4-node machine and the following fell out.
> >
> > 3.6.0 3.6.0
> > vanilla autonuma-v33r6
> > User SMT 82851.82 ( 0.00%) 33084.03 ( 60.07%)
> > User THREAD_ALLOC 142723.90 ( 0.00%) 47707.38 ( 66.57%)
> > System SMT 396.68 ( 0.00%) 621.46 (-56.67%)
> > System THREAD_ALLOC 675.22 ( 0.00%) 836.96 (-23.95%)
> > Elapsed SMT 1987.08 ( 0.00%) 828.57 ( 58.30%)
> > Elapsed THREAD_ALLOC 3222.99 ( 0.00%) 1101.31 ( 65.83%)
> > CPU SMT 4189.00 ( 0.00%) 4067.00 ( 2.91%)
> > CPU THREAD_ALLOC 4449.00 ( 0.00%) 4407.00 ( 0.94%)
>
> Thanks a lot for the help and for looking into it!
>
> Just curious, why are you running only numa02_SMT and
> numa01_THREAD_ALLOC? And not numa01 and numa02? (the standard versions
> without a suffix)
>

Bug in the testing script on my end. Each of them is run separately and it
looks like, in retrospect, that the THREAD_ALLOC test actually ran numa01 and
then numa01_THREAD_ALLOC. The intention was to allow additional stats to be
gathered independently of what start_bench.sh collects. I will improve it
in the future.

> >
> > The performance improvements are certainly there for this basic test but
> > I note the System CPU usage is very high.
>
> Yes, migration is expensive, but after convergence is reached the
> system time should be the same as upstream.
>

Ok.

> btw, I improved things further in autonuma28 (new branch in aa.git).
>

Ok.

> >
> > The vmstats showed this up
> >
> > THP fault alloc 81376 86070
> > THP collapse alloc 14 40423
> > THP splits 8 41792
> >
> > So we're doing a lot of splits and collapses for THP there. There is a
> > possibility that khugepaged and the autonuma kernel thread are doing some
> > busy work. Not a show-stopper, just interesting.
> >
> > I've done no analysis at all and this was just to have something to look
> > at before looking at the code closer.
>
> Sure, the idea is to have THP native migration; then we'll do zero
> collapses/splits.
>

Seems reasonable. It should be straightforward to measure when/if that happens.

> > > The objective of AutoNUMA is to provide out-of-the-box performance as
> > > close as possible to (and potentially faster than) manual NUMA hard
> > > bindings.
> > >
> > > It is not very intrusive into the kernel core and is well structured
> > > into separate source modules.
> > >
> > > AutoNUMA was extensively tested against 3.x upstream kernels and other
> > > NUMA placement algorithms such as numad (in userland through cpusets)
> > > and schednuma (in kernel too) and was found superior in all cases.
> > >
> > > Most important: not a single benchmark showed a regression yet when
> > > compared to vanilla kernels. Not even on the 2 node systems where the
> > > NUMA effects are less significant.
> > >
> >
> > Ok, I have not run a general regression test and won't get the chance to
> > do so soon but hopefully others will. One thing they might want to watch out
> > for is System CPU time. It's possible that your AutoNUMA benchmark
> > triggers a worst-case but it's worth keeping an eye on because any cost
> > from that has to be offset by gains from better NUMA placements.
>
> Good idea to monitor it indeed.
>

If System CPU time really does go down as this converges then that
should be obvious from monitoring vmstat over time during a test: high
usage early on, dropping as it converges. If that doesn't happen then
the tasks are not converging, the phases change constantly, or
something unexpected is happening that needs to be identified.
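
Something as simple as the sketch below would capture that curve; it
only samples the aggregate system-time jiffies from the first line of
/proc/stat while the benchmark runs, nothing mmtests-specific:

/*
 * Sample the aggregate "system" jiffies from the first line of /proc/stat
 * every few seconds.  For a converging workload the per-interval delta
 * should drop off after the initial migration phase.
 */
#include <stdio.h>
#include <unistd.h>

static long long system_jiffies(void)
{
	long long user, nice, sys;
	FILE *f = fopen("/proc/stat", "r");

	if (!f)
		return -1;
	if (fscanf(f, "cpu %lld %lld %lld", &user, &nice, &sys) != 3)
		sys = -1;
	fclose(f);
	return sys;
}

int main(void)
{
	long long prev = system_jiffies(), cur;

	for (;;) {
		sleep(5);
		cur = system_jiffies();
		printf("system jiffies in last 5s: %lld\n", cur - prev);
		fflush(stdout);
		prev = cur;
	}
	return 0;
}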

> > Is STREAM really a good benchmark in this case? Unless you also ran it in
> > parallel mode, it basically operates against three arrays and is not really
> > NUMA friendly once the total size is greater than a NUMA node. I guess
> > it makes sense to run it just to see whether autonuma breaks it :)
>
> The way this is run is that there is 1 stream instance, then 4, then 8,
> until we max out all the CPUs.
>

Ok. Are they separate STREAM instances or threads running on the same
arrays?

> I think we could run "memhog" instead of "stream" and it'd be the
> same; stream probably just resembles real-life computations better.
>
> The upstream scheduler lacks any notion of NUMA affinity, so at some
> point during the 5 min run a process changes node; it doesn't notice
> that its memory was allocated elsewhere, so it stays on the new node,
> and the memory can't follow the CPU either. From then on it runs much
> slower.
>
> So it's the simplest test of all to get right: all it requires is some
> notion of node affinity.
>

Ok.

> It's also the only workload that the home-node design of schednuma in
> tip.git can get right (versions of schednuma newer than current tip.git
> introduced the CPU-follows-memory design of AutoNUMA, so schednuma will
> have a chance to get more right than just the multi-instance stream
> benchmark).
>
> So it's just a verification that the simple stuff (single-threaded
> process computation) is OK and that the upstream regression vs hard
> NUMA bindings is fixed.
>

Verification of the simple stuff makes sense.

> stream is also one case where we have to perform identically to the
> hard NUMA bindings. No CPU or memory migration must ever happen with
> AutoNUMA in the stream benchmark. AutoNUMA will just monitor it, find
> that it is already in the best place, and leave it alone.
>
> With the autonuma-benchmark it's impossible to reach performance
> identical to the _HARD_BIND case because _HARD_BIND doesn't need to
> do any memory migration (I'm 3 seconds away from hard bindings in a
> 198 sec run though, just the 3 seconds it takes to migrate 3g of ram ;).
>
> >
> > >
> > > == iozone ==
> > >
> > > ALL INIT RE RE RANDOM RANDOM BACKWD RECRE STRIDE F FRE F FRE
> > > FILE TYPE (KB) IOS WRITE WRITE READ READ READ WRITE READ WRITE READ WRITE WRITE READ READ
> > > ====--------------------------------------------------------------------------------------------------------------
> > > noautonuma ALL 2492 1224 1874 2699 3669 3724 2327 2638 4091 3525 1142 1692 2668 3696
> > > autonuma ALL 2531 1221 1886 2732 3757 3760 2380 2650 4192 3599 1150 1731 2712 3825
> > >
> > > AutoNUMA can't help much for I/O loads but you can see there seems to
> > > be a small improvement there too. The important thing for I/O loads is
> > > to verify that there is no regression.
> > >
> >
> > It probably is unreasonable to expect autonuma to handle the case where
> > a file-based workload has not been tuned for NUMA. In too many cases
> > it's going to be read/write based so you're not going to get the
> > statistics you need.
>
> Agreed. Some statistics may still accumulate, which is better than
> nothing, but unless the workload is CPU and memory bound we can't
> expect to see any difference.
>
> This is meant as a verification that we're not introducing regressions
> to I/O bound loads.
>

Ok, that's more or less what I had guessed but nice to know for sure.
Thanks.

--
Mel Gorman
SUSE Labs

2012-10-12 00:42:01

by Andrea Arcangeli

Subject: Re: [PATCH 00/33] AutoNUMA27

On Thu, Oct 11, 2012 at 04:35:03PM +0100, Mel Gorman wrote:
> If System CPU time really does go down as this converges then that
> should be obvious from monitoring vmstat over time during a test: high
> usage early on, dropping as it converges. If that doesn't happen then
> the tasks are not converging, the phases change constantly, or
> something unexpected is happening that needs to be identified.

Yes, all measurable kernel cost should be in the memory copies
(migration and khugepaged, the latter is going to be optimized away).

The migrations must stop after the workload converges. Either
migrations are used to reach convergence or they shouldn't happen in
the first place (not in any measurable amount).
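
One way to verify that on a live run is to watch the THP counters in
/proc/vmstat settle once the workload has converged. A rough sketch
(the counter names are the ~3.6-era ones and may differ on other
kernel versions):

/*
 * Print the per-interval growth of the THP fault/split/collapse counters
 * from /proc/vmstat; once the workload has converged the deltas should
 * stay near zero.  Counter names are as of ~3.6 kernels.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static long long read_counter(const char *name)
{
	char key[64];
	long long val = 0, found = -1;
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f)
		return -1;
	while (fscanf(f, "%63s %lld", key, &val) == 2)
		if (!strcmp(key, name)) {
			found = val;
			break;
		}
	fclose(f);
	return found;
}

int main(void)
{
	const char *names[] = { "thp_fault_alloc", "thp_collapse_alloc", "thp_split" };
	long long prev[3], cur;
	int i;

	for (i = 0; i < 3; i++)
		prev[i] = read_counter(names[i]);

	for (;;) {
		sleep(10);
		for (i = 0; i < 3; i++) {
			cur = read_counter(names[i]);
			printf("%s +%lld  ", names[i], cur - prev[i]);
			prev[i] = cur;
		}
		printf("\n");
		fflush(stdout);
	}
	return 0;
}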

> Ok. Are they separate STREAM instances or threads running on the same
> arrays?

My understanding is that they are separate instances. I think it's a
single-threaded benchmark and you run many copies. It was modified to
run for 5 min (otherwise upstream doesn't have enough time to get it
wrong, as a result of background scheduling jitter).

Thanks!

2012-10-12 14:54:41

by Mel Gorman

Subject: Re: [PATCH 00/33] AutoNUMA27

On Thu, Oct 11, 2012 at 04:35:03PM +0100, Mel Gorman wrote:
> On Thu, Oct 11, 2012 at 04:56:11PM +0200, Andrea Arcangeli wrote:
> > Hi Mel,
> >
> > On Thu, Oct 11, 2012 at 11:19:30AM +0100, Mel Gorman wrote:
> > > As a basic sniff test I added a test to MMtests for the AutoNUMA
> > > Benchmark on a 4-node machine and the following fell out.
> > >
> > > 3.6.0 3.6.0
> > > vanilla autonuma-v33r6
> > > User SMT 82851.82 ( 0.00%) 33084.03 ( 60.07%)
> > > User THREAD_ALLOC 142723.90 ( 0.00%) 47707.38 ( 66.57%)
> > > System SMT 396.68 ( 0.00%) 621.46 (-56.67%)
> > > System THREAD_ALLOC 675.22 ( 0.00%) 836.96 (-23.95%)
> > > Elapsed SMT 1987.08 ( 0.00%) 828.57 ( 58.30%)
> > > Elapsed THREAD_ALLOC 3222.99 ( 0.00%) 1101.31 ( 65.83%)
> > > CPU SMT 4189.00 ( 0.00%) 4067.00 ( 2.91%)
> > > CPU THREAD_ALLOC 4449.00 ( 0.00%) 4407.00 ( 0.94%)
> >
> > Thanks a lot for the help and for looking into it!
> >
> > Just curious, why are you running only numa02_SMT and
> > numa01_THREAD_ALLOC? And not numa01 and numa02? (the standard versions
> > without a suffix)
> >
>
> Bug in the testing script on my end. Each of them is run separately and it

Ok, MMTests 0.06 (released a few minutes ago) patches autonumabench so
it can run the tests individually. I know start_bench.sh can run all the
tests itself but in time I'll want mmtests to collect additional stats
that can also be applied to other benchmarks consistently. The revised
results look like this

AUTONUMA BENCH
3.6.0 3.6.0
vanilla autonuma-v33r6
User NUMA01 66395.58 ( 0.00%) 32000.83 ( 51.80%)
User NUMA01_THEADLOCAL 55952.48 ( 0.00%) 16950.48 ( 69.71%)
User NUMA02 6988.51 ( 0.00%) 2150.56 ( 69.23%)
User NUMA02_SMT 2914.25 ( 0.00%) 1013.11 ( 65.24%)
System NUMA01 319.12 ( 0.00%) 483.60 (-51.54%)
System NUMA01_THEADLOCAL 40.60 ( 0.00%) 184.39 (-354.16%)
System NUMA02 1.62 ( 0.00%) 23.92 (-1376.54%)
System NUMA02_SMT 0.90 ( 0.00%) 16.20 (-1700.00%)
Elapsed NUMA01 1519.53 ( 0.00%) 757.40 ( 50.16%)
Elapsed NUMA01_THEADLOCAL 1269.49 ( 0.00%) 398.63 ( 68.60%)
Elapsed NUMA02 181.12 ( 0.00%) 57.09 ( 68.48%)
Elapsed NUMA02_SMT 164.18 ( 0.00%) 53.16 ( 67.62%)
CPU NUMA01 4390.00 ( 0.00%) 4288.00 ( 2.32%)
CPU NUMA01_THEADLOCAL 4410.00 ( 0.00%) 4298.00 ( 2.54%)
CPU NUMA02 3859.00 ( 0.00%) 3808.00 ( 1.32%)
CPU NUMA02_SMT 1775.00 ( 0.00%) 1935.00 ( -9.01%)

MMTests Statistics: duration
3.6.0 3.6.0
vanilla autonuma-v33r6
User 132257.44 52121.30
System 362.79 708.62
Elapsed 3142.66 1275.72

MMTests Statistics: vmstat
3.6.0 3.6.0
vanilla autonuma-v33r6
THP fault alloc 17660 19927
THP collapse alloc 10 12399
THP splits 4 12637

The System CPU usage is high but is compensated for by reduced User
and Elapsed times in this particular case.

--
Mel Gorman
SUSE Labs