2002-08-05 23:03:05

by Duc Vianney

Subject: IPC lock patch performance improvement

I ran the LMbench Pipe and IPC latency test bucket against the IPC lock
patch from Mingming Cao and found that the patch improves the performance
of those functions by 1% to 9%. See the attached data. The kernel under
test is 2.5.29, an SMP kernel running on a 4-way 500 MHz machine. The
data for 2.5.29s4-ipc represents the average of three runs.

                                                                  Percent
                                      2.5.29s4  2.5.29s4-ipc  Improvement
Pipe latency                             12.51         11.43           9%
AF_Unix sock stream latency              21.61         19.82           8%
UDP latency using localhost              36.28         35.12           3%
TCP latency using localhost              56.90         54.89           4%
RPC/tcp latency using localhost         123.30        121.91           1%
RPC/udp latency using localhost          89.78         88.70           1%
TCP/IP connection cost to localhost     192.74        187.76           3%

Note: Latency is in microseconds.
Note: 2.5.29s4 is the base 2.5.29 SMP kernel running on a 4-way;
2.5.29s4-ipc is the base 2.5.29 SMP kernel built with the IPC lock patch.
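
For context, the LMbench pipe latency test is essentially a token round
trip between two processes over a pair of pipes. A minimal sketch of that
style of measurement (illustrative only; the names and iteration count
are mine, not LMbench's):

/*
 * Minimal pipe round-trip latency sketch in the spirit of LMbench's
 * lat_pipe. Illustrative only; this is not the LMbench source.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/time.h>

int main(void)
{
	int p1[2], p2[2], i, iters = 100000;
	char c = 'x';
	struct timeval start, end;

	if (pipe(p1) < 0 || pipe(p2) < 0) {
		perror("pipe");
		exit(1);
	}
	if (fork() == 0) {
		/* Child: echo each byte back to the parent. */
		close(p1[1]);
		close(p2[0]);
		while (read(p1[0], &c, 1) == 1)
			write(p2[1], &c, 1);
		exit(0);
	}
	close(p1[0]);
	close(p2[1]);
	gettimeofday(&start, NULL);
	for (i = 0; i < iters; i++) {
		/* One round trip: parent -> child -> parent. */
		write(p1[1], &c, 1);
		read(p2[0], &c, 1);
	}
	gettimeofday(&end, NULL);
	printf("pipe latency: %.2f usec per round trip\n",
	       ((end.tv_sec - start.tv_sec) * 1e6 +
	        (end.tv_usec - start.tv_usec)) / iters);
	return 0;
}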

Duc. [email protected]


2002-08-06 13:43:09

by Hugh Dickins

Subject: Re: IPC lock patch performance improvement

On Mon, 5 Aug 2002, Duc Vianney wrote:
> I ran the LMbench Pipe and IPC latency test bucket against the IPC lock
> patch from Mingming Cao and found that the patch improves the performance
> of those functions by 1% to 9%. See the attached data. The kernel under
> test is 2.5.29, an SMP kernel running on a 4-way 500 MHz machine. The
> data for 2.5.29s4-ipc represents the average of three runs.
>
>                                                                   Percent
>                                       2.5.29s4  2.5.29s4-ipc  Improvement
> Pipe latency                             12.51         11.43           9%
> AF_Unix sock stream latency              21.61         19.82           8%
> UDP latency using localhost              36.28         35.12           3%
> TCP latency using localhost              56.90         54.89           4%
> RPC/tcp latency using localhost         123.30        121.91           1%
> RPC/udp latency using localhost          89.78         88.70           1%
> TCP/IP connection cost to localhost     192.74        187.76           3%
>
> Note: Latency is in microseconds.
> Note: 2.5.29s4 is the base 2.5.29 SMP kernel running on a 4-way;
> 2.5.29s4-ipc is the base 2.5.29 SMP kernel built with the IPC lock patch.

Please show me I'm wrong, but so far as I can see (from source and
breakpoints), LMbench never touches the SysV IPC code, which is the only
code affected by Mingming's proposed IPC locking changes. I believe
LMbench tests interprocess communication via pipes and sockets,
not via the SysV IPC msg, sem, and shm.
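
To make the distinction concrete: the codepaths the patch affects are
only reached through the SysV IPC system calls, e.g. something like the
following (an illustrative sketch, not code from LMbench or the patch):

/*
 * A minimal SysV IPC semaphore round trip. semget()/semop()/semctl()
 * go through ipc/sem.c and its locking -- the code Mingming's patch
 * changes -- whereas LMbench's pipe and socket tests never get here.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/sem.h>

int main(void)
{
	struct sembuf up   = { 0,  1, 0 };
	struct sembuf down = { 0, -1, 0 };
	int id = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);

	if (id < 0) {
		perror("semget");
		exit(1);
	}
	if (semop(id, &up, 1) < 0 || semop(id, &down, 1) < 0)
		perror("semop");
	semctl(id, 0, IPC_RMID);	/* clean up the semaphore set */
	return 0;
}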

If that's right, then your improvement is magical; but we can
hope for even better when the appropriate codepaths are tested.

Hugh

2002-08-08 17:04:50

by Duc Vianney

Subject: Re: IPC lock patch performance improvement



On Tue, 6 Aug 2002, Hugh Dickins wrote:
> Please show me I'm wrong, but so far as I can see (from source and
> breakpoints), LMbench never touches the SysV IPC code, which is the only
> code affected by Mingming's proposed IPC locking changes. I believe
> LMbench tests interprocess communication via pipes and sockets,
> not via the SysV IPC msg, sem, and shm.

Your observation is correct.

LMbench tests interprocess communication using pipes and sockets; the
SysV IPC code changed by Mingming Cao's lock patch is never exercised
by LMbench.

The reason for the performance gain when applying the patch is not yet
clear and is under investigation. I will share my analysis once it is
complete. I do realize that there is variance in the data generated by
LMbench.
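
One way to quantify that variance is to compare the run-to-run spread
against the size of the improvement being claimed. A small sketch (the
three values in runs[] are placeholders, not measured data):

/*
 * Compare run-to-run spread against the measured improvement.
 * The values in runs[] are placeholders, not measured data.
 */
#include <stdio.h>
#include <math.h>

int main(void)
{
	double runs[] = { 12.48, 12.51, 12.54 };	/* usec, placeholders */
	int i, n = sizeof(runs) / sizeof(runs[0]);
	double mean = 0.0, var = 0.0;

	for (i = 0; i < n; i++)
		mean += runs[i];
	mean /= n;
	for (i = 0; i < n; i++)
		var += (runs[i] - mean) * (runs[i] - mean);
	var /= n - 1;			/* sample variance */
	printf("mean %.2f usec, stddev %.3f usec (%.2f%% of mean)\n",
	       mean, sqrt(var), 100.0 * sqrt(var) / mean);
	return 0;
}

If the relative spread comes out comparable to the 1% entries in the
table, those differences may well be within the noise.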

Duc.