2006-08-27 09:25:58

by Jeremy Fitzhardinge

Subject: [PATCH RFC 0/6] Implement per-processor data areas for i386.

This patch implements per-processor data areas by using %gs as the
base segment of the per-processor memory. This has two principal
advantages:

- It allows very simple direct access to per-processor data by
using an effective address of the form %gs:offset, where offset is the
offset into struct i386_pda. These sequences are faster and smaller
than the current mechanism using current_thread_info().

- It also allows per-CPU data to be allocated as each CPU is brought
up, rather than statically allocating it based on the maximum number
of CPUs which could be brought up.

I haven't measured performance yet, but when using the PDA for "current"
and "smp_processor_id", I see a 5715 byte reduction in .text segment
size for my kernel.

Unfortunately, these patches don't actually work yet. I'm not sure why;
I'm hoping review will turn something up.


Some background for people unfamiliar with x86 segmentation:

This uses x86 segmentation in a way similar to NPTL's implementation
of Thread-Local Storage. It relies on the fact that each CPU has its
own Global Descriptor Table (GDT), which is basically an array of
base-length pairs (with some extra stuff). When a segment register is
loaded with a selector (roughly, an index into the GDT), and you use
that segment register for a memory access, the base from the selected
descriptor is added to the address, and the resulting address is used.

In other words, if you imagine the GDT containing an entry:
Index Base
123: 0xc0211000 (allocated PDA)
and you load %gs with this selector (via a general register, since
segment registers can't be loaded with an immediate):
mov $123, %eax
mov %eax, %gs
and then use %gs later on:
mov %gs:4, %eax
this has the effect of
mov 0xc0211004, %eax
and because the GDT is per-CPU, the base (= 0xc0211000 = memory
allocated for this CPU's PDA) can be a CPU-specific value while leaving
everything else constant.

This means that something like "current" or "smp_processor_id()" can
collapse to a single instruction:
mov %gs:PDA_current, %reg


TODO:
- Make it work. It works UP on a test QEMU machine, but it doesn't
yet work on real hardware, or SMP (though not working SMP on QEMU is
more likely to be a QEMU problem). Not sure what the problem is yet;
I'm hoping review will reveal something.
- Measure performance impact. The patch adds a segment register
save/restore on entry/exit to the kernel. This expense should be
offset by savings in using the PDA while in the kernel, but I haven't
measured this yet. Space savings are already appealing though.
- Modify more things to use the PDA. The more that uses it, the more
the cost of the %gs save/restore is amortized. smp_processor_id and
current are the obvious first choices, which are implemented in this
series.
- Make it a config option? UP systems don't need to do any of this,
other than having a single pre-allocated PDA. Unfortunately, it gets
a bit messy to do this given the changes needed in handling %gs.
--


2006-08-27 09:48:22

by Arjan van de Ven

Subject: Re: [PATCH RFC 0/6] Implement per-processor data areas for i386.

> - Measure performance impact. The patch adds a segment register
> save/restore on entry/exit to the kernel. This expense should be
> offset by savings in using the PDA while in the kernel, but I haven't
> measured this yet. Space savings are already appealing though.

this will be interesting; x86-64 has a nice instruction to help with
this; 32 bit does not... so far conventional wisdom has been that
without the instruction it's not going to be worth it.

When you're benchmarking this please use multiple CPU generations from
different vendors; I suspect this is one of those things that vary
greatly between models

> - Make it a config option? UP systems don't need to do any of this,
> other than having a single pre-allocated PDA. Unfortunately, it gets
> a bit messy to do this given the changes needed in handling %gs.


A config option for this is a mistake imo. Not every patch is worth a
config option! Either it's good or it's not: if it's good it should be
there always, and if it's not, it shouldn't be there at all. Something
this fundamental to the core doesn't really have a "but it's optional"
argument going for it, unlike individual drivers or subsystems...

--
if you want to mail me at work (you don't), use arjan (at) linux.intel.com

2006-08-27 16:02:11

by Andi Kleen

Subject: Re: [PATCH RFC 0/6] Implement per-processor data areas for i386.


Very cool.

> - Make it work. It works UP on a test QEMU machine, but it doesn't
> yet work on real hardware, or SMP (though not working SMP on QEMU is
> more likely to be a QEMU problem). Not sure what the problem is yet;
> I'm hoping review will reveal something.

I bet qemu doesn't have a real descriptor cache, unlike real CPUs.
So likely it is some disconnect between changing the backing GDT
and referencing the register. Reload %gs more aggressively?

Comparing with SimNow! (which should behave more like a real CPU)
might be also interesting.

> - Measure performance impact. The patch adds a segment register
> save/restore on entry/exit to the kernel. This expense should be
> offset by savings in using the PDA while in the kernel, but I haven't
> measured this yet. Space savings are already appealing though.
> - Modify more things to use the PDA. The more that uses it, the more
> the cost of the %gs save/restore is amortized. smp_processor_id and
> current are the obvious first choices, which are implemented in this
> series.

per cpu data would be the prime candidate. It is pretty simple.

> - Make it a config option? UP systems don't need to do any of this,
> other than having a single pre-allocated PDA. Unfortunately, it gets
> a bit messy to do this given the changes needed in handling %gs.

Please don't.

(weak point:)

- The stack protector code might work one day on i386 too.

-Andi

2006-08-27 16:41:27

by Jeremy Fitzhardinge

Subject: Re: [PATCH RFC 0/6] Implement per-processor data areas for i386.

Andi Kleen wrote:
> I bet qemu doesn't have a real descriptor cache unlike real CPUs.
> So likely it is some disconnect between changing the backing GDT
> and referencing the register. Reload %gs more aggressively?
>

The GDT only gets touched once in cpu_init(), and %gs is reloaded on
every kernel entry, so I don't think that's it. It seems to have
interrupt issues with SMP.

And either way, it still doesn't work on real hardware...

> Comparing with SimNow! (which should behave more like a real CPU)
> might be also interesting.
>

Yeah, I'll have to try that out.

>> - Measure performance impact. The patch adds a segment register
>> save/restore on entry/exit to the kernel. This expense should be
>> offset by savings in using the PDA while in the kernel, but I haven't
>> measured this yet. Space savings are already appealing though.
>> - Modify more things to use the PDA. The more that uses it, the more
>> the cost of the %gs save/restore is amortized. smp_processor_id and
>> current are the obvious first choices, which are implemented in this
>> series.
>>
>
> per cpu data would be the prime candidate. It is pretty simple.
>

Well, it has to be arch-specific per-cpu data, since the PDA is arch
specific. But there should be various pieces of interrupt state that
adapt well to it.

>> - Make it a config option? UP systems don't need to do any of this,
>> other than having a single pre-allocated PDA. Unfortunately, it gets
>> a bit messy to do this given the changes needed in handling %gs.
>>
>
> Please don't.
>

Yeah, that wasn't really a serious thought...

J

2006-08-27 16:46:37

by Jeremy Fitzhardinge

Subject: Re: [PATCH RFC 0/6] Implement per-processor data areas for i386.

Arjan van de Ven wrote:
> this will be interesting; x86-64 has a nice instruction to help with
> this; 32 bit does not... so far conventional wisdom has been that
> without the instruction it's not going to be worth it.
>

Hm, swapgs may be quick, but it isn't very easy to use since it doesn't
stack, and so requires careful handling for recursive kernel entries,
which involves extra tests and conditional jumps. I tried doing
something similar with my earlier patches, but it got all too messy.
Stacking %gs like the other registers works out pretty cleanly.

> When you're benchmarking this please use multiple CPU generations from
> different vendors; I suspect this is one of those things that vary
> greatly between models
>

Hm, it seems to me that unless the existing %ds/%es register
save/restores are a significant part of the cost of going through
entry.S, adding %gs to the set shouldn't make too much difference.
And I'm not sure about the relative cost of using a %gs override
vs. the normal current_thread_info() masking, but I'm assuming they're
at worst equal, with the %gs override having a code-size advantage.

But yes, it definitely needs measurement.

J

2006-08-27 17:21:58

by Andreas Mohr

Subject: Re: [PATCH RFC 0/6] Implement per-processor data areas for i386.

Hi,

On Sun, Aug 27, 2006 at 01:44:17AM -0700, Jeremy Fitzhardinge wrote:
> This patch implements per-processor data areas by using %gs as the
> base segment of the per-processor memory. This has two principal
> advantages:
>
> - It allows very simple direct access to per-processor data by
> using an effective address of the form %gs:offset, where offset is the
> offset into struct i386_pda. These sequences are faster and smaller
> than the current mechanism using current_thread_info().

Yess!!
Something like that had to be done eventually about the inefficient
current_thread_info() mechanism, but I wasn't sure what exactly.

> I haven't measured performance yet, but when using the PDA for "current"
> and "smp_processor_id", I see a 5715 byte reduction in .text segment
> size for my kernel.

This is interesting, since even by doing a non-elegant
current->... --> struct task_struct *tsk = current;
replacement for excessive uses of current, I was able to gain almost 10kB
within a single file already!
I guess it's due to having tried that on an older installation with gcc 3.2,
which probably does less efficient opcode merging of current_thread_info()
requests compared to a current gcc version.
IOW, .text segment reduction could be quite a bit higher for older gccs.

> This uses the x86 segmentation stuff in a way similar to NPTL's way of
> implementing Thread-Local Storage. It relies on the fact that each CPU
> has its own Global Descriptor Table (GDT), which is basically an array
> of base-length pairs (with some extra stuff). When a segment register
> is loaded with a descriptor (approximately, an index in the GDT), and
> you use that segment register for memory access, the address has the
> base added to it, and the resulting address is used.

Not a problem for more daring user-space apps (e.g. Wine), I hope?

Andreas Mohr

--
No programming skills!? Why not help translate many Linux applications!
https://launchpad.ubuntu.com/rosetta
(or alternatively buy nicely packaged Linux distros/OSS software to help
support Linux developers creating shiny new things for you?)

2006-08-27 17:34:56

by Jeremy Fitzhardinge

Subject: Re: [PATCH RFC 0/6] Implement per-processor data areas for i386.

Andreas Mohr wrote:
> This is interesting, since even by doing a non-elegant
> current->... --> struct task_struct *tsk = current;
> replacement for excessive uses of current, I was able to gain almost 10kB
> within a single file already!
> I guess it's due to having tried that on an older installation with gcc 3.2,
> which probably does less efficient opcode merging of current_thread_info()
> requests compared to a current gcc version.
> IOW, .text segment reduction could be quite a bit higher for older gcc:s.
>

That doesn't sound likely. current_thread_info() is only about 2
instructions. Are you looking at the .text segment size, or the actual
file size? The latter can be very misleading in the presence of debug info.

>> This uses the x86 segmentation stuff in a way similar to NPTL's way of
>> implementing Thread-Local Storage. It relies on the fact that each CPU
>> has its own Global Descriptor Table (GDT), which is basically an array
>> of base-length pairs (with some extra stuff). When a segment register
>> is loaded with a descriptor (approximately, an index in the GDT), and
>> you use that segment register for memory access, the address has the
>> base added to it, and the resulting address is used.
>>
>
> Not a problem for more daring user-space apps (e.g. Wine), I hope?
>

No. The LDT is still available for userspace use.

J

2006-08-27 17:45:09

by Arjan van de Ven

Subject: Re: [PATCH RFC 0/6] Implement per-processor data areas for i386.

On Sun, 2006-08-27 at 09:46 -0700, Jeremy Fitzhardinge wrote:
> Arjan van de Ven wrote:
> > this will be interesting; x86-64 has a nice instruction to help with
> > this; 32 bit does not... so far conventional wisdom has been that
> > without the instruction it's not going to be worth it.
> >
>
> Hm, swapgs may be quick, but it isn't very easy to use since it doesn't
> stack, and so requires careful handling for recursive kernel entries,
> which involves extra tests and conditional jumps. I tried doing
> something similar with my earlier patches, but it got all too messy.
> Stacking %gs like the other registers turns out pretty cleanly.
>
> > When you're benchmarking this please use multiple CPU generations from
> > different vendors; I suspect this is one of those things that vary
> > greatly between models
> >
>
> Hm, it seems to me that unless the existing %ds/%es register
> save/restores are a significant part of the existing cost of going
> through entry.S,

iirc the %fs one is at least. But it has been a while since I've looked
at this part of the kernel via performance traces.

> adding %gs to the set shouldn't make too much
> difference. And I'm not sure about the relative cost of using a %gs
> override vs. the normal current_thread_info() masking, but I'm assuming
> they're at worst equal, with the %gs override having a code-size advantage.

your worst case scenario would be if the segment override made it
a "complex" instruction, so not parallel decodable. That'd mean it
would basically cost you 6 or 7 instruction slots that can't be
filled... while an "and" and such at least runs nicely in parallel
with other stuff. I don't know which processors, if any, actually do
this, but it's rare/new enough that I'd not be surprised if there are
some.



--
if you want to mail me at work (you don't), use arjan (at) linux.intel.com

2006-08-27 18:05:12

by Andi Kleen

Subject: Re: [PATCH RFC 0/6] Implement per-processor data areas for i386.


> Something like that had to be done eventually about the inefficient
> current_thread_info() mechanism,

Inefficient? It's two fast instructions. I wouldn't call that inefficient.

> I guess it's due to having tried that on an older installation with gcc 3.2,
> which probably does less efficient opcode merging of current_thread_info()
> requests compared to a current gcc version.

gcc normally doesn't merge inline assembly at all.

-Andi

2006-08-27 18:08:15

by Andi Kleen

Subject: Re: [PATCH RFC 0/6] Implement per-processor data areas for i386.


> your worst case scenario would be if the segment override would make it
> a "complex" instruction, so not parallel decodable. That'd mean it would
> basically cost you 6 or 7 instruction slots that can't be filled...
> while an and and such at least run nicely in parallel with other stuff.
> I don't know which if any processors actually do this, but it's rare/new
> enough that I'd not be surprised if there are some.

On AMD K7/K8 a segment register prefix is a single cycle penalty.

I couldn't find anything in the Intel optimization manuals on it, but I assume
it's also not dramatic.

-Andi

2006-08-27 18:23:19

by Andreas Mohr

Subject: Re: [PATCH RFC 0/6] Implement per-processor data areas for i386.

Hi,

On Sun, Aug 27, 2006 at 10:34:46AM -0700, Jeremy Fitzhardinge wrote:
> Andreas Mohr wrote:
> >This is interesting, since even by doing a non-elegant
> >current->... --> struct task_struct *tsk = current;
> >replacement for excessive uses of current, I was able to gain almost 10kB
> >within a single file already!
> >I guess it's due to having tried that on an older installation with gcc
> >3.2,
> >which probably does less efficient opcode merging of current_thread_info()
> >requests compared to a current gcc version.
> >IOW, .text segment reduction could be quite a bit higher for older gcc:s.
> >
>
> That doesn't sound likely. current_thread_info() is only about 2
> instructions. Are you looking at the .text segment size, or the actual
> file size? The latter can be very misleading in the presence of debug info.

Got me. File size only. And that was 10kB of around 170kB, BTW.

Andreas Mohr

2006-08-27 18:27:22

by Jeremy Fitzhardinge

Subject: Re: [PATCH RFC 0/6] Implement per-processor data areas for i386.

Andi Kleen wrote:
> On AMD K7/K8 a segment register prefix is a single cycle penalty.
>
> I couldn't find anything in the Intel optimization manuals on it, but I assume
> it's also not dramatic.
>

All I could find was:

* avoid multiple prefixes (which was the least important guideline
in instruction selection)
* avoid using multiple segment registers (the Pentium M only has one
level of segment register renaming)
* avoid prefixes which take the instruction length over 7 bytes

None of these apply to the use of %gs to access the PDA.

Most of the discussion about prefixes is about avoiding the 0x66
16-bit operand-size prefix.

J

2006-08-27 18:27:11

by Andreas Mohr

Subject: Re: [PATCH RFC 0/6] Implement per-processor data areas for i386.

Hi,

On Sun, Aug 27, 2006 at 08:04:38PM +0200, Andi Kleen wrote:
>
> > Something like that had to be done eventually about the inefficient
> > current_thread_info() mechanism,
>
> Inefficient? It's two fast instructions. I won't call that inefficient.

And that AGI stall?

> > I guess it's due to having tried that on an older installation with gcc 3.2,
> > which probably does less efficient opcode merging of current_thread_info()
> > requests compared to a current gcc version.
>
> gcc normally doesn't merge inline assembly at all.

Depends on use of volatile, right?

OK, so probably there was no merging of separate requests,
but opcode intermingling could have played a role.

Andreas Mohr

2006-08-27 18:36:12

by Andi Kleen

Subject: Re: [PATCH RFC 0/6] Implement per-processor data areas for i386.

On Sunday 27 August 2006 20:27, Andreas Mohr wrote:
> Hi,
>
> On Sun, Aug 27, 2006 at 08:04:38PM +0200, Andi Kleen wrote:
> >
> > > Something like that had to be done eventually about the inefficient
> > > current_thread_info() mechanism,
> >
> > Inefficient? It's two fast instructions. I won't call that inefficient.
>
> And that AGI stall?

What AGI stall?

[btw AGI stall is an outdated concept on modern x86 CPUs]

> > > I guess it's due to having tried that on an older installation with gcc 3.2,
> > > which probably does less efficient opcode merging of current_thread_info()
> > > requests compared to a current gcc version.
> >
> > gcc normally doesn't merge inline assembly at all.
>
> Depends on use of volatile, right?

No. It can only merge statements it knows something about, and it
doesn't know anything about inline assembly.

> OK, so probably there was no merging of separate requests,
> but opcode intermingling could have played a role.

It seems to make some difference if it's able to move asm statements
around and if they don't have memory clobbers. Memory clobbers really
seem to cause much worse code in the whole function.

But current_thread_info() didn't have that.

-Andi

2006-08-28 09:09:43

by Chuck Ebbert

Subject: Re: [PATCH RFC 0/6] Implement per-processor data areas for i386.

In-Reply-To: <[email protected]>

On Sun, 27 Aug 2006 19:21:55 +0200, Andreas Mohr wrote:

> Something like that had to be done eventually about the inefficient
> current_thread_info() mechanism, but I wasn't sure what exactly.

In 2.6.18 it's done in C and the optimizer does a pretty good job
with it in recent compilers.

--
Chuck