2009-10-01 23:03:18

by Christoph Lameter

Subject: [this_cpu_xx V4 02/20] this_cpu: X86 optimized this_cpu operations

The existing x86 percpu ops can be used to implement the this_cpu variants,
which also work on dynamically allocated percpu data. The difference is that
we no longer pass a reference to a statically declared percpu variable;
instead, the operations take the percpu variable itself, whether statically
or dynamically allocated.
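
For illustration, a minimal usage sketch against a dynamically allocated
percpu area (the struct, field and function names below are hypothetical,
not part of this series):

#include <linux/percpu.h>

/* Hypothetical example structure. */
struct my_stats {
	unsigned long events;
};

static struct my_stats __percpu *stats;	/* dynamically allocated percpu data */

static int __init my_stats_init(void)
{
	stats = alloc_percpu(struct my_stats);
	if (!stats)
		return -ENOMEM;
	return 0;
}

static void my_stats_event(void)
{
	/*
	 * Increment this CPU's counter. With the operations added here this
	 * becomes a single segment-prefixed add; no preempt_disable()/
	 * get_cpu() pair is needed around it.
	 */
	this_cpu_add(stats->events, 1);
}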

The preempt, non-preempt and irqsafe variants all generate the same code:
on x86 the required per-cpu atomicity can always be achieved with a single
RMW instruction using a segment override.
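
A rough sketch of what the underlying percpu_to_op() boils down to for the
4-byte add on x86-64 (illustration only, not the kernel's exact macro; the
real code dispatches on operand size and uses %fs instead of %gs on 32-bit):

/*
 * Simplified illustration: the whole read-modify-write is one instruction
 * addressed through the per-cpu segment, so it cannot be split by
 * preemption or an interrupt on this CPU. That is why the preempt,
 * non-preempt and irqsafe variants can all map to the same code.
 */
#define my_cpu_add_4(pcp, val)				\
	asm("addl %1, %%gs:%0"				\
	    : "+m" (pcp)				\
	    : "ri" ((int)(val)))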

64-bit this_cpu operations are not supported on 32-bit x86.

Signed-off-by: Christoph Lameter <[email protected]>

---
arch/x86/include/asm/percpu.h | 78 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 78 insertions(+)

Index: linux-2.6/arch/x86/include/asm/percpu.h
===================================================================
--- linux-2.6.orig/arch/x86/include/asm/percpu.h 2009-10-01 09:08:37.000000000 -0500
+++ linux-2.6/arch/x86/include/asm/percpu.h 2009-10-01 09:29:43.000000000 -0500
@@ -153,6 +153,84 @@ do { \
#define percpu_or(var, val) percpu_to_op("or", per_cpu__##var, val)
#define percpu_xor(var, val) percpu_to_op("xor", per_cpu__##var, val)

+#define __this_cpu_read_1(pcp) percpu_from_op("mov", (pcp), "m"(pcp))
+#define __this_cpu_read_2(pcp) percpu_from_op("mov", (pcp), "m"(pcp))
+#define __this_cpu_read_4(pcp) percpu_from_op("mov", (pcp), "m"(pcp))
+
+#define __this_cpu_write_1(pcp, val) percpu_to_op("mov", (pcp), val)
+#define __this_cpu_write_2(pcp, val) percpu_to_op("mov", (pcp), val)
+#define __this_cpu_write_4(pcp, val) percpu_to_op("mov", (pcp), val)
+#define __this_cpu_add_1(pcp, val) percpu_to_op("add", (pcp), val)
+#define __this_cpu_add_2(pcp, val) percpu_to_op("add", (pcp), val)
+#define __this_cpu_add_4(pcp, val) percpu_to_op("add", (pcp), val)
+#define __this_cpu_and_1(pcp, val) percpu_to_op("and", (pcp), val)
+#define __this_cpu_and_2(pcp, val) percpu_to_op("and", (pcp), val)
+#define __this_cpu_and_4(pcp, val) percpu_to_op("and", (pcp), val)
+#define __this_cpu_or_1(pcp, val) percpu_to_op("or", (pcp), val)
+#define __this_cpu_or_2(pcp, val) percpu_to_op("or", (pcp), val)
+#define __this_cpu_or_4(pcp, val) percpu_to_op("or", (pcp), val)
+#define __this_cpu_xor_1(pcp, val) percpu_to_op("xor", (pcp), val)
+#define __this_cpu_xor_2(pcp, val) percpu_to_op("xor", (pcp), val)
+#define __this_cpu_xor_4(pcp, val) percpu_to_op("xor", (pcp), val)
+
+#define this_cpu_read_1(pcp) percpu_from_op("mov", (pcp), "m"(pcp))
+#define this_cpu_read_2(pcp) percpu_from_op("mov", (pcp), "m"(pcp))
+#define this_cpu_read_4(pcp) percpu_from_op("mov", (pcp), "m"(pcp))
+#define this_cpu_write_1(pcp, val) percpu_to_op("mov", (pcp), val)
+#define this_cpu_write_2(pcp, val) percpu_to_op("mov", (pcp), val)
+#define this_cpu_write_4(pcp, val) percpu_to_op("mov", (pcp), val)
+#define this_cpu_add_1(pcp, val) percpu_to_op("add", (pcp), val)
+#define this_cpu_add_2(pcp, val) percpu_to_op("add", (pcp), val)
+#define this_cpu_add_4(pcp, val) percpu_to_op("add", (pcp), val)
+#define this_cpu_and_1(pcp, val) percpu_to_op("and", (pcp), val)
+#define this_cpu_and_2(pcp, val) percpu_to_op("and", (pcp), val)
+#define this_cpu_and_4(pcp, val) percpu_to_op("and", (pcp), val)
+#define this_cpu_or_1(pcp, val) percpu_to_op("or", (pcp), val)
+#define this_cpu_or_2(pcp, val) percpu_to_op("or", (pcp), val)
+#define this_cpu_or_4(pcp, val) percpu_to_op("or", (pcp), val)
+#define this_cpu_xor_1(pcp, val) percpu_to_op("xor", (pcp), val)
+#define this_cpu_xor_2(pcp, val) percpu_to_op("xor", (pcp), val)
+#define this_cpu_xor_4(pcp, val) percpu_to_op("xor", (pcp), val)
+
+#define irqsafe_cpu_add_1(pcp, val) percpu_to_op("add", (pcp), val)
+#define irqsafe_cpu_add_2(pcp, val) percpu_to_op("add", (pcp), val)
+#define irqsafe_cpu_add_4(pcp, val) percpu_to_op("add", (pcp), val)
+#define irqsafe_cpu_and_1(pcp, val) percpu_to_op("and", (pcp), val)
+#define irqsafe_cpu_and_2(pcp, val) percpu_to_op("and", (pcp), val)
+#define irqsafe_cpu_and_4(pcp, val) percpu_to_op("and", (pcp), val)
+#define irqsafe_cpu_or_1(pcp, val) percpu_to_op("or", (pcp), val)
+#define irqsafe_cpu_or_2(pcp, val) percpu_to_op("or", (pcp), val)
+#define irqsafe_cpu_or_4(pcp, val) percpu_to_op("or", (pcp), val)
+#define irqsafe_cpu_xor_1(pcp, val) percpu_to_op("xor", (pcp), val)
+#define irqsafe_cpu_xor_2(pcp, val) percpu_to_op("xor", (pcp), val)
+#define irqsafe_cpu_xor_4(pcp, val) percpu_to_op("xor", (pcp), val)
+
+/*
+ * Per cpu atomic 64 bit operations are only available under 64 bit.
+ * 32 bit must fall back to generic operations.
+ */
+#ifdef CONFIG_X86_64
+#define __this_cpu_read_8(pcp) percpu_from_op("mov", (pcp), "m"(pcp))
+#define __this_cpu_write_8(pcp, val) percpu_to_op("mov", (pcp), val)
+#define __this_cpu_add_8(pcp, val) percpu_to_op("add", (pcp), val)
+#define __this_cpu_and_8(pcp, val) percpu_to_op("and", (pcp), val)
+#define __this_cpu_or_8(pcp, val) percpu_to_op("or", (pcp), val)
+#define __this_cpu_xor_8(pcp, val) percpu_to_op("xor", (pcp), val)
+
+#define this_cpu_read_8(pcp) percpu_from_op("mov", (pcp), "m"(pcp))
+#define this_cpu_write_8(pcp, val) percpu_to_op("mov", (pcp), val)
+#define this_cpu_add_8(pcp, val) percpu_to_op("add", (pcp), val)
+#define this_cpu_and_8(pcp, val) percpu_to_op("and", (pcp), val)
+#define this_cpu_or_8(pcp, val) percpu_to_op("or", (pcp), val)
+#define this_cpu_xor_8(pcp, val) percpu_to_op("xor", (pcp), val)
+
+#define irqsafe_cpu_add_8(pcp, val) percpu_to_op("add", (pcp), val)
+#define irqsafe_cpu_and_8(pcp, val) percpu_to_op("and", (pcp), val)
+#define irqsafe_cpu_or_8(pcp, val) percpu_to_op("or", (pcp), val)
+#define irqsafe_cpu_xor_8(pcp, val) percpu_to_op("xor", (pcp), val)
+
+#endif
+
/* This is not atomic against other CPUs -- CPU preemption needs to be off */
#define x86_test_and_clear_bit_percpu(bit, var) \
({ \

--


2009-10-02 09:19:01

by Tejun Heo

Subject: Re: [this_cpu_xx V4 02/20] this_cpu: X86 optimized this_cpu operations

[email protected] wrote:
> The existing x86 percpu ops can be used to implement the this_cpu
> variants, which also work on dynamically allocated percpu data. The
> difference is that we no longer pass a reference to a statically
> declared percpu variable; instead, the operations take the percpu
> variable itself, whether statically or dynamically allocated.
>
> The preempt, non-preempt and irqsafe variants all generate the same
> code: on x86 the required per-cpu atomicity can always be achieved
> with a single RMW instruction using a segment override.
>
> 64-bit this_cpu operations are not supported on 32-bit x86.
>
> Signed-off-by: Christoph Lameter <[email protected]>

Acked-by: Tejun Heo <[email protected]>

--
tejun

2009-10-02 09:59:28

by Ingo Molnar

Subject: Re: [this_cpu_xx V4 02/20] this_cpu: X86 optimized this_cpu operations


* [email protected] <[email protected]> wrote:

> The existing x86 percpu ops can be used to implement the this_cpu
> variants, which also work on dynamically allocated percpu data. The
> difference is that we no longer pass a reference to a statically
> declared percpu variable; instead, the operations take the percpu
> variable itself, whether statically or dynamically allocated.
>
> The preempt, non-preempt and irqsafe variants all generate the same
> code: on x86 the required per-cpu atomicity can always be achieved
> with a single RMW instruction using a segment override.
>
> 64-bit this_cpu operations are not supported on 32-bit x86.
>
> Signed-off-by: Christoph Lameter <[email protected]>

Acked-by: Ingo Molnar <[email protected]>

Ingo

2009-10-03 19:34:09

by Pekka Enberg

Subject: Re: [this_cpu_xx V4 02/20] this_cpu: X86 optimized this_cpu operations

Hi,

Ingo Molnar wrote:
> * [email protected] <[email protected]> wrote:
>
>> The existing x86 percpu ops can be used to implement the this_cpu
>> variants, which also work on dynamically allocated percpu data. The
>> difference is that we no longer pass a reference to a statically
>> declared percpu variable; instead, the operations take the percpu
>> variable itself, whether statically or dynamically allocated.
>>
>> The preempt, non-preempt and irqsafe variants all generate the same
>> code: on x86 the required per-cpu atomicity can always be achieved
>> with a single RMW instruction using a segment override.
>>
>> 64-bit this_cpu operations are not supported on 32-bit x86.
>>
>> Signed-off-by: Christoph Lameter <[email protected]>
>
> Acked-by: Ingo Molnar <[email protected]>

I haven't looked at the series in detail but AFAICT the SLUB patches
depend on the x86 ones. Any suggestions how to get all this into
linux-next? Should I make a topic branch in slab.git on top of -tip or
something?

Pekka

2009-10-04 16:48:42

by Ingo Molnar

Subject: Re: [this_cpu_xx V4 02/20] this_cpu: X86 optimized this_cpu operations


* Pekka Enberg <[email protected]> wrote:

> Hi,
>
> Ingo Molnar wrote:
>> * [email protected] <[email protected]> wrote:
>>
>>> The existing x86 percpu ops can be used to implement the this_cpu
>>> variants, which also work on dynamically allocated percpu data. The
>>> difference is that we no longer pass a reference to a statically
>>> declared percpu variable; instead, the operations take the percpu
>>> variable itself, whether statically or dynamically allocated.
>>>
>>> The preempt, non-preempt and irqsafe variants all generate the same
>>> code: on x86 the required per-cpu atomicity can always be achieved
>>> with a single RMW instruction using a segment override.
>>>
>>> 64-bit this_cpu operations are not supported on 32-bit x86.
>>>
>>> Signed-off-by: Christoph Lameter <[email protected]>
>>
>> Acked-by: Ingo Molnar <[email protected]>
>
> I haven't looked at the series in detail but AFAICT the SLUB patches
> depend on the x86 ones. Any suggestions how to get all this into
> linux-next? Should I make a topic branch in slab.git on top of -tip or
> something?

I'd suggest to keep these patches together in the right topical tree:
Tejun's percpu tree. Any problem with that approach?

Ingo

2009-10-04 16:52:28

by Pekka Enberg

Subject: Re: [this_cpu_xx V4 02/20] this_cpu: X86 optimized this_cpu operations

Hi Ingo,

Ingo Molnar wrote:
> * Pekka Enberg <[email protected]> wrote:
>
>> Hi,
>>
>> Ingo Molnar wrote:
>>> * [email protected] <[email protected]> wrote:
>>>
>>>> The existing x86 percpu ops can be used to implement the this_cpu
>>>> variants, which also work on dynamically allocated percpu data. The
>>>> difference is that we no longer pass a reference to a statically
>>>> declared percpu variable; instead, the operations take the percpu
>>>> variable itself, whether statically or dynamically allocated.
>>>>
>>>> The preempt, non-preempt and irqsafe variants all generate the same
>>>> code: on x86 the required per-cpu atomicity can always be achieved
>>>> with a single RMW instruction using a segment override.
>>>>
>>>> 64-bit this_cpu operations are not supported on 32-bit x86.
>>>>
>>>> Signed-off-by: Christoph Lameter <[email protected]>
>>> Acked-by: Ingo Molnar <[email protected]>
>> I haven't looked at the series in detail but AFAICT the SLUB patches
>> depend on the x86 ones. Any suggestions how to get all this into
>> linux-next? Should I make a topic branch in slab.git on top of -tip or
>> something?
>
> I'd suggest to keep these patches together in the right topical tree:
> Tejun's percpu tree. Any problem with that approach?

I'm fine with that. Just wanted to make sure who is taking the patches
and if I should pick any of them up. We can get some conflicts between
the per-cpu tree and slab.git if new SLUB patches get merged but that's
probably not a huge problem.

Pekka