2009-04-01 08:13:48

by Eric Dumazet

Subject: [PATCH] x86: percpu_to_op() misses memory and flags clobbers

While playing with the new percpu_{read|write|add|sub} stuff in the network tree,
I found the x86 asm was a little bit optimistic.

We need to tell gcc that percpu_{write|add|sub|or|xor} are modifying
memory and possibly eflags. We could add another parameter to percpu_to_op()
to separate the plain "mov" case (which does not change eflags),
but let's keep it simple for the moment.

Signed-off-by: Eric Dumazet <[email protected]>

diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
index aee103b..fd4f8ec 100644
--- a/arch/x86/include/asm/percpu.h
+++ b/arch/x86/include/asm/percpu.h
@@ -82,22 +82,26 @@ do { \
case 1: \
asm(op "b %1,"__percpu_arg(0) \
: "+m" (var) \
- : "ri" ((T__)val)); \
+ : "ri" ((T__)val) \
+ : "memory", "cc"); \
break; \
case 2: \
asm(op "w %1,"__percpu_arg(0) \
: "+m" (var) \
- : "ri" ((T__)val)); \
+ : "ri" ((T__)val) \
+ : "memory", "cc"); \
break; \
case 4: \
asm(op "l %1,"__percpu_arg(0) \
: "+m" (var) \
- : "ri" ((T__)val)); \
+ : "ri" ((T__)val) \
+ : "memory", "cc"); \
break; \
case 8: \
asm(op "q %1,"__percpu_arg(0) \
: "+m" (var) \
- : "re" ((T__)val)); \
+ : "re" ((T__)val) \
+ : "memory", "cc"); \
break; \
default: __bad_percpu_size(); \
} \


2009-04-01 09:03:20

by Jeremy Fitzhardinge

Subject: Re: [PATCH] x86: percpu_to_op() misses memory and flags clobbers

Eric Dumazet wrote:
> While playing with the new percpu_{read|write|add|sub} stuff in the network tree,
> I found the x86 asm was a little bit optimistic.
>
> We need to tell gcc that percpu_{write|add|sub|or|xor} are modifying
> memory and possibly eflags. We could add another parameter to percpu_to_op()
> to separate the plain "mov" case (which does not change eflags),
> but let's keep it simple for the moment.
>

Did you observe an actual failure that this patch fixed?

> Signed-off-by: Eric Dumazet <[email protected]>
>
> diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
> index aee103b..fd4f8ec 100644
> --- a/arch/x86/include/asm/percpu.h
> +++ b/arch/x86/include/asm/percpu.h
> @@ -82,22 +82,26 @@ do { \
> case 1: \
> asm(op "b %1,"__percpu_arg(0) \
> : "+m" (var) \
> - : "ri" ((T__)val)); \
> + : "ri" ((T__)val) \
> + : "memory", "cc"); \
>

This shouldn't be necessary. The "+m" already tells gcc that var is a
memory input and output, and there are no other memory side-effects
which it needs to be aware of; clobbering "memory" will force gcc to
reload all register-cached memory, which is a pretty hard hit. I think
all asms implicitly clobber "cc", so that shouldn't have any effect, but
it does no harm.
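
To make the tradeoff concrete, here is a minimal sketch (not from the original
mail; example_var and the helper names are illustrative): the first variant
relies on "+m" alone, the second adds the "memory" and "cc" clobbers that the
patch proposes.

static unsigned int example_var;

static inline void example_add(unsigned int val)
{
	/* "+m" names the location as both input and output: gcc orders
	 * accesses to example_var around the asm, but may keep other
	 * memory values cached in registers across it. */
	asm("addl %1, %0" : "+m" (example_var) : "ri" (val));
}

static inline void example_add_clobbers(unsigned int val)
{
	/* the extra "memory" clobber forces gcc to throw away every
	 * register-cached memory value after the asm (the "hard hit"
	 * described above); "cc" marks the flags as clobbered. */
	asm("addl %1, %0"
	    : "+m" (example_var)
	    : "ri" (val)
	    : "memory", "cc");
}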

Now, it's true that the asm isn't actually modifying var itself, but
%gs:var, which is a different location. But from gcc's perspective that
shouldn't matter, because var makes a perfectly good proxy for that
location, and it will make sure all accesses to var are correctly ordered.

I'd be surprised if this were broken, because we'd be seeing all sorts
of strange crashes all over the place. We've seen it before when the
old x86-64 pda code didn't have proper constraints on its asm statements.

J

2009-04-01 10:17:49

by Eric Dumazet

Subject: Re: [PATCH] x86: percpu_to_op() misses memory and flags clobbers

Jeremy Fitzhardinge wrote:
> Eric Dumazet wrote:
>> While playing with the new percpu_{read|write|add|sub} stuff in the network tree,
>> I found the x86 asm was a little bit optimistic.
>>
>> We need to tell gcc that percpu_{write|add|sub|or|xor} are modifying
>> memory and possibly eflags. We could add another parameter to
>> percpu_to_op()
>> to separate the plain "mov" case (which does not change eflags),
>> but let's keep it simple for the moment.
>>
>
> Did you observe an actual failure that this patch fixed?
>

Not in the current tree, as we don't use percpu_xxxx() very much yet.

If deployed for SNMP mibs with hundreds of call sites,
can you guarantee it will work as is?

>> Signed-off-by: Eric Dumazet <[email protected]>
>>
>> diff --git a/arch/x86/include/asm/percpu.h
>> b/arch/x86/include/asm/percpu.h
>> index aee103b..fd4f8ec 100644
>> --- a/arch/x86/include/asm/percpu.h
>> +++ b/arch/x86/include/asm/percpu.h
>> @@ -82,22 +82,26 @@ do { \
>> case 1: \
>> asm(op "b %1,"__percpu_arg(0) \
>> : "+m" (var) \
>> - : "ri" ((T__)val)); \
>> + : "ri" ((T__)val) \
>> + : "memory", "cc"); \
>>
>
> This shouldn't be necessary. The "+m" already tells gcc that var is a
> memory input and output, and there are no other memory side-effects
> which it needs to be aware of; clobbering "memory" will force gcc to
> reload all register-cached memory, which is a pretty hard hit. I think
> all asms implicitly clobber "cc", so that shouldn't have any effect, but
> it does no harm.


So, we can probably clean up many asms in the tree :)

static inline void __down_read(struct rw_semaphore *sem)
{
asm volatile("# beginning down_read\n\t"
LOCK_PREFIX " incl (%%eax)\n\t"
/* adds 0x00000001, returns the old value */
" jns 1f\n"
" call call_rwsem_down_read_failed\n"
"1:\n\t"
"# ending down_read\n\t"
: "+m" (sem->count)
: "a" (sem)
: "memory", "cc");
}




>
> Now, it's true that the asm isn't actually modifying var itself, but
> %gs:var, which is a different location. But from gcc's perspective that
> shouldn't matter, because var makes a perfectly good proxy for that
> location, and it will make sure all accesses to var are correctly ordered.
>
> I'd be surprised if this were broken, because we'd be seeing all sorts
> of strange crashes all over the place. We've seen it before when the
> old x86-64 pda code didn't have proper constraints on its asm statements.

I was not saying it is broken, but a "little bit optimistic" :)

Better safe than sorry, because those errors are very hard to track, since
they depend a lot on how aggressive gcc is. I don't have time to test
all the gcc versions out there.
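
As an aside, a sketch of the failure mode such constraints guard against (this
is not the percpu code; example_counter and the functions are made up, and the
asm is a 64-bit, deliberately broken example):

unsigned int example_counter;

static inline void broken_inc(void)
{
	/* BROKEN on purpose: the asm modifies example_counter, but the
	 * constraints do not say so, so gcc is free to keep a stale copy
	 * of example_counter cached in a register across this statement. */
	asm("incl example_counter(%%rip)" : : : "cc");
}

unsigned int read_inc_read(void)
{
	unsigned int a = example_counter;

	broken_inc();
	return a + example_counter;	/* gcc may legally reuse 'a' here */
}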

2009-04-01 16:12:53

by Ingo Molnar

Subject: Re: [PATCH] x86: percpu_to_op() misses memory and flags clobbers


* Eric Dumazet <[email protected]> wrote:

> Jeremy Fitzhardinge wrote:
> > Eric Dumazet wrote:
> >> While playing with the new percpu_{read|write|add|sub} stuff in the network tree,
> >> I found the x86 asm was a little bit optimistic.
> >>
> >> We need to tell gcc that percpu_{write|add|sub|or|xor} are modifying
> >> memory and possibly eflags. We could add another parameter to
> >> percpu_to_op()
> >> to separate the plain "mov" case (which does not change eflags),
> >> but let's keep it simple for the moment.
> >>
> >
> > Did you observe an actual failure that this patch fixed?
> >
>
> Not in the current tree, as we don't use percpu_xxxx() very much yet.
>
> If deployed for SNMP mibs with hundreds of call sites,
> can you guarantee it will work as is?

Do we "guarantee" it for you? No.

Is it expected to work just fine? Yes.

Are there any known bugs in this area? No.

Will we fix it if it's demonstrated to be broken? Of course! :-)

[ Btw., it's definitely cool that you will make heavy use of it for
SNMP mib statistics - please share with us your experiences with
the facilities - good or bad experiences alike! ]

> >> Signed-off-by: Eric Dumazet <[email protected]>
> >>
> >> diff --git a/arch/x86/include/asm/percpu.h
> >> b/arch/x86/include/asm/percpu.h
> >> index aee103b..fd4f8ec 100644
> >> --- a/arch/x86/include/asm/percpu.h
> >> +++ b/arch/x86/include/asm/percpu.h
> >> @@ -82,22 +82,26 @@ do { \
> >> case 1: \
> >> asm(op "b %1,"__percpu_arg(0) \
> >> : "+m" (var) \
> >> - : "ri" ((T__)val)); \
> >> + : "ri" ((T__)val) \
> >> + : "memory", "cc"); \
> >>
> >
> > This shouldn't be necessary. The "+m" already tells gcc that var is a
> > memory input and output, and there are no other memory side-effects
> > which it needs to be aware of; clobbering "memory" will force gcc to
> > reload all register-cached memory, which is a pretty hard hit. I think
> > all asms implicitly clobber "cc", so that shouldn't have any effect, but
> > it does no harm.
>
>
> So, we can probably clean up many asms in the tree :)
>
> static inline void __down_read(struct rw_semaphore *sem)
> {
> asm volatile("# beginning down_read\n\t"
> LOCK_PREFIX " incl (%%eax)\n\t"
> /* adds 0x00000001, returns the old value */
> " jns 1f\n"
> " call call_rwsem_down_read_failed\n"
> "1:\n\t"
> "# ending down_read\n\t"
> : "+m" (sem->count)
> : "a" (sem)
> : "memory", "cc");
> }

Hm, what's your point with pasting this inline function?

> > Now, it's true that the asm isn't actually modifying var itself, but
> > %gs:var, which is a different location. But from gcc's perspective that
> > shouldn't matter, because var makes a perfectly good proxy for that
> > location, and it will make sure all accesses to var are correctly ordered.
> >
> > I'd be surprised if this were broken, because we'd be seeing all sorts
> > of strange crashes all over the place. We've seen it before when the
> > old x86-64 pda code didn't have proper constraints on its asm statements.
>
> I was not saying it is broken, but a "little bit optimistic" :)
>
> Better safe than sorry, because those errors are very hard to
> track, since they depend a lot on how aggressive gcc is. I
> don't have time to test all the gcc versions out there.

Well, Jeremy has already made the valid point that your patch
pessimises the constraints and hence likely causes worse code.

We can only apply assembly constraint patches that:

either fix a demonstrated bug,

or improve (speed up) the code emitted,

or, very rarely, we will apply patches that don't actually make the
code worse (they are an invariant) but are perceived to be safer.

This patch meets none of these tests, and in fact it will
probably make the generated code worse.

Ingo

2009-04-01 16:41:58

by Jeremy Fitzhardinge

Subject: Re: [PATCH] x86: percpu_to_op() misses memory and flags clobbers

Ingo Molnar wrote:
>> : "memory", "cc");
>> }
>>
>
> Hm, what's your point with pasting this inline function?
>

He's pointing out the redundant (but harmless) "cc" clobber.

J

2009-04-01 16:45:32

by Ingo Molnar

Subject: Re: [PATCH] x86: percpu_to_op() misses memory and flags clobbers


* Jeremy Fitzhardinge <[email protected]> wrote:

> Ingo Molnar wrote:
>>> : "memory", "cc");
>>> }
>>>
>>
>> Hm, what's your point with pasting this inline function?
>>
>
> He's pointing out the redundant (but harmless) "cc" clobber.

ah, yes. We are completely inconsistent about that. It doesn't
matter on x86, so I guess it could be removed everywhere.

Ingo

2009-04-01 17:14:17

by Eric Dumazet

Subject: Re: [PATCH] x86: percpu_to_op() misses memory and flags clobbers

Ingo Molnar wrote:
> * Eric Dumazet <[email protected]> wrote:
>
>> Jeremy Fitzhardinge wrote:
>>> Eric Dumazet wrote:
>>>> While playing with the new percpu_{read|write|add|sub} stuff in the network tree,
>>>> I found the x86 asm was a little bit optimistic.
>>>>
>>>> We need to tell gcc that percpu_{write|add|sub|or|xor} are modifying
>>>> memory and possibly eflags. We could add another parameter to
>>>> percpu_to_op()
>>>> to separate the plain "mov" case (which does not change eflags),
>>>> but let's keep it simple for the moment.
>>>>
>>> Did you observe an actual failure that this patch fixed?
>>>
>> Not in the current tree, as we don't use percpu_xxxx() very much yet.
>>
>> If deployed for SNMP mibs with hundreds of call sites,
>> can you guarantee it will work as is?
>
> Do we "guarantee" it for you? No.
>
> Is it expected to work just fine? Yes.
>
> Are there any known bugs in this area? No.

Good to know. So I shut up. I am a jerk and should blindly trust
the Linux kernel, sorry.

>
> Will we fix it if it's demonstrated to be broken? Of course! :-)
>
> [ Btw., it's definitely cool that you will make heavy use of it for
> SNMP mib statistics - please share with us your experiences with
> the facilities - good or bad experiences alike! ]

I tried, but I'm missing some kind of indirect percpu_add() function.

Because of net namespaces, mibs are dynamically allocated, and the
current percpu_add() works on static percpu variables only (because of the
added per_cpu__ prefix):

#define percpu_add(var, val) percpu_to_op("add", per_cpu__##var, val)
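
To illustrate the expansion problem (the names below are just examples, not
code from the tree):

	/* static percpu variable (e.g. DEFINE_PER_CPU(int, foo)): works */
	percpu_add(foo, 1);
	/* -> percpu_to_op("add", per_cpu__foo, 1) */

	/* dynamically allocated mib: does not work */
	percpu_add(mib->mibs[field], 1);
	/* -> percpu_to_op("add", per_cpu__mib->mibs[field], 1)
	 *    -- there is no per_cpu__mib symbol */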

I tried adding :

#define dyn_percpu_add(var, val) percpu_to_op("add", var, val)

But I don't know if this is the plan. Should we get rid of the "per_cpu__"
prefix and use a special ELF section/marker instead?

I have a patch to add percpu_inc() and percpu_dec(); I am not
sure it's worth it...

[PATCH] percpu: Add percpu_inc() and percpu_dec()

Increments and decrements are quite common operations for SNMP mibs.

Signed-off-by: Eric Dumazet <[email protected]>

diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
index aee103b..248be11 100644
--- a/arch/x86/include/asm/percpu.h
+++ b/arch/x86/include/asm/percpu.h
@@ -103,6 +103,29 @@ do { \
} \
} while (0)

+#define percpu_to_op0(op, var) \
+do { \
+ switch (sizeof(var)) { \
+ case 1: \
+ asm(op "b "__percpu_arg(0) \
+ : "+m" (var)); \
+ break; \
+ case 2: \
+ asm(op "w "__percpu_arg(0) \
+ : "+m" (var)); \
+ break; \
+ case 4: \
+ asm(op "l "__percpu_arg(0) \
+ : "+m" (var)); \
+ break; \
+ case 8: \
+ asm(op "q "__percpu_arg(0) \
+ : "+m" (var)); \
+ break; \
+ default: __bad_percpu_size(); \
+ } \
+} while (0)
+
#define percpu_from_op(op, var) \
({ \
typeof(var) ret__; \
@@ -139,6 +162,8 @@ do { \
#define percpu_and(var, val) percpu_to_op("and", per_cpu__##var, val)
#define percpu_or(var, val) percpu_to_op("or", per_cpu__##var, val)
#define percpu_xor(var, val) percpu_to_op("xor", per_cpu__##var, val)
+#define percpu_inc(var) percpu_to_op0("inc", per_cpu__##var)
+#define percpu_dec(var) percpu_to_op0("dec", per_cpu__##var)

/* This is not atomic against other CPUs -- CPU preemption needs to be off */
#define x86_test_and_clear_bit_percpu(bit, var) \
diff --git a/include/asm-generic/percpu.h b/include/asm-generic/percpu.h
index 00f45ff..c57357e 100644
--- a/include/asm-generic/percpu.h
+++ b/include/asm-generic/percpu.h
@@ -120,6 +120,14 @@ do { \
# define percpu_sub(var, val) __percpu_generic_to_op(var, (val), -=)
#endif

+#ifndef percpu_inc
+# define percpu_inc(var) do { percpu_add(var, 1); } while (0)
+#endif
+
+#ifndef percpu_dec
+# define percpu_dec(var) do { percpu_sub(var, 1); } while (0)
+#endif
+
#ifndef percpu_and
# define percpu_and(var, val) __percpu_generic_to_op(var, (val), &=)
#endif

2009-04-01 18:08:08

by Jeremy Fitzhardinge

Subject: Re: [PATCH] x86: percpu_to_op() misses memory and flags clobbers

Eric Dumazet wrote:
> +#define percpu_inc(var) percpu_to_op0("inc", per_cpu__##var)
> +#define percpu_dec(var) percpu_to_op0("dec", per_cpu__##var)
>

There's probably not a lot of value in this. The Intel and AMD
optimisation guides tend to deprecate inc/dec in favour of using
add/sub, because the former can cause pipeline stalls due to its partial
flags update.

J

2009-04-01 18:45:35

by Eric Dumazet

Subject: [RFC] percpu: convert SNMP mibs to new infra

Eric Dumazet wrote:
> Ingo Molnar wrote:
>>
>> [ Btw., it's definitely cool that you will make heavy use of it for
>> SNMP mib statistics - please share with us your experiences with
>> the facilities - good or bad experiences alike! ]
>
> I tried, but I'm missing some kind of indirect percpu_add() function.
>
> Because of net namespaces, mibs are dynamically allocated, and the
> current percpu_add() works on static percpu variables only (because of the
> added per_cpu__ prefix):
>
> #define percpu_add(var, val) percpu_to_op("add", per_cpu__##var, val)
>
> I tried adding:
>
> #define dyn_percpu_add(var, val) percpu_to_op("add", var, val)
>
> But I don't know if this is the plan. Should we get rid of the "per_cpu__"
> prefix and use a special ELF section/marker instead?
>

Here is a preliminary patch for SNMP mibs that seems to work well on x86_32

[RFC] percpu: convert SNMP mibs to new infra

Some arches can use the percpu infrastructure for safe changes to mibs
(percpu_add() is safe against preemption and interrupts), but
we want the real thing (a single instruction), not an emulation.

On arches still using an emulation, it's better to keep the two views
per mib and the preemption disable/enable.

This shrinks the size of the mibs by 50%, and also shrinks the vmlinux text size
(minimal IPv4 config):

$ size vmlinux.old vmlinux.new
text data bss dec hex filename
4308458 561092 1728512 6598062 64adae vmlinux.old
4303834 561092 1728512 6593438 649b9e vmlinux.new



Signed-off-by: Eric Dumazet <[email protected]>
---
arch/x86/include/asm/percpu.h | 3 +++
include/net/snmp.h | 27 ++++++++++++++++++++++-----
net/ipv4/af_inet.c | 28 +++++++++++++++++++---------
3 files changed, 44 insertions(+), 14 deletions(-)


diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
index aee103b..6b82f6b 100644
--- a/arch/x86/include/asm/percpu.h
+++ b/arch/x86/include/asm/percpu.h
@@ -135,6 +135,9 @@ do { \
#define percpu_read(var) percpu_from_op("mov", per_cpu__##var)
#define percpu_write(var, val) percpu_to_op("mov", per_cpu__##var, val)
#define percpu_add(var, val) percpu_to_op("add", per_cpu__##var, val)
+#define indir_percpu_add(var, val) percpu_to_op("add", *(var), val)
+#define indir_percpu_inc(var) percpu_to_op("add", *(var), 1)
+#define indir_percpu_dec(var) percpu_to_op("add", *(var), -1)
#define percpu_sub(var, val) percpu_to_op("sub", per_cpu__##var, val)
#define percpu_and(var, val) percpu_to_op("and", per_cpu__##var, val)
#define percpu_or(var, val) percpu_to_op("or", per_cpu__##var, val)
diff --git a/include/net/snmp.h b/include/net/snmp.h
index 57c9362..ef9ed31 100644
--- a/include/net/snmp.h
+++ b/include/net/snmp.h
@@ -123,15 +123,31 @@ struct linux_xfrm_mib {
};

/*
- * FIXME: On x86 and some other CPUs the split into user and softirq parts
+ * On x86 and some other CPUs the split into user and softirq parts
* is not needed because addl $1,memory is atomic against interrupts (but
- * atomic_inc would be overkill because of the lock cycles). Wants new
- * nonlocked_atomic_inc() primitives -AK
+ * atomic_inc would be overkill because of the lock cycles).
*/
+#ifdef CONFIG_X86
+# define SNMP_ARRAY_SZ 1
+#else
+# define SNMP_ARRAY_SZ 2
+#endif
+
#define DEFINE_SNMP_STAT(type, name) \
- __typeof__(type) *name[2]
+ __typeof__(type) *name[SNMP_ARRAY_SZ]
#define DECLARE_SNMP_STAT(type, name) \
- extern __typeof__(type) *name[2]
+ extern __typeof__(type) *name[SNMP_ARRAY_SZ]
+
+#if SNMP_ARRAY_SZ == 1
+#define SNMP_INC_STATS(mib, field) indir_percpu_inc(&mib[0]->mibs[field])
+#define SNMP_INC_STATS_BH(mib, field) SNMP_INC_STATS(mib, field)
+#define SNMP_INC_STATS_USER(mib, field) SNMP_INC_STATS(mib, field)
+#define SNMP_DEC_STATS(mib, field) indir_percpu_dec(&mib[0]->mibs[field])
+#define SNMP_ADD_STATS_BH(mib, field, addend) \
+ indir_percpu_add(&mib[0]->mibs[field], addend)
+#define SNMP_ADD_STATS_USER(mib, field, addend) \
+ indir_percpu_add(&mib[0]->mibs[field], addend)
+#else

#define SNMP_STAT_BHPTR(name) (name[0])
#define SNMP_STAT_USRPTR(name) (name[1])
@@ -160,5 +176,6 @@ struct linux_xfrm_mib {
per_cpu_ptr(mib[1], get_cpu())->mibs[field] += addend; \
put_cpu(); \
} while (0)
+#endif

#endif
diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
index 7f03373..badb568 100644
--- a/net/ipv4/af_inet.c
+++ b/net/ipv4/af_inet.c
@@ -1366,27 +1366,37 @@ unsigned long snmp_fold_field(void *mib[], int offt)

for_each_possible_cpu(i) {
res += *(((unsigned long *) per_cpu_ptr(mib[0], i)) + offt);
+#if SNMP_ARRAY_SZ == 2
res += *(((unsigned long *) per_cpu_ptr(mib[1], i)) + offt);
+#endif
}
return res;
}
EXPORT_SYMBOL_GPL(snmp_fold_field);

-int snmp_mib_init(void *ptr[2], size_t mibsize)
+int snmp_mib_init(void *ptr[SNMP_ARRAY_SZ], size_t mibsize)
{
BUG_ON(ptr == NULL);
ptr[0] = __alloc_percpu(mibsize, __alignof__(unsigned long long));
if (!ptr[0])
- goto err0;
+ return -ENOMEM;
+#if SNMP_ARRAY_SZ == 2
ptr[1] = __alloc_percpu(mibsize, __alignof__(unsigned long long));
- if (!ptr[1])
- goto err1;
+ if (!ptr[1]) {
+ free_percpu(ptr[0]);
+ ptr[0] = NULL;
+ return -ENOMEM;
+ }
+#endif
+ {
+ int i;
+ printk(KERN_INFO "snmp_mib_init(%u) %p ", (unsigned int)mibsize, ptr[0]);
+ for_each_possible_cpu(i) {
+ printk(KERN_INFO "%p ", per_cpu_ptr(ptr[0], i));
+ }
+ printk(KERN_INFO "\n");
+ }
return 0;
-err1:
- free_percpu(ptr[0]);
- ptr[0] = NULL;
-err0:
- return -ENOMEM;
}
EXPORT_SYMBOL_GPL(snmp_mib_init);

2009-04-01 18:48:41

by Eric Dumazet

[permalink] [raw]
Subject: Re: [PATCH] x86: percpu_to_op() misses memory and flags clobbers

Jeremy Fitzhardinge wrote:
> Eric Dumazet wrote:
>> +#define percpu_inc(var) percpu_to_op0("inc", per_cpu__##var)
>> +#define percpu_dec(var) percpu_to_op0("dec", per_cpu__##var)
>>
>
> There's probably not a lot of value in this. The Intel and AMD
> optimisation guides tend to deprecate inc/dec in favour of using
> add/sub, because the former can cause pipeline stalls due to its partial
> flags update.
>
> J

Sure, but this saves one byte per call; that is probably why we still use
inc/dec in so many places...
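
For reference, the byte comes from the instruction encoding; a rough sketch of
the two memory-operand forms being compared ("counter" is a placeholder, and the
segment-override and addressing bytes are identical for both, so omitted):

	incl	counter		# FF /0     : opcode + ModRM
	addl	$1, counter	# 83 /0 ib  : opcode + ModRM + 1 immediate byte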

2009-04-02 00:13:48

by Tejun Heo

Subject: Re: [RFC] percpu: convert SNMP mibs to new infra

Hello, Eric, Ingo.

Eric Dumazet wrote:
> diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
> index aee103b..6b82f6b 100644
> --- a/arch/x86/include/asm/percpu.h
> +++ b/arch/x86/include/asm/percpu.h
> @@ -135,6 +135,9 @@ do { \
> #define percpu_read(var) percpu_from_op("mov", per_cpu__##var)
> #define percpu_write(var, val) percpu_to_op("mov", per_cpu__##var, val)
> #define percpu_add(var, val) percpu_to_op("add", per_cpu__##var, val)
> +#define indir_percpu_add(var, val) percpu_to_op("add", *(var), val)
> +#define indir_percpu_inc(var) percpu_to_op("add", *(var), 1)
> +#define indir_percpu_dec(var) percpu_to_op("add", *(var), -1)
> #define percpu_sub(var, val) percpu_to_op("sub", per_cpu__##var, val)
> #define percpu_and(var, val) percpu_to_op("and", per_cpu__##var, val)
> #define percpu_or(var, val) percpu_to_op("or", per_cpu__##var, val)

The final goal is to unify static and dynamic accesses but we aren't
there yet, so, for the time being, we'll need some interim solutions.
I would prefer percpu_ptr_add() tho.

Thanks.

--
tejun

2009-04-02 04:06:19

by Ingo Molnar

Subject: Re: [RFC] percpu: convert SNMP mibs to new infra


* Tejun Heo <[email protected]> wrote:

> Hello, Eric, Ingo.
>
> Eric Dumazet wrote:
> > diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
> > index aee103b..6b82f6b 100644
> > --- a/arch/x86/include/asm/percpu.h
> > +++ b/arch/x86/include/asm/percpu.h
> > @@ -135,6 +135,9 @@ do { \
> > #define percpu_read(var) percpu_from_op("mov", per_cpu__##var)
> > #define percpu_write(var, val) percpu_to_op("mov", per_cpu__##var, val)
> > #define percpu_add(var, val) percpu_to_op("add", per_cpu__##var, val)
> > +#define indir_percpu_add(var, val) percpu_to_op("add", *(var), val)
> > +#define indir_percpu_inc(var) percpu_to_op("add", *(var), 1)
> > +#define indir_percpu_dec(var) percpu_to_op("add", *(var), -1)
> > #define percpu_sub(var, val) percpu_to_op("sub", per_cpu__##var, val)
> > #define percpu_and(var, val) percpu_to_op("and", per_cpu__##var, val)
> > #define percpu_or(var, val) percpu_to_op("or", per_cpu__##var, val)
>
> The final goal is to unify static and dynamic accesses but we
> aren't there yet, so, for the time being, we'll need some interim
> solutions. I would prefer percpu_ptr_add() tho.

Yep, that's the standard naming scheme for new APIs: generic to
specific, left to right.

Ingo

2009-04-02 05:05:00

by Rusty Russell

Subject: Re: [RFC] percpu: convert SNMP mibs to new infra

On Thursday 02 April 2009 05:14:47 Eric Dumazet wrote:
> Here is a preliminary patch for SNMP mibs that seems to work well on x86_32
>
> [RFC] percpu: convert SNMP mibs to new infra

OK, I have a whole heap of "convert to dynamic per-cpu" patches waiting in
the wings too, once Tejun's conversion is complete.

Also, what is optimal depends on the arch: we had a long discussion on this
(it's what local_t was supposed to do, with cpu_local_inc() etc: see
Subject: local_add_return 2008-12-16 thread).

eg. on S/390, atomic_inc is a win over the two-counter version. On Sparc,
two-counter wins. On x86, inc wins (obviously).

But efforts to create a single primitive have been problematic: maybe
open-coding it like this is the Right Thing.

Cheers,
Rusty.

2009-04-02 05:20:05

by Eric Dumazet

Subject: Re: [RFC] percpu: convert SNMP mibs to new infra

Rusty Russell wrote:
> On Thursday 02 April 2009 05:14:47 Eric Dumazet wrote:
>> Here is a preliminary patch for SNMP mibs that seems to work well on x86_32
>>
>> [RFC] percpu: convert SNMP mibs to new infra
>
> OK, I have a whole heap of "convert to dynamic per-cpu" patches waiting in
> the wings too, once Tejun's conversion is complete.
>
> Also, what is optimal depends on the arch: we had a long discussion on this
> (it's what local_t was supposed to do, with cpu_local_inc() etc: see
> Subject: local_add_return 2008-12-16 thread).
>
> eg. on S/390, atomic_inc is a win over the two-counter version. On Sparc,
> two-counter wins. On x86, inc wins (obviously).
>
> But efforts to create a single primitive have been problematic: maybe
> open-coding it like this is the Right Thing.
>

I tried to find a generic CONFIG_ define that would announce that an arch
has a fast percpu_add() implementation (faster than __raw_get_cpu_var(),
for example, when we are already in a preempt-disabled section).

Any idea?


For example, net/ipv4/route.c has :

static DEFINE_PER_CPU(struct rt_cache_stat, rt_cache_stat);
#define RT_CACHE_STAT_INC(field) \
(__raw_get_cpu_var(rt_cache_stat).field++)

We could use percpu_add(rt_cache_stat.field, 1) instead, only if percpu_add()
is not the generic one.

#define __percpu_generic_to_op(var, val, op) \
do { \
get_cpu_var(var) op val; \
put_cpu_var(var); \
} while (0)
#ifndef percpu_add
# define percpu_add(var, val) __percpu_generic_to_op(var, (val), +=)
#endif
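
A sketch of the kind of guard this would allow (CONFIG_ARCH_HAS_FAST_PERCPU_OP
is a hypothetical symbol, only meant to show the shape of the idea):

#ifdef CONFIG_ARCH_HAS_FAST_PERCPU_OP
/* arch percpu_add() is a single instruction: safe and cheap even here */
# define RT_CACHE_STAT_INC(field) percpu_add(rt_cache_stat.field, 1)
#else
/* generic emulation would be slower, keep the raw increment */
# define RT_CACHE_STAT_INC(field) \
	(__raw_get_cpu_var(rt_cache_stat).field++)
#endif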

2009-04-02 08:08:45

by Eric Dumazet

Subject: [PATCH] percpu: convert SNMP mibs to new infra

Ingo Molnar wrote:
> * Tejun Heo <[email protected]> wrote:
>
>> Hello, Eric, Ingo.
>>
>> Eric Dumazet wrote:
>>> diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
>>> index aee103b..6b82f6b 100644
>>> --- a/arch/x86/include/asm/percpu.h
>>> +++ b/arch/x86/include/asm/percpu.h
>>> @@ -135,6 +135,9 @@ do { \
>>> #define percpu_read(var) percpu_from_op("mov", per_cpu__##var)
>>> #define percpu_write(var, val) percpu_to_op("mov", per_cpu__##var, val)
>>> #define percpu_add(var, val) percpu_to_op("add", per_cpu__##var, val)
>>> +#define indir_percpu_add(var, val) percpu_to_op("add", *(var), val)
>>> +#define indir_percpu_inc(var) percpu_to_op("add", *(var), 1)
>>> +#define indir_percpu_dec(var) percpu_to_op("add", *(var), -1)
>>> #define percpu_sub(var, val) percpu_to_op("sub", per_cpu__##var, val)
>>> #define percpu_and(var, val) percpu_to_op("and", per_cpu__##var, val)
>>> #define percpu_or(var, val) percpu_to_op("or", per_cpu__##var, val)
>> The final goal is to unify static and dynamic accesses but we
>> aren't there yet, so, for the time being, we'll need some interim
>> solutions. I would prefer percpu_ptr_add() tho.
>
> Yep, that's the standard naming scheme for new APIs: generic to
> specific, left to right.
>

Here is a second version of the patch, with the percpu_ptr_xxx convention,
in a more polished form (snmp_mib_free() was forgotten in the previous RFC).

Thank you all.

[PATCH] percpu: convert SNMP mibs to new infra

Some arches can use the percpu infrastructure for safe changes to mibs
(percpu_add() is safe against preemption and interrupts), but
we want the real thing (a single instruction), not an emulation.

On arches still using an emulation, it's better to keep the two views
per mib and the preemption disable/enable.

This shrinks the size of the mibs by 50%, and also shrinks the vmlinux text size
(minimal IPv4 config):

$ size vmlinux.old vmlinux.new
text data bss dec hex filename
4308458 561092 1728512 6598062 64adae vmlinux.old
4303834 561092 1728512 6593438 649b9e vmlinux.new



Signed-off-by: Eric Dumazet <[email protected]>
---
arch/x86/include/asm/percpu.h | 3 +++
include/net/snmp.h | 27 ++++++++++++++++++++++-----
net/ipv4/af_inet.c | 31 ++++++++++++++++++-------------
3 files changed, 43 insertions(+), 18 deletions(-)

diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
index aee103b..f8081e4 100644
--- a/arch/x86/include/asm/percpu.h
+++ b/arch/x86/include/asm/percpu.h
@@ -135,6 +135,9 @@ do { \
#define percpu_read(var) percpu_from_op("mov", per_cpu__##var)
#define percpu_write(var, val) percpu_to_op("mov", per_cpu__##var, val)
#define percpu_add(var, val) percpu_to_op("add", per_cpu__##var, val)
+#define percpu_ptr_add(var, val) percpu_to_op("add", *(var), val)
+#define percpu_ptr_inc(var) percpu_ptr_add(var, 1)
+#define percpu_ptr_dec(var) percpu_ptr_add(var, -1)
#define percpu_sub(var, val) percpu_to_op("sub", per_cpu__##var, val)
#define percpu_and(var, val) percpu_to_op("and", per_cpu__##var, val)
#define percpu_or(var, val) percpu_to_op("or", per_cpu__##var, val)
diff --git a/include/net/snmp.h b/include/net/snmp.h
index 57c9362..1ba584b 100644
--- a/include/net/snmp.h
+++ b/include/net/snmp.h
@@ -123,15 +123,31 @@ struct linux_xfrm_mib {
};

/*
- * FIXME: On x86 and some other CPUs the split into user and softirq parts
+ * On x86 and some other CPUs the split into user and softirq parts
* is not needed because addl $1,memory is atomic against interrupts (but
- * atomic_inc would be overkill because of the lock cycles). Wants new
- * nonlocked_atomic_inc() primitives -AK
+ * atomic_inc would be overkill because of the lock cycles).
*/
+#ifdef CONFIG_X86
+# define SNMP_ARRAY_SZ 1
+#else
+# define SNMP_ARRAY_SZ 2
+#endif
+
#define DEFINE_SNMP_STAT(type, name) \
- __typeof__(type) *name[2]
+ __typeof__(type) *name[SNMP_ARRAY_SZ]
#define DECLARE_SNMP_STAT(type, name) \
- extern __typeof__(type) *name[2]
+ extern __typeof__(type) *name[SNMP_ARRAY_SZ]
+
+#if SNMP_ARRAY_SZ == 1
+#define SNMP_INC_STATS(mib, field) percpu_ptr_inc(&mib[0]->mibs[field])
+#define SNMP_INC_STATS_BH(mib, field) SNMP_INC_STATS(mib, field)
+#define SNMP_INC_STATS_USER(mib, field) SNMP_INC_STATS(mib, field)
+#define SNMP_DEC_STATS(mib, field) percpu_ptr_dec(&mib[0]->mibs[field])
+#define SNMP_ADD_STATS_BH(mib, field, addend) \
+ percpu_ptr_add(&mib[0]->mibs[field], addend)
+#define SNMP_ADD_STATS_USER(mib, field, addend) \
+ percpu_ptr_add(&mib[0]->mibs[field], addend)
+#else

#define SNMP_STAT_BHPTR(name) (name[0])
#define SNMP_STAT_USRPTR(name) (name[1])
@@ -160,5 +176,6 @@ struct linux_xfrm_mib {
per_cpu_ptr(mib[1], get_cpu())->mibs[field] += addend; \
put_cpu(); \
} while (0)
+#endif

#endif
diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
index 7f03373..4df3a76 100644
--- a/net/ipv4/af_inet.c
+++ b/net/ipv4/af_inet.c
@@ -1366,36 +1366,41 @@ unsigned long snmp_fold_field(void *mib[], int offt)

for_each_possible_cpu(i) {
res += *(((unsigned long *) per_cpu_ptr(mib[0], i)) + offt);
+#if SNMP_ARRAY_SZ == 2
res += *(((unsigned long *) per_cpu_ptr(mib[1], i)) + offt);
+#endif
}
return res;
}
EXPORT_SYMBOL_GPL(snmp_fold_field);

-int snmp_mib_init(void *ptr[2], size_t mibsize)
+int snmp_mib_init(void *ptr[SNMP_ARRAY_SZ], size_t mibsize)
{
BUG_ON(ptr == NULL);
ptr[0] = __alloc_percpu(mibsize, __alignof__(unsigned long long));
if (!ptr[0])
- goto err0;
+ return -ENOMEM;
+#if SNMP_ARRAY_SZ == 2
ptr[1] = __alloc_percpu(mibsize, __alignof__(unsigned long long));
- if (!ptr[1])
- goto err1;
+ if (!ptr[1]) {
+ free_percpu(ptr[0]);
+ ptr[0] = NULL;
+ return -ENOMEM;
+ }
+#endif
return 0;
-err1:
- free_percpu(ptr[0]);
- ptr[0] = NULL;
-err0:
- return -ENOMEM;
}
EXPORT_SYMBOL_GPL(snmp_mib_init);

-void snmp_mib_free(void *ptr[2])
+void snmp_mib_free(void *ptr[SNMP_ARRAY_SZ])
{
+ int i;
+
BUG_ON(ptr == NULL);
- free_percpu(ptr[0]);
- free_percpu(ptr[1]);
- ptr[0] = ptr[1] = NULL;
+ for (i = 0 ; i < SNMP_ARRAY_SZ; i++) {
+ free_percpu(ptr[i]);
+ ptr[i] = NULL;
+ }
}
EXPORT_SYMBOL_GPL(snmp_mib_free);

2009-04-02 09:53:23

by Herbert Xu

Subject: Re: [PATCH] x86: percpu_to_op() misses memory and flags clobbers

Jeremy Fitzhardinge <[email protected]> wrote:
>
> There's probably not a lot of value in this. The Intel and AMD
> optimisation guides tend to deprecate inc/dec in favour of using
> add/sub, because the former can cause pipeline stalls due to its partial
> flags update.

Is this still the case on the latest Intel CPUs?

Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2009-04-02 11:46:49

by Rusty Russell

Subject: Re: [RFC] percpu: convert SNMP mibs to new infra

On Thursday 02 April 2009 15:49:19 Eric Dumazet wrote:
> Rusty Russell wrote:
> > eg. on S/390, atomic_inc is a win over the two-counter version. On Sparc,
> > two-counter wins. On x86, inc wins (obviously).
> >
> > But efforts to create a single primitive have been problematic: maybe
> > open-coding it like this is the Right Thing.
>
> I tried to find a generic CONFIG_ define that would announce that an arch
> has a fast percpu_add() implementation (faster than __raw_get_cpu_var(),
> for example, when we are already in a preempt-disabled section).

Nope, we don't have one. It was supposed to work like this:
DEFINE_PER_CPU(local_t, counter);

cpu_local_inc(counter);

That would do incl in x86, local_t could even be a long[3] (one for hardirq,
one for softirq, one for user context). But there were issues:

1) It didn't work on dynamic percpu allocs, which was much of the interesting
use (Tejun is fixing this bit right now)
2) The x86 version wasn't optimized anyway,
3) Everyone did atomic_long_inc(), so the ftrace code assumed it would be nmi
safe (tho atomic_t isn't nmi-safe on some archs anyway), so the long[3]
method would break them,
4) The long[3] version was overkill for networking, which doesn't need hardirq
so we'd want another variant of local_t plus all the ops,
5) Some people didn't want long: Christoph had a more generic but more complex
version,
6) It's still not used anywhere in the tree (tho local_t is), so there's no
reason to stick to the current semantics.
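
For illustration, a sketch of the long[3] idea mentioned above (local3_t and
the helpers are made-up names, not an existing API; the struct is meant to be
the element type of a per-cpu variable, and it uses in_irq()/in_softirq()
from linux/hardirq.h):

typedef struct {
	long v[3];	/* [0] process context, [1] softirq, [2] hardirq */
} local3_t;

static inline void local3_inc(local3_t *l)
{
	/* each context only ever touches its own slot, so a plain
	 * non-atomic increment is enough */
	l->v[in_irq() ? 2 : in_softirq() ? 1 : 0]++;
}

static inline long local3_read(const local3_t *l)
{
	return l->v[0] + l->v[1] + l->v[2];
}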

> For example, net/ipv4/route.c has :
>
> static DEFINE_PER_CPU(struct rt_cache_stat, rt_cache_stat);
> #define RT_CACHE_STAT_INC(field) \
> (__raw_get_cpu_var(rt_cache_stat).field++)
>
> We could use percpu_add(rt_cache_stat.field, 1) instead, only if percpu_add()
> is not the generic one.

Yep, but this one is different from the SNMP stats, which need softirq vs
user context safety. This is where I start wondering how many interfaces
we're going to have...

Sorry to add more questions than answers :(
Rusty.

2009-04-02 14:12:45

by Jeremy Fitzhardinge

Subject: Re: [PATCH] x86: percpu_to_op() misses memory and flags clobbers

Herbert Xu wrote:
> Jeremy Fitzhardinge <[email protected]> wrote:
>
>> There's probably not a lot of value in this. The Intel and AMD
>> optimisation guides tend to deprecate inc/dec in favour of using
>> add/sub, because the former can cause pipeline stalls due to its partial
>> flags update.
>>
>
> Is this still the case on the latest Intel CPUs?
>

Yes:

Assembly/Compiler Coding Rule 32. (M impact, H generality) INC and DEC
instructions should be replaced with ADD or SUB instructions, because ADD
and SUB overwrite all flags, whereas INC and DEC do not, therefore creating
false dependencies on earlier instructions that set the flags.

J

2009-04-03 00:39:01

by Tejun Heo

Subject: Re: [PATCH] percpu: convert SNMP mibs to new infra

Eric Dumazet wrote:
...
> #define percpu_read(var) percpu_from_op("mov", per_cpu__##var)
> #define percpu_write(var, val) percpu_to_op("mov", per_cpu__##var, val)
> #define percpu_add(var, val) percpu_to_op("add", per_cpu__##var, val)
> +#define percpu_ptr_add(var, val) percpu_to_op("add", *(var), val)
> +#define percpu_ptr_inc(var) percpu_ptr_add(var, 1)
> +#define percpu_ptr_dec(var) percpu_ptr_add(var, -1)
> #define percpu_sub(var, val) percpu_to_op("sub", per_cpu__##var, val)
> #define percpu_and(var, val) percpu_to_op("and", per_cpu__##var, val)
> #define percpu_or(var, val) percpu_to_op("or", per_cpu__##var, val)

x86 part looks fine to me.

> diff --git a/include/net/snmp.h b/include/net/snmp.h
> index 57c9362..1ba584b 100644
> --- a/include/net/snmp.h
> +++ b/include/net/snmp.h
> @@ -123,15 +123,31 @@ struct linux_xfrm_mib {
> };
>
> /*
> - * FIXME: On x86 and some other CPUs the split into user and softirq parts
> + * On x86 and some other CPUs the split into user and softirq parts
> * is not needed because addl $1,memory is atomic against interrupts (but
> - * atomic_inc would be overkill because of the lock cycles). Wants new
> - * nonlocked_atomic_inc() primitives -AK
> + * atomic_inc would be overkill because of the lock cycles).
> */
> +#ifdef CONFIG_X86
> +# define SNMP_ARRAY_SZ 1
> +#else
> +# define SNMP_ARRAY_SZ 2
> +#endif

This is quite hacky but, well, for the time being...

Thanks.

--
tejun

2009-04-03 17:11:21

by Ingo Molnar

Subject: Re: [PATCH] percpu: convert SNMP mibs to new infra


* Eric Dumazet <[email protected]> wrote:

> Ingo Molnar wrote:
> > * Tejun Heo <[email protected]> wrote:
> >
> >> Hello, Eric, Ingo.
> >>
> >> Eric Dumazet wrote:
> >>> diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
> >>> index aee103b..6b82f6b 100644
> >>> --- a/arch/x86/include/asm/percpu.h
> >>> +++ b/arch/x86/include/asm/percpu.h
> >>> @@ -135,6 +135,9 @@ do { \
> >>> #define percpu_read(var) percpu_from_op("mov", per_cpu__##var)
> >>> #define percpu_write(var, val) percpu_to_op("mov", per_cpu__##var, val)
> >>> #define percpu_add(var, val) percpu_to_op("add", per_cpu__##var, val)
> >>> +#define indir_percpu_add(var, val) percpu_to_op("add", *(var), val)
> >>> +#define indir_percpu_inc(var) percpu_to_op("add", *(var), 1)
> >>> +#define indir_percpu_dec(var) percpu_to_op("add", *(var), -1)
> >>> #define percpu_sub(var, val) percpu_to_op("sub", per_cpu__##var, val)
> >>> #define percpu_and(var, val) percpu_to_op("and", per_cpu__##var, val)
> >>> #define percpu_or(var, val) percpu_to_op("or", per_cpu__##var, val)
> >> The final goal is to unify static and dynamic accesses but we
> >> aren't there yet, so, for the time being, we'll need some interim
> >> solutions. I would prefer percpu_ptr_add() tho.
> >
> > Yep, that's the standard naming scheme for new APIs: generic to
> > specific, left to right.
> >
>
> Here is a second version of the patch, with the percpu_ptr_xxx convention,
> in a more polished form (snmp_mib_free() was forgotten in the previous RFC).
>
> Thank you all
>
> [PATCH] percpu: convert SNMP mibs to new infra
>
> Some arches can use the percpu infrastructure for safe changes to mibs
> (percpu_add() is safe against preemption and interrupts), but
> we want the real thing (a single instruction), not an emulation.
>
> On arches still using an emulation, it's better to keep the two views
> per mib and the preemption disable/enable.
>
> This shrinks the size of the mibs by 50%, and also shrinks the vmlinux text size
> (minimal IPv4 config):
>
> $ size vmlinux.old vmlinux.new
> text data bss dec hex filename
> 4308458 561092 1728512 6598062 64adae vmlinux.old
> 4303834 561092 1728512 6593438 649b9e vmlinux.new

Wow, that's pretty impressive!

> Signed-off-by: Eric Dumazet <[email protected]>
> ---
> arch/x86/include/asm/percpu.h | 3 +++

Acked-by: Ingo Molnar <[email protected]>

As far as x86 goes, feel free to pick it up into any of the
networking trees; these bits are easily merged, and it's probably
best if the patch stays in a single piece - it looks compact enough,
and if it breaks, it's going to break in networking code.

Ingo