2010-06-02 22:31:00

by Jeff Kirsher

Subject: [net-next-2.6 PATCH 1/2] skbuff: add check for non-linear to warn_if_lro and needs_linearize

From: Alexander Duyck <[email protected]>

We can avoid an unnecessary cache miss by checking if the skb is non-linear
before accessing gso_size/gso_type in skb_warn_if_lro; the same can also be
done to avoid a cache miss on nr_frags if data_len is 0.

Signed-off-by: Alexander Duyck <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
---

include/linux/skbuff.h | 3 ++-
net/core/dev.c | 7 ++++---
2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index bf243fc..645e78d 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -2129,7 +2129,8 @@ static inline bool skb_warn_if_lro(const struct sk_buff *skb)
/* LRO sets gso_size but not gso_type, whereas if GSO is really
* wanted then gso_type will be set. */
struct skb_shared_info *shinfo = skb_shinfo(skb);
- if (shinfo->gso_size != 0 && unlikely(shinfo->gso_type == 0)) {
+ if (skb_is_nonlinear(skb) && shinfo->gso_size != 0 &&
+ unlikely(shinfo->gso_type == 0)) {
__skb_warn_lro_forwarding(skb);
return true;
}
diff --git a/net/core/dev.c b/net/core/dev.c
index d03470f..9f7c407 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -2080,9 +2080,10 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
static inline int skb_needs_linearize(struct sk_buff *skb,
struct net_device *dev)
{
- return (skb_has_frags(skb) && !(dev->features & NETIF_F_FRAGLIST)) ||
- (skb_shinfo(skb)->nr_frags && (!(dev->features & NETIF_F_SG) ||
- illegal_highdma(dev, skb)));
+ return skb_is_nonlinear(skb) &&
+ ((skb_has_frags(skb) && !(dev->features & NETIF_F_FRAGLIST)) ||
+ (skb_shinfo(skb)->nr_frags && (!(dev->features & NETIF_F_SG) ||
+ illegal_highdma(dev, skb))));
}

/**
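
For context, the reason the non-linear check is cheap: skb_is_nonlinear()
reads only skb->data_len, a field of the sk_buff itself, while gso_size,
gso_type and nr_frags live in the shared info block at the end of the
linear data buffer, typically on a different cache line. A minimal sketch,
paraphrased from include/linux/skbuff.h of this era rather than taken from
the patch:

/* Reads only the sk_buff struct itself, touching no extra cache line. */
static inline int skb_is_nonlinear(const struct sk_buff *skb)
{
	return skb->data_len;
}

/* The shared info block sits at the end of the linear data buffer,
 * usually on a separate cache line from the hot sk_buff fields. */
#define skb_shinfo(SKB)	((struct skb_shared_info *)(skb_end_pointer(SKB)))

With the reordered checks, a linear skb (data_len == 0) bails out before
skb_shinfo() is ever dereferenced.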


2010-06-02 22:25:42

by Jeff Kirsher

Subject: [net-next-2.6 PATCH 2/2] x86: Align skb w/ start of cache line on newer core 2/Xeon Arch

From: Alexander Duyck <[email protected]>

x86 architectures can handle unaligned accesses in hardware, and it has
been shown that unaligned DMA accesses can be expensive on Nehalem
architectures. As such we should override NET_IP_ALIGN and NET_SKB_PAD
to resolve this issue.

Signed-off-by: Alexander Duyck <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
---

arch/x86/include/asm/system.h | 12 ++++++++++++
1 files changed, 12 insertions(+), 0 deletions(-)

diff --git a/arch/x86/include/asm/system.h b/arch/x86/include/asm/system.h
index b8fe48e..8acb44e 100644
--- a/arch/x86/include/asm/system.h
+++ b/arch/x86/include/asm/system.h
@@ -457,4 +457,16 @@ static inline void rdtsc_barrier(void)
alternative(ASM_NOP3, "lfence", X86_FEATURE_LFENCE_RDTSC);
}

+#ifdef CONFIG_MCORE2
+/*
+ * We handle most unaligned accesses in hardware. On the other hand
+ * unaligned DMA can be quite expensive on some Nehalem processors.
+ *
+ * Based on this we disable the IP header alignment in network drivers.
+ * We also modify NET_SKB_PAD to be a cacheline in size, thus maintaining
+ * cacheline alignment of buffers.
+ */
+#define NET_IP_ALIGN 0
+#define NET_SKB_PAD L1_CACHE_BYTES
+#endif
#endif /* _ASM_X86_SYSTEM_H */
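
For context, here is a sketch of how these two defines shape a typical
driver receive path; the helper name is hypothetical and the code is not
from the patch. netdev_alloc_skb() reserves NET_SKB_PAD bytes of headroom
internally, and drivers traditionally reserve NET_IP_ALIGN on top so the
IP header lands on a 4-byte boundary. With NET_IP_ALIGN forced to 0 and
NET_SKB_PAD equal to a full cache line, skb->data (the DMA target) stays
cacheline aligned, assuming the underlying allocation is cacheline aligned:

/* Hypothetical driver helper, for illustration only. */
static struct sk_buff *rx_alloc_sketch(struct net_device *dev,
				       unsigned int len)
{
	/* netdev_alloc_skb() already leaves NET_SKB_PAD of headroom. */
	struct sk_buff *skb = netdev_alloc_skb(dev, len + NET_IP_ALIGN);

	if (!skb)
		return NULL;
	/* A no-op when NET_IP_ALIGN is 0, leaving skb->data exactly
	 * NET_SKB_PAD bytes into the buffer, i.e. on a cache line
	 * boundary when NET_SKB_PAD == L1_CACHE_BYTES. */
	skb_reserve(skb, NET_IP_ALIGN);
	return skb;
}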

2010-06-02 22:44:17

by Eric Dumazet

Subject: Re: [net-next-2.6 PATCH 2/2] x86: Align skb w/ start of cache line on newer core 2/Xeon Arch

On Wednesday, June 2, 2010 at 15:25 -0700, Jeff Kirsher wrote:
> From: Alexander Duyck <[email protected]>
>
> x86 architectures can handle unaligned accesses in hardware, and it has
> been shown that unaligned DMA accesses can be expensive on Nehalem
> architectures. As such we should override NET_IP_ALIGN and NET_SKB_PAD
> to resolve this issue.
>
> Signed-off-by: Alexander Duyck <[email protected]>
> Signed-off-by: Jeff Kirsher <[email protected]>
> ---
>
> arch/x86/include/asm/system.h | 12 ++++++++++++
> 1 files changed, 12 insertions(+), 0 deletions(-)
>
> diff --git a/arch/x86/include/asm/system.h b/arch/x86/include/asm/system.h
> index b8fe48e..8acb44e 100644
> --- a/arch/x86/include/asm/system.h
> +++ b/arch/x86/include/asm/system.h
> @@ -457,4 +457,16 @@ static inline void rdtsc_barrier(void)
> alternative(ASM_NOP3, "lfence", X86_FEATURE_LFENCE_RDTSC);
> }
>
> +#ifdef CONFIG_MCORE2
> +/*
> + * We handle most unaligned accesses in hardware. On the other hand
> + * unaligned DMA can be quite expensive on some Nehalem processors.
> + *
> + * Based on this we disable the IP header alignment in network drivers.
> + * We also modify NET_SKB_PAD to be a cacheline in size, thus maintaining
> + * cacheline alignment of buffers.
> + */
> +#define NET_IP_ALIGN 0
> +#define NET_SKB_PAD L1_CACHE_BYTES
> +#endif
> #endif /* _ASM_X86_SYSTEM_H */
>
> --

But... L1_CACHE_BYTES is 64 on MCORE2, so this matches the current
NET_SKB_PAD definition...

#ifndef NET_SKB_PAD
#define NET_SKB_PAD 64
#endif


2010-06-02 23:55:30

by Duyck, Alexander H

Subject: RE: [net-next-2.6 PATCH 2/2] x86: Align skb w/ start of cache line on newer core 2/Xeon Arch

Eric Dumazet wrote:
> On Wednesday, June 2, 2010 at 15:25 -0700, Jeff Kirsher wrote:
>> From: Alexander Duyck <[email protected]>
>>
>> x86 architectures can handle unaligned accesses in hardware, and it
>> has been shown that unaligned DMA accesses can be expensive on
>> Nehalem architectures. As such we should override NET_IP_ALIGN and
>> NET_SKB_PAD to resolve this issue.
>>
>> Signed-off-by: Alexander Duyck <[email protected]>
>> Signed-off-by: Jeff Kirsher <[email protected]>
>> ---
>>
>> arch/x86/include/asm/system.h | 12 ++++++++++++
>> 1 files changed, 12 insertions(+), 0 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/system.h b/arch/x86/include/asm/system.h
>> index b8fe48e..8acb44e 100644
>> --- a/arch/x86/include/asm/system.h
>> +++ b/arch/x86/include/asm/system.h
>> @@ -457,4 +457,16 @@ static inline void rdtsc_barrier(void)
>> alternative(ASM_NOP3, "lfence", X86_FEATURE_LFENCE_RDTSC);
>> }
>>
>> +#ifdef CONFIG_MCORE2
>> +/*
>> + * We handle most unaligned accesses in hardware. On the other hand
>> + * unaligned DMA can be quite expensive on some Nehalem processors.
>> + *
>> + * Based on this we disable the IP header alignment in network drivers.
>> + * We also modify NET_SKB_PAD to be a cacheline in size, thus maintaining
>> + * cacheline alignment of buffers.
>> + */
>> +#define NET_IP_ALIGN 0
>> +#define NET_SKB_PAD L1_CACHE_BYTES
>> +#endif
>> #endif /* _ASM_X86_SYSTEM_H */
>>
>> --
>
> But... L1_CACHE_BYTES is 64 on MCORE2, so this matches the current
> NET_SKB_PAD definition...
>
> #ifndef NET_SKB_PAD
> #define NET_SKB_PAD 64
> #endif

I admit the current definition is redundant, but NET_SKB_PAD had been 32 until your recent change of the value, and prior to 2.6.30 the value was 16. If the value were to change again it would silently break the cacheline alignment which is provided by this patch. If we were to define NET_SKB_PAD using L1_CACHE_BYTES in skbuff.h then I might be more inclined to pull the NET_SKB_PAD change, but right now I would prefer to treat NET_SKB_PAD as a magic number that coincidentally is the same size as an L1 cache line on MCORE2.

Thanks,

Alex
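
For reference, what Alex describes would amount to something like this in
include/linux/skbuff.h (a sketch only; Eric posts the real patch below,
which additionally keeps the 32-byte minimum the stack expects):

#ifndef NET_SKB_PAD
#define NET_SKB_PAD L1_CACHE_BYTES
#endif

Deriving the default from L1_CACHE_BYTES would keep the generic value and
any arch override from silently diverging again.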

2010-06-05 09:54:15

by David Miller

Subject: Re: [net-next-2.6 PATCH 1/2] skbuff: add check for non-linear to warn_if_lro and needs_linearize

From: Jeff Kirsher <[email protected]>
Date: Wed, 02 Jun 2010 15:24:37 -0700

> From: Alexander Duyck <[email protected]>
>
> We can avoid an unnecessary cache miss by checking if the skb is non-linear
> before accessing gso_size/gso_type in skb_warn_if_lro; the same can also be
> done to avoid a cache miss on nr_frags if data_len is 0.
>
> Signed-off-by: Alexander Duyck <[email protected]>
> Signed-off-by: Jeff Kirsher <[email protected]>

Applied.

2010-06-11 05:19:59

by David Miller

Subject: Re: [net-next-2.6 PATCH 2/2] x86: Align skb w/ start of cache line on newer core 2/Xeon Arch

From: "Duyck, Alexander H" <[email protected]>
Date: Wed, 2 Jun 2010 16:55:16 -0700

> Eric Dumazet wrote:
>>
>> But... L1_CACHE_BYTES is 64 on MCORE2, so this matches current
>> NET_SKB_PAD definition...
>>
>> #ifndef NET_SKB_PAD
>> #define NET_SKB_PAD 64
>> #endif
>
> I admit the current definition is redundant, but NET_SKB_PAD had
> been 32 until your recent change of the value, and prior to 2.6.30
> the value was 16. If the value were to change again it would
> silently break the cacheline alignment which is provided by this
> patch. If we were to define NET_SKB_PAD using L1_CACHE_BYTES in
> skbuff.h then I might be more inclined to pull the NET_SKB_PAD
> change, but right now I would prefer to treat NET_SKB_PAD as a magic
> number that coincidentally is the same size as an L1 cache line on MCORE2.

Eric, why don't we do that? Make NET_SKB_PAD's define L1_CACHE_BYTES.

Reading the comments you added when the default value was changed to
64, this seems to even be your overall intent. :-)

2010-06-14 12:57:21

by Eric Dumazet

Subject: [PATCH net-next-2.6] net: NET_SKB_PAD should depend on L1_CACHE_BYTES

On Thursday, June 10, 2010 at 22:20 -0700, David Miller wrote:

> Eric, why don't we do that? Make NET_SKB_PAD's define L1_CACHE_BYTES.
>
> Reading the comments you added when the default value was changed to
> 64, this seems to even be your overall intent. :-)

Of course right you are ;)

Thanks !

[PATCH net-next-2.6] net: NET_SKB_PAD should depend on L1_CACHE_BYTES

In old kernels, NET_SKB_PAD was defined to 16.

Then commit d6301d3dd1c2 (net: Increase default NET_SKB_PAD to 32) and
commit 18e8c134f4e9 (net: Increase NET_SKB_PAD to 64 bytes) increased it,
first to 32 and then to 64.

While the first patch was governed by network stack needs, the second was
driven more by performance issues on current hardware. The real intent was
to align data on a cache line boundary.

So use max(32, L1_CACHE_BYTES) instead of 64, to be more generic.

Remove microblaze's and powerpc's own NET_SKB_PAD definitions.

Thanks to Alexander Duyck and David Miller for their comments.

Suggested-by: David Miller <[email protected]>
Signed-off-by: Eric Dumazet <[email protected]>
---
arch/microblaze/include/asm/system.h | 3 ---
arch/powerpc/include/asm/system.h | 3 ---
include/linux/skbuff.h | 8 +++++---
3 files changed, 5 insertions(+), 9 deletions(-)

diff --git a/arch/microblaze/include/asm/system.h b/arch/microblaze/include/asm/system.h
index 48c4f03..81e1f7d 100644
--- a/arch/microblaze/include/asm/system.h
+++ b/arch/microblaze/include/asm/system.h
@@ -101,10 +101,7 @@ extern struct dentry *of_debugfs_root;
* MicroBlaze doesn't handle unaligned accesses in hardware.
*
* Based on this we force the IP header alignment in network drivers.
- * We also modify NET_SKB_PAD to be a cacheline in size, thus maintaining
- * cacheline alignment of buffers.
*/
#define NET_IP_ALIGN 2
-#define NET_SKB_PAD L1_CACHE_BYTES

#endif /* _ASM_MICROBLAZE_SYSTEM_H */
diff --git a/arch/powerpc/include/asm/system.h b/arch/powerpc/include/asm/system.h
index a6297c6..6c294ac 100644
--- a/arch/powerpc/include/asm/system.h
+++ b/arch/powerpc/include/asm/system.h
@@ -515,11 +515,8 @@ __cmpxchg_local(volatile void *ptr, unsigned long old, unsigned long new,
* powers of 2 writes until it reaches sufficient alignment).
*
* Based on this we disable the IP header alignment in network drivers.
- * We also modify NET_SKB_PAD to be a cacheline in size, thus maintaining
- * cacheline alignment of buffers.
*/
#define NET_IP_ALIGN 0
-#define NET_SKB_PAD L1_CACHE_BYTES

#define cmpxchg64(ptr, o, n) \
({ \
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 122d083..ac74ee0 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -1414,12 +1414,14 @@ static inline int skb_network_offset(const struct sk_buff *skb)
*
* Various parts of the networking layer expect at least 32 bytes of
* headroom, you should not reduce this.
- * With RPS, we raised NET_SKB_PAD to 64 so that get_rps_cpus() fetches span
- * a 64 bytes aligned block to fit modern (>= 64 bytes) cache line sizes
+ *
+ * Using max(32, L1_CACHE_BYTES) makes sense (especially with RPS)
+ * to reduce the average number of cache lines per packet.
+ * get_rps_cpus(), for example, only accesses one 64-byte aligned block:
* NET_IP_ALIGN(2) + ethernet_header(14) + IP_header(20/40) + ports(8)
*/
#ifndef NET_SKB_PAD
-#define NET_SKB_PAD 64
+#define NET_SKB_PAD max(32, L1_CACHE_BYTES)
#endif

extern int ___pskb_trim(struct sk_buff *skb, unsigned int len);
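
For context, NET_SKB_PAD is only ever used in expression context, for
example by __netdev_alloc_skb(), so defining it through the kernel's max()
macro (a GCC statement expression, which cannot appear in file-scope
initializers or static array sizes) is safe here. A paraphrase of that
consumer as it looked in net-next at the time, not part of this patch:

struct sk_buff *__netdev_alloc_skb(struct net_device *dev,
				   unsigned int length, gfp_t gfp_mask)
{
	int node = dev->dev.parent ? dev_to_node(dev->dev.parent) : -1;
	struct sk_buff *skb;

	/* Headroom is added at allocation time and reserved just below,
	 * so NET_SKB_PAD only needs to be a valid runtime expression. */
	skb = __alloc_skb(length + NET_SKB_PAD, gfp_mask, 0, node);
	if (skb) {
		skb_reserve(skb, NET_SKB_PAD);
		skb->dev = dev;
	}
	return skb;
}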

2010-06-16 01:16:49

by David Miller

Subject: Re: [PATCH net-next-2.6] net: NET_SKB_PAD should depend on L1_CACHE_BYTES

From: Eric Dumazet <[email protected]>
Date: Mon, 14 Jun 2010 14:57:14 +0200

> [PATCH net-next-2.6] net: NET_SKB_PAD should depend on L1_CACHE_BYTES
>
> In old kernels, NET_SKB_PAD was defined to 16.
>
> Then commit d6301d3dd1c2 (net: Increase default NET_SKB_PAD to 32) and
> commit 18e8c134f4e9 (net: Increase NET_SKB_PAD to 64 bytes) increased it,
> first to 32 and then to 64.
>
> While the first patch was governed by network stack needs, the second was
> driven more by performance issues on current hardware. The real intent was
> to align data on a cache line boundary.
>
> So use max(32, L1_CACHE_BYTES) instead of 64, to be more generic.
>
> Remove microblaze's and powerpc's own NET_SKB_PAD definitions.
>
> Thanks to Alexander Duyck and David Miller for their comments.
>
> Suggested-by: David Miller <[email protected]>
> Signed-off-by: Eric Dumazet <[email protected]>

Applied, thanks Eric.