2011-05-24 14:46:38

by Richard Kennedy

Subject: [PATCH] x86: reorder mm_context_t to remove x86_64 alignment padding & so shrink mm_struct

Reorder mm_context_t to remove alignment padding on 64-bit builds,
shrinking its size from 64 to 56 bytes.

This allows mm_struct to shrink from 840 to 832 bytes, so it uses one
fewer cache line and packs more objects into each slab when using SLUB.
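
As an aside, here is a minimal userspace sketch of where the 8 bytes
go (not kernel code: fake_mutex is a hypothetical stand-in for struct
mutex, whose 8-byte alignment is all that matters here):

#include <stdio.h>

struct fake_mutex { long word; };	/* 8-byte-aligned stand-in */

struct before {				/* original member order */
	void *ldt;			/* offset  0, size 8 */
	int size;			/* offset  8, size 4 */
					/* 4-byte hole: lock wants 8-byte alignment */
	struct fake_mutex lock;		/* offset 16 */
	void *vdso;			/* offset 24 */
	unsigned short ia32_compat;	/* offset 32, size 2 */
					/* 6 bytes of tail padding */
};

struct after {				/* reordered member order */
	void *ldt;			/* offset  0, size 8 */
	int size;			/* offset  8, size 4 */
	unsigned short ia32_compat;	/* offset 12, fills the hole */
					/* only 2 bytes of padding before lock */
	struct fake_mutex lock;		/* offset 16 */
	void *vdso;			/* offset 24 */
};

int main(void)
{
	printf("before: %zu\n", sizeof(struct before));	/* prints 40 */
	printf("after : %zu\n", sizeof(struct after));	/* prints 32 */
	return 0;
}

The 8-byte difference matches the 64 -> 56 shrink above; the absolute
sizes differ because the real struct mutex is bigger than 8 bytes.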


slabinfo mm_struct reports
before :-

Sizes (bytes)        Slabs
-----------------------------------
Object :     840     Total  :    7
SlabObj:     896     Full   :    1
SlabSiz:   16384     Partial:    4
Loss   :      56     CpuSlab:    2
Align  :      64     Objects:   18

after :-

Sizes (bytes)        Slabs
-----------------------------------
Object :     832     Total  :    7
SlabObj:     832     Full   :    1
SlabSiz:   16384     Partial:    4
Loss   :       0     CpuSlab:    2
Align  :      64     Objects:   19
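
(The arithmetic behind those numbers: with 64-byte alignment an
840-byte object gets rounded up to 896 bytes, hence the per-object
loss of 56, and a 16384-byte slab holds 16384/896 = 18 of them; at
832 bytes the object is already a multiple of 64, so there is no
per-object loss and 16384/832 = 19 objects fit per slab.)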

Signed-off-by: Richard Kennedy <[email protected]>

---
patch against v2.6.39
compiled & tested on x86_64.
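
If you want to double-check the layout on your own build, pahole
(from the dwarves package) reports holes and padding directly, e.g.:

	pahole -C mm_struct vmlinux

assuming vmlinux was built with CONFIG_DEBUG_INFO.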

regards
Richard



diff --git a/arch/x86/include/asm/mmu.h b/arch/x86/include/asm/mmu.h
index aeff3e8..5f55e69 100644
--- a/arch/x86/include/asm/mmu.h
+++ b/arch/x86/include/asm/mmu.h
@@ -11,14 +11,14 @@
 typedef struct {
 	void *ldt;
 	int size;
-	struct mutex lock;
-	void *vdso;
 
 #ifdef CONFIG_X86_64
 	/* True if mm supports a task running in 32 bit compatibility mode. */
 	unsigned short ia32_compat;
 #endif
 
+	struct mutex lock;
+	void *vdso;
 } mm_context_t;
 
 #ifdef CONFIG_SMP


2011-05-25 21:35:19

by Richard Kennedy

Subject: [tip:x86/urgent] x86: Reorder mm_context_t to remove x86_64 alignment padding and thus shrink mm_struct

Commit-ID: af6a25f0e1ec0265c267e6ee4513925eaba6d0ed
Gitweb: http://git.kernel.org/tip/af6a25f0e1ec0265c267e6ee4513925eaba6d0ed
Author: Richard Kennedy <[email protected]>
AuthorDate: Tue, 24 May 2011 14:49:59 +0100
Committer: Ingo Molnar <[email protected]>
CommitDate: Wed, 25 May 2011 16:16:41 +0200

x86: Reorder mm_context_t to remove x86_64 alignment padding and thus shrink mm_struct

Reorder mm_context_t to remove alignment padding on 64-bit
builds, shrinking its size from 64 to 56 bytes.

This allows mm_struct to shrink from 840 to 832 bytes, so it
uses one fewer cache line and packs more objects into each slab
when using SLUB.

slabinfo mm_struct reports
before :-

Sizes (bytes)        Slabs
-----------------------------------
Object :     840     Total  :    7
SlabObj:     896     Full   :    1
SlabSiz:   16384     Partial:    4
Loss   :      56     CpuSlab:    2
Align  :      64     Objects:   18

after :-

Sizes (bytes)        Slabs
-----------------------------------
Object :     832     Total  :    7
SlabObj:     832     Full   :    1
SlabSiz:   16384     Partial:    4
Loss   :       0     CpuSlab:    2
Align  :      64     Objects:   19

Signed-off-by: Richard Kennedy <[email protected]>
Cc: [email protected]
Cc: Linus Torvalds <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Pekka Enberg <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/x86/include/asm/mmu.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/mmu.h b/arch/x86/include/asm/mmu.h
index aeff3e8..5f55e69 100644
--- a/arch/x86/include/asm/mmu.h
+++ b/arch/x86/include/asm/mmu.h
@@ -11,14 +11,14 @@
 typedef struct {
 	void *ldt;
 	int size;
-	struct mutex lock;
-	void *vdso;
 
 #ifdef CONFIG_X86_64
 	/* True if mm supports a task running in 32 bit compatibility mode. */
 	unsigned short ia32_compat;
 #endif
 
+	struct mutex lock;
+	void *vdso;
 } mm_context_t;
 
 #ifdef CONFIG_SMP