2015-04-23 13:26:12

by Gavin Guo

Subject: [PATCH v3] mm/slab_common: Support the slub_debug boot option on specific object size

The slub_debug=PU,kmalloc-xx boot option cannot work because in
create_kmalloc_caches() the s->name is assigned after
create_kmalloc_cache() has been called. The name is NULL inside
create_kmalloc_cache(), so kmem_cache_flags() does not set the
slub_debug flags in s->flags. Fix this by setting up a kmalloc_names
string array for the initialization and deleting the dynamic name
creation for kmalloc_caches.

v2->v3
- Adopted suggestion from Andrew Morton to delete the kmalloc-96/192 special
cases and merge the code into a single initialization loop.
- Redefined kmalloc_names.
- Added KMALLOC_LOOP_LOW as the for-loop start index when initializing the
kmalloc_caches objects.

v1->v2
- Adopted suggestion from Christoph to delete the dynamic name creation
for kmalloc_caches.

Signed-off-by: Gavin Guo <[email protected]>
---
include/linux/slab.h | 22 +++++++++++++++++++
mm/slab_common.c | 62 +++++++++++++++++++++++++++++++++-------------------
2 files changed, 61 insertions(+), 23 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index ffd24c8..96f0ea5 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -153,8 +153,30 @@ size_t ksize(const void *);
#define ARCH_KMALLOC_MINALIGN ARCH_DMA_MINALIGN
#define KMALLOC_MIN_SIZE ARCH_DMA_MINALIGN
#define KMALLOC_SHIFT_LOW ilog2(ARCH_DMA_MINALIGN)
+/*
+ * KMALLOC_LOOP_LOW is the start index of the for loop that creates the
+ * kmalloc_caches objects in create_kmalloc_caches(). The first two entries
+ * are 96 and 192: kmalloc_index() returns 1 (96) when KMALLOC_MIN_SIZE <= 32
+ * and 2 (192) when KMALLOC_MIN_SIZE <= 64. If KMALLOC_MIN_SIZE is bigger
+ * than 64, kmalloc-96 and kmalloc-192 do not need to be initialized and the
+ * loop starts directly at KMALLOC_SHIFT_LOW.
+ */
+#if KMALLOC_MIN_SIZE <= 32
+#define KMALLOC_LOOP_LOW 1
+#elif KMALLOC_MIN_SIZE <= 64
+#define KMALLOC_LOOP_LOW 2
+#else
+#define KMALLOC_LOOP_LOW KMALLOC_SHIFT_LOW
+#endif
+
#else
#define ARCH_KMALLOC_MINALIGN __alignof__(unsigned long long)
+/*
+ * The KMALLOC_MIN_SIZE of slub/slab/slob is 2^3/2^5/2^3, so even when slab
+ * is used KMALLOC_MIN_SIZE <= 32 and kmalloc-96 and kmalloc-192 should
+ * also be initialized.
+ */
+#define KMALLOC_LOOP_LOW 1
#endif

/*
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 999bb34..05c6439 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -784,6 +784,31 @@ struct kmem_cache *kmalloc_slab(size_t size, gfp_t flags)
}

/*
+ * The kmalloc_names is to make slub_debug=,kmalloc-xx option work in the boot
+ * time. The kmalloc_index() support to 2^26=64MB. So, the final entry of the
+ * table is kmalloc-67108864.
+ */
+static struct {
+ const char *name;
+ unsigned long size;
+} const kmalloc_names[] __initconst = {
+ {NULL, 0}, {"kmalloc-96", 96},
+ {"kmalloc-192", 192}, {"kmalloc-8", 8},
+ {"kmalloc-16", 16}, {"kmalloc-32", 32},
+ {"kmalloc-64", 64}, {"kmalloc-128", 128},
+ {"kmalloc-256", 256}, {"kmalloc-512", 512},
+ {"kmalloc-1024", 1024}, {"kmalloc-2048", 2048},
+ {"kmalloc-4096", 4096}, {"kmalloc-8192", 8192},
+ {"kmalloc-16384", 16384}, {"kmalloc-32768", 32768},
+ {"kmalloc-65536", 65536}, {"kmalloc-131072", 131072},
+ {"kmalloc-262144", 262144}, {"kmalloc-524288", 524288},
+ {"kmalloc-1048576", 1048576}, {"kmalloc-2097152", 2097152},
+ {"kmalloc-4194304", 4194304}, {"kmalloc-8388608", 8388608},
+ {"kmalloc-16777216", 16777216}, {"kmalloc-33554432", 33554432},
+ {"kmalloc-67108864", 67108864}
+};
+
+/*
* Create the kmalloc array. Some of the regular kmalloc arrays
* may already have been created because they were needed to
* enable allocations for slab creation.
@@ -833,39 +858,30 @@ void __init create_kmalloc_caches(unsigned long flags)
for (i = 128 + 8; i <= 192; i += 8)
size_index[size_index_elem(i)] = 8;
}
- for (i = KMALLOC_SHIFT_LOW; i <= KMALLOC_SHIFT_HIGH; i++) {
+ for (i = KMALLOC_LOOP_LOW; i <= KMALLOC_SHIFT_HIGH; i++) {
if (!kmalloc_caches[i]) {
- kmalloc_caches[i] = create_kmalloc_cache(NULL,
- 1 << i, flags);
+ kmalloc_caches[i] = create_kmalloc_cache(
+ kmalloc_names[i].name,
+ kmalloc_names[i].size,
+ flags);
}

/*
- * Caches that are not of the two-to-the-power-of size.
- * These have to be created immediately after the
- * earlier power of two caches
+ * "i == 2" is the "kmalloc-192" case, the last of the special cases
+ * in the initialization, and the point where the loop jumps to the
+ * minimum object size. In the slab allocator KMALLOC_SHIFT_LOW is 5,
+ * so the loop must skip 2^3 and 2^4 and go straight to allocating
+ * 2^5. If ARCH_DMA_MINALIGN is defined, the minimum size may be
+ * larger than 2^5, and the same jump also skips over that empty
+ * gap.
*/
- if (KMALLOC_MIN_SIZE <= 32 && !kmalloc_caches[1] && i == 6)
- kmalloc_caches[1] = create_kmalloc_cache(NULL, 96, flags);
-
- if (KMALLOC_MIN_SIZE <= 64 && !kmalloc_caches[2] && i == 7)
- kmalloc_caches[2] = create_kmalloc_cache(NULL, 192, flags);
+ if (i == 2)
+ i = (KMALLOC_SHIFT_LOW - 1);
}

/* Kmalloc array is now usable */
slab_state = UP;

- for (i = 0; i <= KMALLOC_SHIFT_HIGH; i++) {
- struct kmem_cache *s = kmalloc_caches[i];
- char *n;
-
- if (s) {
- n = kasprintf(GFP_NOWAIT, "kmalloc-%d", kmalloc_size(i));
-
- BUG_ON(!n);
- s->name = n;
- }
- }
-
#ifdef CONFIG_ZONE_DMA
for (i = 0; i <= KMALLOC_SHIFT_HIGH; i++) {
struct kmem_cache *s = kmalloc_caches[i];
--
2.0.0


Subject: Re: [PATCH v3] mm/slab_common: Support the slub_debug boot option on specific object size

On Thu, 23 Apr 2015, Gavin Guo wrote:

> - if (KMALLOC_MIN_SIZE <= 64 && !kmalloc_caches[2] && i == 7)
> - kmalloc_caches[2] = create_kmalloc_cache(NULL, 192, flags);
> + if (i == 2)
> + i = (KMALLOC_SHIFT_LOW - 1);
> }

Ok this is weird but there is a comment.

Acked-by: Christoph Lameter <[email protected]>

2015-04-23 20:51:11

by Andrew Morton

Subject: Re: [PATCH v3] mm/slab_common: Support the slub_debug boot option on specific object size

On Thu, 23 Apr 2015 21:26:00 +0800 Gavin Guo <[email protected]> wrote:

> The slub_debug=PU,kmalloc-xx boot option cannot work because in
> create_kmalloc_caches() the s->name is assigned after
> create_kmalloc_cache() has been called. The name is NULL inside
> create_kmalloc_cache(), so kmem_cache_flags() does not set the
> slub_debug flags in s->flags. Fix this by setting up a kmalloc_names
> string array for the initialization and deleting the dynamic name
> creation for kmalloc_caches.

This code is still pretty horrid :(

What's all that stuff fiddling around with size_index[], magic
constants everywhere. Surely there's some way of making this nice and
clear: table-driven, robust to changes.

> +/*
> + * KMALLOC_LOOP_LOW is the start index of the for loop that creates the
> + * kmalloc_caches objects in create_kmalloc_caches(). The first two entries
> + * are 96 and 192: kmalloc_index() returns 1 (96) when KMALLOC_MIN_SIZE <= 32
> + * and 2 (192) when KMALLOC_MIN_SIZE <= 64. If KMALLOC_MIN_SIZE is bigger
> + * than 64, kmalloc-96 and kmalloc-192 do not need to be initialized and the
> + * loop starts directly at KMALLOC_SHIFT_LOW.
> + */
> +#if KMALLOC_MIN_SIZE <= 32
> +#define KMALLOC_LOOP_LOW 1
> +#elif KMALLOC_MIN_SIZE <= 64
> +#define KMALLOC_LOOP_LOW 2
> +#else
> +#define KMALLOC_LOOP_LOW KMALLOC_SHIFT_LOW
> +#endif
> +
> #else
> #define ARCH_KMALLOC_MINALIGN __alignof__(unsigned long long)
> +/*
> + * The KMALLOC_MIN_SIZE of slub/slab/slob is 2^3/2^5/2^3, so even when slab
> + * is used KMALLOC_MIN_SIZE <= 32 and kmalloc-96 and kmalloc-192 should
> + * also be initialized.
> + */
> +#define KMALLOC_LOOP_LOW 1

Hopefully we can remove the above.

> /*
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 999bb34..05c6439 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -784,6 +784,31 @@ struct kmem_cache *kmalloc_slab(size_t size, gfp_t flags)
> }
>
> /*
> + * The kmalloc_names is to make slub_debug=,kmalloc-xx option work in the boot
> + * time. The kmalloc_index() support to 2^26=64MB. So, the final entry of the
> + * table is kmalloc-67108864.
> + */
> +static struct {
> + const char *name;
> + unsigned long size;
> +} const kmalloc_names[] __initconst = {

OK. This table is __initconst, so the kstrtoul() trick isn't needed.

> + {NULL, 0}, {"kmalloc-96", 96},
> + {"kmalloc-192", 192}, {"kmalloc-8", 8},
> + {"kmalloc-16", 16}, {"kmalloc-32", 32},
> + {"kmalloc-64", 64}, {"kmalloc-128", 128},
> + {"kmalloc-256", 256}, {"kmalloc-512", 512},
> + {"kmalloc-1024", 1024}, {"kmalloc-2048", 2048},
> + {"kmalloc-4096", 4096}, {"kmalloc-8192", 8192},
> + {"kmalloc-16384", 16384}, {"kmalloc-32768", 32768},
> + {"kmalloc-65536", 65536}, {"kmalloc-131072", 131072},
> + {"kmalloc-262144", 262144}, {"kmalloc-524288", 524288},
> + {"kmalloc-1048576", 1048576}, {"kmalloc-2097152", 2097152},
> + {"kmalloc-4194304", 4194304}, {"kmalloc-8388608", 8388608},
> + {"kmalloc-16777216", 16777216}, {"kmalloc-33554432", 33554432},
> + {"kmalloc-67108864", 67108864}
> +};
> +
>
> ...
>
> + if (i == 2)
> + i = (KMALLOC_SHIFT_LOW - 1);

Can we get rid of this by using something like

static struct {
const char *name;
unsigned long size;
} const kmalloc_names[] __initconst = {
// {NULL, 0},
{"kmalloc-96", 96},
{"kmalloc-192", 192},
#if KMALLOC_MIN_SIZE <= 8
{"kmalloc-8", 8},
#endif
#if KMALLOC_MIN_SIZE <= 16
{"kmalloc-16", 16},
#endif
#if KMALLOC_MIN_SIZE <= 32
{"kmalloc-32", 32},
#endif
{"kmalloc-64", 64},
{"kmalloc-128", 128},
{"kmalloc-256", 256},
{"kmalloc-512", 512},
{"kmalloc-1024", 1024},
{"kmalloc-2048", 2048},
{"kmalloc-4096", 4096},
{"kmalloc-8192", 8192},
...
};

(remove the zeroeth entry from kmalloc_names)

(rename kmalloc_names to kmalloc_info or something: it now holds more
than names)

and make the initialization loop do

for (i = 0; i < ARRAY_SIZE(kmalloc_names); i++)
kmalloc_caches[i] = ...


Why does the initialization code do the

if (!kmalloc_caches[i]) {

test? Can any of these really be initialized? If so, why is it
legitimate for create_kmalloc_caches() to go altering size_index[]
after some caches have already been set up?


If this is all done right, KMALLOC_LOOP_LOW, KMALLOC_SHIFT_LOW and
KMALLOC_SHIFT_HIGH should just go away - we should be able to implement
all the logic using only KMALLOC_MIN_SIZE and MAX_ORDER.


Perhaps the manipulation of size_index[] should be done while we're
initializing the caches, perhaps driven by additional fields in
kmalloc_info.


Finally, why does create_kmalloc_caches() use GFP_NOWAIT? We're in
__init code! Makes no sense. Or if it *does* make sense, the reason
should be clearly commented.

2015-04-23 21:01:23

by Andrew Morton

Subject: Re: [PATCH v3] mm/slab_common: Support the slub_debug boot option on specific object size

On Thu, 23 Apr 2015 21:26:00 +0800 Gavin Guo <[email protected]> wrote:

> The slub_debug=PU,kmalloc-xx boot option cannot work because in
> create_kmalloc_caches() the s->name is assigned after
> create_kmalloc_cache() has been called. The name is NULL inside
> create_kmalloc_cache(), so kmem_cache_flags() does not set the
> slub_debug flags in s->flags. Fix this by setting up a kmalloc_names
> string array for the initialization and deleting the dynamic name
> creation for kmalloc_caches.

I suppose we should worry about the bug fix before the cleanups, so I'm
merging this.

I did this on top:

--- a/mm/slab_common.c
+++ a/mm/slab_common.c
@@ -784,14 +784,14 @@ struct kmem_cache *kmalloc_slab(size_t s
}

/*
- * The kmalloc_names is to make slub_debug=,kmalloc-xx option work in the boot
- * time. The kmalloc_index() support to 2^26=64MB. So, the final entry of the
- * table is kmalloc-67108864.
+ * kmalloc_info[] is to make slub_debug=,kmalloc-xx option work at boot time.
+ * kmalloc_index() supports up to 2^26=64MB, so the final entry of the table is
+ * kmalloc-67108864.
*/
static struct {
const char *name;
unsigned long size;
-} const kmalloc_names[] __initconst = {
+} const kmalloc_info[] __initconst = {
{NULL, 0}, {"kmalloc-96", 96},
{"kmalloc-192", 192}, {"kmalloc-8", 8},
{"kmalloc-16", 16}, {"kmalloc-32", 32},
@@ -861,8 +861,8 @@ void __init create_kmalloc_caches(unsign
for (i = KMALLOC_LOOP_LOW; i <= KMALLOC_SHIFT_HIGH; i++) {
if (!kmalloc_caches[i]) {
kmalloc_caches[i] = create_kmalloc_cache(
- kmalloc_names[i].name,
- kmalloc_names[i].size,
+ kmalloc_info[i].name,
+ kmalloc_info[i].size,
flags);
}

_


This patch conflicts significantly with Daniel's "slab: correct
size_index table before replacing the bootstrap kmem_cache_node". I've
reworked Daniel's patch as below. Please review?

Regarding merge timing: could the slab developers please let me know
whether they believe that "slab: correct size_index table before
replacing the bootstrap kmem_cache_node" and/or "mm/slab_common:
support the slub_debug boot option on specific object size" should be
merged into 4.1? -stable?

Thanks.


From: Daniel Sanders <[email protected]>
Subject: slab: correct size_index table before replacing the bootstrap kmem_cache_node

This patch moves the initialization of the size_index table slightly
earlier so that the first few kmem_cache_node's can be safely allocated
when KMALLOC_MIN_SIZE is large.

There are currently two ways to generate indices into kmalloc_caches (via
kmalloc_index() and via the size_index table in slab_common.c) and on some
arches (possibly only MIPS) they potentially disagree with each other
until create_kmalloc_caches() has been called. It seems that the
intention is that the size_index table is a fast equivalent to
kmalloc_index() and that create_kmalloc_caches() patches the table to
return the correct value for the cases where kmalloc_index()'s
if-statements apply.

The failing sequence was:
* kmalloc_caches contains NULL elements
* kmem_cache_init initialises the element that 'struct
kmem_cache_node' will be allocated to. For 32-bit Mips, this is a
56-byte struct and kmalloc_index returns KMALLOC_SHIFT_LOW (7).
* init_list is called which calls kmalloc_node to allocate a 'struct
kmem_cache_node'.
* kmalloc_slab selects the kmem_caches element using
size_index[size_index_elem(size)]. For MIPS, size is 56, and the
expression returns 6.
* This element of kmalloc_caches is NULL and allocation fails.
* If it had not already failed, it would have called
create_kmalloc_caches() at this point which would have changed
size_index[size_index_elem(size)] to 7.

I don't believe the bug to be LLVM specific but GCC doesn't normally
encounter the problem. I haven't been able to identify exactly what GCC
is doing better (probably inlining) but it seems that GCC is managing to
optimize to the point that it eliminates the problematic allocations.
This theory is supported by the fact that GCC can be made to fail in the
same way by changing inline, __inline, __inline__, and __always_inline in
include/linux/compiler-gcc.h such that they don't actually inline things.

Signed-off-by: Daniel Sanders <[email protected]>
Acked-by: Pekka Enberg <[email protected]>
Acked-by: Christoph Lameter <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
---

mm/slab.c | 1 +
mm/slab.h | 1 +
mm/slab_common.c | 36 +++++++++++++++++++++---------------
mm/slub.c | 1 +
4 files changed, 24 insertions(+), 15 deletions(-)

diff -puN mm/slab.c~slab-correct-size_index-table-before-replacing-the-bootstrap-kmem_cache_node mm/slab.c
--- a/mm/slab.c~slab-correct-size_index-table-before-replacing-the-bootstrap-kmem_cache_node
+++ a/mm/slab.c
@@ -1454,6 +1454,7 @@ void __init kmem_cache_init(void)
kmalloc_caches[INDEX_NODE] = create_kmalloc_cache("kmalloc-node",
kmalloc_size(INDEX_NODE), ARCH_KMALLOC_FLAGS);
slab_state = PARTIAL_NODE;
+ setup_kmalloc_cache_index_table();

slab_early_init = 0;

diff -puN mm/slab.h~slab-correct-size_index-table-before-replacing-the-bootstrap-kmem_cache_node mm/slab.h
--- a/mm/slab.h~slab-correct-size_index-table-before-replacing-the-bootstrap-kmem_cache_node
+++ a/mm/slab.h
@@ -71,6 +71,7 @@ unsigned long calculate_alignment(unsign

#ifndef CONFIG_SLOB
/* Kmalloc array related functions */
+void setup_kmalloc_cache_index_table(void);
void create_kmalloc_caches(unsigned long);

/* Find the kmalloc slab corresponding for a certain size */
diff -puN mm/slab_common.c~slab-correct-size_index-table-before-replacing-the-bootstrap-kmem_cache_node mm/slab_common.c
--- a/mm/slab_common.c~slab-correct-size_index-table-before-replacing-the-bootstrap-kmem_cache_node
+++ a/mm/slab_common.c
@@ -809,25 +809,20 @@ static struct {
};

/*
- * Create the kmalloc array. Some of the regular kmalloc arrays
- * may already have been created because they were needed to
- * enable allocations for slab creation.
+ * Patch up the size_index table if we have strange large alignment
+ * requirements for the kmalloc array. This is only the case for
+ * MIPS it seems. The standard arches will not generate any code here.
+ *
+ * Largest permitted alignment is 256 bytes due to the way we
+ * handle the index determination for the smaller caches.
+ *
+ * Make sure that nothing crazy happens if someone starts tinkering
+ * around with ARCH_KMALLOC_MINALIGN
*/
-void __init create_kmalloc_caches(unsigned long flags)
+void __init setup_kmalloc_cache_index_table(void)
{
int i;

- /*
- * Patch up the size_index table if we have strange large alignment
- * requirements for the kmalloc array. This is only the case for
- * MIPS it seems. The standard arches will not generate any code here.
- *
- * Largest permitted alignment is 256 bytes due to the way we
- * handle the index determination for the smaller caches.
- *
- * Make sure that nothing crazy happens if someone starts tinkering
- * around with ARCH_KMALLOC_MINALIGN
- */
BUILD_BUG_ON(KMALLOC_MIN_SIZE > 256 ||
(KMALLOC_MIN_SIZE & (KMALLOC_MIN_SIZE - 1)));

@@ -858,6 +853,17 @@ void __init create_kmalloc_caches(unsign
for (i = 128 + 8; i <= 192; i += 8)
size_index[size_index_elem(i)] = 8;
}
+}
+
+/*
+ * Create the kmalloc array. Some of the regular kmalloc arrays
+ * may already have been created because they were needed to
+ * enable allocations for slab creation.
+ */
+void __init create_kmalloc_caches(unsigned long flags)
+{
+ int i;
+
for (i = KMALLOC_LOOP_LOW; i <= KMALLOC_SHIFT_HIGH; i++) {
if (!kmalloc_caches[i]) {
kmalloc_caches[i] = create_kmalloc_cache(
diff -puN mm/slub.c~slab-correct-size_index-table-before-replacing-the-bootstrap-kmem_cache_node mm/slub.c
--- a/mm/slub.c~slab-correct-size_index-table-before-replacing-the-bootstrap-kmem_cache_node
+++ a/mm/slub.c
@@ -3700,6 +3700,7 @@ void __init kmem_cache_init(void)
kmem_cache_node = bootstrap(&boot_kmem_cache_node);

/* Now we can use the kmem_cache to allocate kmalloc slabs */
+ setup_kmalloc_cache_index_table();
create_kmalloc_caches(0);

#ifdef CONFIG_SMP
_

Subject: Re: [PATCH v3] mm/slab_common: Support the slub_debug boot option on specific object size

On Thu, 23 Apr 2015, Andrew Morton wrote:
> >
> > + if (i == 2)
> > + i = (KMALLOC_SHIFT_LOW - 1);
>
> Can we get rid of this by using something like

Nope, the index is an ilog2 value of the size. The table changes would
not preserve the mapping of the index to the power-of-two sizes.

> static struct {
> const char *name;
> unsigned long size;
> } const kmalloc_names[] __initconst = {
> // {NULL, 0},
> {"kmalloc-96", 96},
> {"kmalloc-192", 192},
> #if KMALLOC_MIN_SIZE <= 8
> {"kmalloc-8", 8},
> #endif
> #if KMALLOC_MIN_SIZE <= 16
> {"kmalloc-16", 16},
> #endif
> #if KMALLOC_MIN_SIZE <= 32
> {"kmalloc-32", 32},
> #endif
> {"kmalloc-64", 64},
> {"kmalloc-128", 128},
> {"kmalloc-256", 256},
> {"kmalloc-512", 512},
> {"kmalloc-1024", 1024},
> {"kmalloc-2048", 2048},
> {"kmalloc-4096", 4096},
> {"kmalloc-8192", 8192},
> ...
> };
>

> Why does the initialization code do the
>
> if (!kmalloc_caches[i]) {
>
> test? Can any of these really be initialized? If so, why is it
> legitimate for create_kmalloc_caches() to go altering size_index[]
> after some caches have already been set up?

Because we know which sizes we need during bootstrap, and the initial
caches that are needed to create the others are populated first. If they
have already been handled by the earliest bootstrap code then we should
not repopulate them later.

> Finally, why does create_kmalloc_caches() use GFP_NOWAIT? We're in
> __init code! Makes no sense. Or if it *does* make sense, the reason
> should be clearly commented.

Well, I was told by Pekka at some point to use it exactly because it was
init code. The slab system is not really functional that early, so I
doubt it makes much of a difference.


2015-04-24 14:36:12

by Daniel Sanders

Subject: RE: [PATCH v3] mm/slab_common: Support the slub_debug boot option on specific object size

> This patch conflicts significantly with Daniel's "slab: correct
> size_index table before replacing the bootstrap kmem_cache_node". I've
> reworked Daniel's patch as below. Please review?

Your revised version of my patch looks good to me. I've also re-tested LLVMLinux
with Gavin's and my (revised) patch applied and it's still working.