Changes in v3:
- Fix slub_kunit test failures by using the newly introduced
slab_in_kunit_test(), which doesn't increase slab_errors.
- Fix the condition for whether to check the free pointer, and
set "ret" correctly.
- Collect Reviewed-by tags from Vlastimil Babka.
- Link to v2: https://lore.kernel.org/r/[email protected]
Changes in v2:
- Change check_object() to do all the checks without skipping, report
their specific error findings in check_bytes_and_report() but not
print_trailer(). Once all checks were done, if any found an error,
print the trailer once from check_object(), suggested by Vlastimil.
- Consolidate the two cases with flags & SLAB_RED_ZONE and make the
complex conditional expressions a little prettier and add comments
about extending right redzone, per Vlastimil.
- Add Reviewed-by from Feng Tang.
- Link to v1: https://lore.kernel.org/r/[email protected]
Hello,
This series includes a minor fix and cleanup of slub_debug; please see
the commits for details.
Signed-off-by: Chengming Zhou <[email protected]>
---
Chengming Zhou (3):
slab: make check_object() more consistent
slab: don't put freepointer outside of object if only orig_size
slab: delete useless RED_INACTIVE and RED_ACTIVE
include/linux/poison.h | 7 ++--
mm/slub.c | 77 ++++++++++++++++++++++++++++----------------
tools/include/linux/poison.h | 7 ++--
3 files changed, 53 insertions(+), 38 deletions(-)
---
base-commit: 1613e604df0cd359cf2a7fbd9be7a0bcfacfabd0
change-id: 20240528-b4-slab-debug-1d8179fc996a
Best regards,
--
Chengming Zhou <[email protected]>
Now check_object() calls check_bytes_and_report() multiple times to
check every section of the object it cares about, like the left and right
redzones, object poison, padding poison and the freepointer. It aborts
the checking process and returns 0 once it finds an error.
There are two inconsistencies in check_object(): the alignment padding
check and the object padding check. For these we only print the error
messages but don't return 0 to tell callers that something is wrong
and needs to be handled. Please see alloc_debug_processing() and
free_debug_processing() for details.
We want to do all checks without skipping, so use a local variable
"ret" to save each check result and change check_bytes_and_report() to
only report its specific error findings. Then at the end of check_object(),
print the trailer once if any check found an error.
Suggested-by: Vlastimil Babka <[email protected]>
Signed-off-by: Chengming Zhou <[email protected]>
---
mm/slub.c | 62 +++++++++++++++++++++++++++++++++++++++++---------------------
1 file changed, 41 insertions(+), 21 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 0809760cf789..45f89d4bb687 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -788,8 +788,24 @@ static bool slab_add_kunit_errors(void)
kunit_put_resource(resource);
return true;
}
+
+static bool slab_in_kunit_test(void)
+{
+ struct kunit_resource *resource;
+
+ if (!kunit_get_current_test())
+ return false;
+
+ resource = kunit_find_named_resource(current->kunit_test, "slab_errors");
+ if (!resource)
+ return false;
+
+ kunit_put_resource(resource);
+ return true;
+}
#else
static inline bool slab_add_kunit_errors(void) { return false; }
+static inline bool slab_in_kunit_test(void) { return false; }
#endif
static inline unsigned int size_from_object(struct kmem_cache *s)
@@ -1192,8 +1208,6 @@ static int check_bytes_and_report(struct kmem_cache *s, struct slab *slab,
pr_err("0x%p-0x%p @offset=%tu. First byte 0x%x instead of 0x%x\n",
fault, end - 1, fault - addr,
fault[0], value);
- print_trailer(s, slab, object);
- add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
skip_bug_print:
restore_bytes(s, what, value, fault, end);
@@ -1302,15 +1316,16 @@ static int check_object(struct kmem_cache *s, struct slab *slab,
u8 *p = object;
u8 *endobject = object + s->object_size;
unsigned int orig_size, kasan_meta_size;
+ int ret = 1;
if (s->flags & SLAB_RED_ZONE) {
if (!check_bytes_and_report(s, slab, object, "Left Redzone",
object - s->red_left_pad, val, s->red_left_pad))
- return 0;
+ ret = 0;
if (!check_bytes_and_report(s, slab, object, "Right Redzone",
endobject, val, s->inuse - s->object_size))
- return 0;
+ ret = 0;
if (slub_debug_orig_size(s) && val == SLUB_RED_ACTIVE) {
orig_size = get_orig_size(s, object);
@@ -1319,14 +1334,15 @@ static int check_object(struct kmem_cache *s, struct slab *slab,
!check_bytes_and_report(s, slab, object,
"kmalloc Redzone", p + orig_size,
val, s->object_size - orig_size)) {
- return 0;
+ ret = 0;
}
}
} else {
if ((s->flags & SLAB_POISON) && s->object_size < s->inuse) {
- check_bytes_and_report(s, slab, p, "Alignment padding",
+ if (!check_bytes_and_report(s, slab, p, "Alignment padding",
endobject, POISON_INUSE,
- s->inuse - s->object_size);
+ s->inuse - s->object_size))
+ ret = 0;
}
}
@@ -1342,27 +1358,25 @@ static int check_object(struct kmem_cache *s, struct slab *slab,
!check_bytes_and_report(s, slab, p, "Poison",
p + kasan_meta_size, POISON_FREE,
s->object_size - kasan_meta_size - 1))
- return 0;
+ ret = 0;
if (kasan_meta_size < s->object_size &&
!check_bytes_and_report(s, slab, p, "End Poison",
p + s->object_size - 1, POISON_END, 1))
- return 0;
+ ret = 0;
}
/*
* check_pad_bytes cleans up on its own.
*/
- check_pad_bytes(s, slab, p);
+ if (!check_pad_bytes(s, slab, p))
+ ret = 0;
}
- if (!freeptr_outside_object(s) && val == SLUB_RED_ACTIVE)
- /*
- * Object and freepointer overlap. Cannot check
- * freepointer while object is allocated.
- */
- return 1;
-
- /* Check free pointer validity */
- if (!check_valid_pointer(s, slab, get_freepointer(s, p))) {
+ /*
+ * Cannot check freepointer while object is allocated if
+ * object and freepointer overlap.
+ */
+ if ((freeptr_outside_object(s) || val != SLUB_RED_ACTIVE) &&
+ !check_valid_pointer(s, slab, get_freepointer(s, p))) {
object_err(s, slab, p, "Freepointer corrupt");
/*
* No choice but to zap it and thus lose the remainder
@@ -1370,9 +1384,15 @@ static int check_object(struct kmem_cache *s, struct slab *slab,
* another error because the object count is now wrong.
*/
set_freepointer(s, p, NULL);
- return 0;
+ ret = 0;
}
- return 1;
+
+ if (!ret && !slab_in_kunit_test()) {
+ print_trailer(s, slab, object);
+ add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
+ }
+
+ return ret;
}
static int check_slab(struct kmem_cache *s, struct slab *slab)
--
2.45.1
These seem useless since we use the SLUB_RED_INACTIVE and SLUB_RED_ACTIVE,
so just delete them, no functional change.
Reviewed-by: Vlastimil Babka <[email protected]>
Signed-off-by: Chengming Zhou <[email protected]>
---
include/linux/poison.h | 7 ++-----
mm/slub.c | 4 ++--
tools/include/linux/poison.h | 7 ++-----
3 files changed, 6 insertions(+), 12 deletions(-)
diff --git a/include/linux/poison.h b/include/linux/poison.h
index 1f0ee2459f2a..9c1a035af97c 100644
--- a/include/linux/poison.h
+++ b/include/linux/poison.h
@@ -38,11 +38,8 @@
* Magic nums for obj red zoning.
* Placed in the first word before and the first word after an obj.
*/
-#define RED_INACTIVE 0x09F911029D74E35BULL /* when obj is inactive */
-#define RED_ACTIVE 0xD84156C5635688C0ULL /* when obj is active */
-
-#define SLUB_RED_INACTIVE 0xbb
-#define SLUB_RED_ACTIVE 0xcc
+#define SLUB_RED_INACTIVE 0xbb /* when obj is inactive */
+#define SLUB_RED_ACTIVE 0xcc /* when obj is active */
/* ...and for poisoning */
#define POISON_INUSE 0x5a /* for use-uninitialised poisoning */
diff --git a/mm/slub.c b/mm/slub.c
index 1551a0345650..efa7c88d8d8c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1230,8 +1230,8 @@ static int check_bytes_and_report(struct kmem_cache *s, struct slab *slab,
* Padding is extended by another word if Redzoning is enabled and
* object_size == inuse.
*
- * We fill with 0xbb (RED_INACTIVE) for inactive objects and with
- * 0xcc (RED_ACTIVE) for objects in use.
+ * We fill with 0xbb (SLUB_RED_INACTIVE) for inactive objects and with
+ * 0xcc (SLUB_RED_ACTIVE) for objects in use.
*
* object + s->inuse
* Meta data starts here.
diff --git a/tools/include/linux/poison.h b/tools/include/linux/poison.h
index 2e6338ac5eed..e530e54046c9 100644
--- a/tools/include/linux/poison.h
+++ b/tools/include/linux/poison.h
@@ -47,11 +47,8 @@
* Magic nums for obj red zoning.
* Placed in the first word before and the first word after an obj.
*/
-#define RED_INACTIVE 0x09F911029D74E35BULL /* when obj is inactive */
-#define RED_ACTIVE 0xD84156C5635688C0ULL /* when obj is active */
-
-#define SLUB_RED_INACTIVE 0xbb
-#define SLUB_RED_ACTIVE 0xcc
+#define SLUB_RED_INACTIVE 0xbb /* when obj is inactive */
+#define SLUB_RED_ACTIVE 0xcc /* when obj is active */
/* ...and for poisoning */
#define POISON_INUSE 0x5a /* for use-uninitialised poisoning */
--
2.45.1
On 6/7/24 10:40 AM, Chengming Zhou wrote:
> Now check_object() calls check_bytes_and_report() multiple times to
> check every section of the object it cares about, like left and right
> redzones, object poison, paddings poison and freepointer. It will
> abort the checking process and return 0 once it finds an error.
>
> There are two inconsistencies in check_object(), which are alignment
> padding checking and object padding checking. We only print the error
> messages but don't return 0 to tell callers that something is wrong
> and needs to be handled. Please see alloc_debug_processing() and
> free_debug_processing() for details.
>
> We want to do all checks without skipping, so use a local variable
> "ret" to save each check result and change check_bytes_and_report() to
> only report specific error findings. Then at end of check_object(),
> print the trailer once if any found an error.
>
> Suggested-by: Vlastimil Babka <[email protected]>
> Signed-off-by: Chengming Zhou <[email protected]>
Reviewed-by: Vlastimil Babka <[email protected]>
Thanks.
Commit 946fa0dbf2d8 ("mm/slub: extend redzone check to extra
allocated kmalloc space than requested") extends the right redzone
when allocating with orig_size < object_size, so we can't overlay the
freepointer in the object space in this case.
But the code looks like it forgot to check SLAB_RED_ZONE, since there
won't be an extended right redzone if only orig_size is enabled.
While we are here, make these complex conditional expressions a little
prettier and add some comments about extending the right redzone when
slub_debug_orig_size() is enabled.
Reviewed-by: Feng Tang <[email protected]>
Reviewed-by: Vlastimil Babka <[email protected]>
Signed-off-by: Chengming Zhou <[email protected]>
---
mm/slub.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 45f89d4bb687..1551a0345650 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5169,10 +5169,9 @@ static int calculate_sizes(struct kmem_cache *s)
*/
s->inuse = size;
- if (slub_debug_orig_size(s) ||
- (flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)) ||
- ((flags & SLAB_RED_ZONE) && s->object_size < sizeof(void *)) ||
- s->ctor) {
+ if ((flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)) || s->ctor ||
+ ((flags & SLAB_RED_ZONE) &&
+ (s->object_size < sizeof(void *) || slub_debug_orig_size(s)))) {
/*
* Relocate free pointer after the object if it is not
* permitted to overwrite the first word of the object on
@@ -5180,7 +5179,9 @@ static int calculate_sizes(struct kmem_cache *s)
*
* This is the case if we do RCU, have a constructor or
* destructor, are poisoning the objects, or are
- * redzoning an object smaller than sizeof(void *).
+ * redzoning an object smaller than sizeof(void *) or are
+ * redzoning an object with slub_debug_orig_size() enabled,
+ * in which case the right redzone may be extended.
*
* The assumption that s->offset >= s->inuse means free
* pointer is outside of the object is used in the
--
2.45.1
On 6/7/24 10:40 AM, Chengming Zhou wrote:
> Changes in v3:
> - Fix slub_kunit tests failures by using new introduced
> slab_in_kunit_test(), which doesn't increase slab_errors.
> - Fix the condition of whether to check free pointer and
> set "ret" correctly.
> - Collect Reviewed-by tags from Vlastimil Babka.
> - Link to v2: https://lore.kernel.org/r/[email protected]
>
> Changes in v2:
> - Change check_object() to do all the checks without skipping, report
> their specific error findings in check_bytes_and_report() but not
> print_trailer(). Once all checks were done, if any found an error,
> print the trailer once from check_object(), suggested by Vlastimil.
> - Consolidate the two cases with flags & SLAB_RED_ZONE and make the
> complex conditional expressions a little prettier and add comments
> about extending right redzone, per Vlastimil.
> - Add Reviewed-by from Feng Tang.
> - Link to v1: https://lore.kernel.org/r/[email protected]
>
> Hello,
>
> This series includes minor fix and cleanup of slub_debug, please see
> the commits for details.
>
> Signed-off-by: Chengming Zhou <[email protected]>
applied to slab/for-next, thanks
> ---
> Chengming Zhou (3):
> slab: make check_object() more consistent
> slab: don't put freepointer outside of object if only orig_size
> slab: delete useless RED_INACTIVE and RED_ACTIVE
>
> include/linux/poison.h | 7 ++--
> mm/slub.c | 77 ++++++++++++++++++++++++++++----------------
> tools/include/linux/poison.h | 7 ++--
> 3 files changed, 53 insertions(+), 38 deletions(-)
> ---
> base-commit: 1613e604df0cd359cf2a7fbd9be7a0bcfacfabd0
> change-id: 20240528-b4-slab-debug-1d8179fc996a
>
> Best regards,
On Fri, 7 Jun 2024, Chengming Zhou wrote:
> There are two inconsistencies in check_object(), which are alignment
> padding checking and object padding checking. We only print the error
> messages but don't return 0 to tell callers that something is wrong
> and needs to be handled. Please see alloc_debug_processing() and
> free_debug_processing() for details.
If the error is in the padding and the redzones are ok then it's likely
that the objects are ok. So we can actually continue with this slab page
instead of isolating it.
We isolate it in the case that the redzones have been violated because
that suggests someone overwrote the end of the object f.e. In that case
objects may be corrupted. It's best to isolate the slab and hope for the
best.
If it was just the padding then the assumption is that this may be a
scribble. So clean it up and continue.
On 6/10/24 7:07 PM, Christoph Lameter (Ampere) wrote:
> On Fri, 7 Jun 2024, Chengming Zhou wrote:
>
>> There are two inconsistencies in check_object(), which are alignment
>> padding checking and object padding checking. We only print the error
>> messages but don't return 0 to tell callers that something is wrong
>> and needs to be handled. Please see alloc_debug_processing() and
>> free_debug_processing() for details.
>
> If the error is in the padding and the redzones are ok then its likely
> that the objects are ok. So we can actually continue with this slab page
> instead of isolating it.
>
> We isolate it in the case that the redzones have been violated because
> that suggests someone overwrote the end of the object f.e. In that case
> objects may be corrupted. Its best to isolate the slab and hope for the
> best.
>
> If it was just the padding then the assumption is that this may be a
> scribble. So clean it up and continue.
Hm, is it really worth such nuance? We enabled debugging and actually hit a
bug. I think it's best to keep things as much as they were and not try to
allow further changes. This e.g. allows more detailed analysis if somebody
later notices the bug report and decides to get a kdump crash dump (or use
drgn on a live system). Maybe we should even stop doing the restore_bytes()
stuff, and prevent any further frees in the slab page from happening if
possible without affecting the fast paths (right now we mark everything as
used but don't prevent further frees of objects that were actually
allocated before).
Even if some security people enable parts of slub debugging for security
reasons, it is my impression they would rather panic/reboot or have memory
leaked than try to salvage the slab page? (CC Kees)
On Mon, Jun 10, 2024 at 10:54:26PM +0200, Vlastimil Babka wrote:
> On 6/10/24 7:07 PM, Christoph Lameter (Ampere) wrote:
> > On Fri, 7 Jun 2024, Chengming Zhou wrote:
> >
> >> There are two inconsistencies in check_object(), which are alignment
> >> padding checking and object padding checking. We only print the error
> >> messages but don't return 0 to tell callers that something is wrong
> >> and needs to be handled. Please see alloc_debug_processing() and
> >> free_debug_processing() for details.
> >
> > If the error is in the padding and the redzones are ok then its likely
> > that the objects are ok. So we can actually continue with this slab page
> > instead of isolating it.
> >
> > We isolate it in the case that the redzones have been violated because
> > that suggests someone overwrote the end of the object f.e. In that case
> > objects may be corrupted. Its best to isolate the slab and hope for the
> > best.
> >
> > If it was just the padding then the assumption is that this may be a
> > scribble. So clean it up and continue.
"a scribble"? :P If padding got touched, something has the wrong size
for an object write. It should be treated just like the redzones. We
want maximal coverage if this checking is enabled.
> Hm is it really worth such nuance? We enabled debugging and actually hit a
> bug. I think it's best to keep things as much as they were and not try to
> allow further changes. This e.g. allows more detailed analysis if somebody
> later notices the bug report and decides to get a kdump crash dump (or use
> drgn on live system). Maybe we should even stop doing the restore_bytes()
> stuff, and prevent any further frees in the slab page to happen if possible
> without affecting fast paths (now we mark everything as used but don't
> prevent further frees of objects that were actually allocated before).
>
> Even if some security people enable parts of slub debugging for security
> people it is my impression they would rather panic/reboot or have memory
> leaked than trying to salvage the slab page? (CC Kees)
Yeah, if we're doing these checks, we should do the checks fully.
Padding is just extra redzone. :)
--
Kees Cook
On Mon, 10 Jun 2024, Vlastimil Babka wrote:
> Even if some security people enable parts of slub debugging for security
> people it is my impression they would rather panic/reboot or have memory
> leaked than trying to salvage the slab page? (CC Kees)
In the past these resilience features have been used to allow the
continued operation of a broken kernel.
So first the kernel crashed with some obscure oops in the allocator due
to metadata corruption.
One can then put a slub_debug option on the kernel command line, which
will result in detailed error reports on what caused the corruption. It
will also activate resilience measures that will often allow continued
operation until a fix becomes available.
On Tue, Jun 11, 2024 at 03:52:49PM -0700, Christoph Lameter (Ampere) wrote:
> On Mon, 10 Jun 2024, Vlastimil Babka wrote:
>
> > Even if some security people enable parts of slub debugging for security
> > people it is my impression they would rather panic/reboot or have memory
> > leaked than trying to salvage the slab page? (CC Kees)
>
> In the past these resilience features have been used to allow the continued
> operation of a broken kernel.
>
> So first the Kernel crashed with some obscure oops in the allocator due to
> metadata corruption.
>
> One can then put a slub_debug option on the kernel command line which will
> result in detailed error reports on what caused the corruption. It will also
> activate resilience measures that will often allow the continued operation
> until a fix becomes available.
Sure, as long as it's up to the deployment. I just don't want padding
errors unilaterally ignored. If it's useful, there's the
CHECK_DATA_CORRUPTION() macro. That'll let a deployment escalate the
issue from WARN to BUG, etc.
--
Kees Cook
On 2024/6/12 06:52, Christoph Lameter (Ampere) wrote:
> On Mon, 10 Jun 2024, Vlastimil Babka wrote:
>
>> Even if some security people enable parts of slub debugging for security
>> people it is my impression they would rather panic/reboot or have memory
>> leaked than trying to salvage the slab page? (CC Kees)
>
> In the past these resilience features have been used to allow the continued operation of a broken kernel.
>
> So first the Kernel crashed with some obscure oops in the allocator due to metadata corruption.
>
> One can then put a slub_debug option on the kernel command line which will result in detailed error reports on what caused the corruption. It will also activate resilience measures that will often allow the continued operation until a fix becomes available.
This reminds me that we can't toggle slub_debug options for a kmem_cache at
runtime. I'm wondering whether it would be useful to be able to
enable/disable debug options at runtime?
We could implement this feature using per-slab debug options, so each slab
has an independent execution path: slabs with debug options enabled go
through the slow path, while others can still take the fast path.
Not sure if it's useful in some cases? Maybe KFENCE is enough? Just my
random thoughts.
Thanks.