2023-03-10 10:32:31

by Vlastimil Babka

Subject: [PATCH 0/7] remove SLOB and allow kfree() with kmem_cache_alloc()

Also in git:
https://git.kernel.org/vbabka/h/slab-remove-slob-v1r1

The SLOB allocator was deprecated in 6.2 so I think we can start
exposing the complete removal in for-next and aim at 6.4 if there are no
complaints.

Besides code cleanup, the main immediate benefit will be allowing the
kfree() family of functions to work on kmem_cache_alloc() objects (Patch
7), which was incompatible with SLOB.

This includes kfree_rcu() so I've updated the comment there to remove
the mention of potential future addition of kmem_cache_free_rcu() as
there should be no need for that now.
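
As an illustration, the kind of pattern this enables (a minimal sketch;
struct foo, its rcu field and foo_cache are made-up names, and the cache
is assumed to be created elsewhere with kmem_cache_create()):

#include <linux/slab.h>
#include <linux/rcupdate.h>

struct foo {
        int a;
        struct rcu_head rcu;
};

/* assumed: foo_cache = kmem_cache_create("foo", sizeof(struct foo), 0, 0, NULL); */
static struct kmem_cache *foo_cache;

static void foo_use(void)
{
        struct foo *f = kmem_cache_alloc(foo_cache, GFP_KERNEL);

        if (!f)
                return;
        /* previously this had to be kmem_cache_free(foo_cache, f) */
        kfree(f);

        f = kmem_cache_alloc(foo_cache, GFP_KERNEL);
        if (f)
                /* frees back to foo_cache after a grace period */
                kfree_rcu(f, rcu);
}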

Otherwise it's straightforward. Patch 2 is a cleanup in net area, that I
can either handle in slab tree or submit in net after SLOB is removed.
Another cleanup in tomoyo is already in the tomoyo tree as that didn't
need to wait until SLOB removal.

Vlastimil Babka (7):
mm/slob: remove CONFIG_SLOB
net: skbuff: remove SLOB-specific ifdefs
mm, page_flags: remove PG_slob_free
mm, pagemap: remove SLOB and SLQB from comments and documentation
mm/slab: remove CONFIG_SLOB code from slab common code
mm/slob: remove slob.c
mm/slab: document kfree() as allowed for kmem_cache_alloc() objects

Documentation/admin-guide/mm/pagemap.rst | 6 +-
Documentation/core-api/memory-allocation.rst | 15 +-
fs/proc/page.c | 5 +-
include/linux/page-flags.h | 4 -
include/linux/rcupdate.h | 6 +-
include/linux/slab.h | 39 -
init/Kconfig | 2 +-
kernel/configs/tiny.config | 1 -
mm/Kconfig | 22 -
mm/Makefile | 1 -
mm/slab.h | 61 --
mm/slab_common.c | 7 +-
mm/slob.c | 757 -------------------
net/core/skbuff.c | 16 -
tools/mm/page-types.c | 6 +-
15 files changed, 23 insertions(+), 925 deletions(-)
delete mode 100644 mm/slob.c

--
2.39.2



2023-03-10 10:32:35

by Vlastimil Babka

Subject: [PATCH 1/7] mm/slob: remove CONFIG_SLOB

Remove SLOB from Kconfig and Makefile. Everything under #ifdef
CONFIG_SLOB, and mm/slob.c is now dead code.

Signed-off-by: Vlastimil Babka <[email protected]>
---
init/Kconfig | 2 +-
kernel/configs/tiny.config | 1 -
mm/Kconfig | 22 ----------------------
mm/Makefile | 1 -
4 files changed, 1 insertion(+), 25 deletions(-)

diff --git a/init/Kconfig b/init/Kconfig
index 1fb5f313d18f..72ac3f66bc27 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -973,7 +973,7 @@ config MEMCG

config MEMCG_KMEM
bool
- depends on MEMCG && !SLOB
+ depends on MEMCG
default y

config BLK_CGROUP
diff --git a/kernel/configs/tiny.config b/kernel/configs/tiny.config
index c2f9c912df1c..144b2bd86b14 100644
--- a/kernel/configs/tiny.config
+++ b/kernel/configs/tiny.config
@@ -7,6 +7,5 @@ CONFIG_KERNEL_XZ=y
# CONFIG_KERNEL_LZO is not set
# CONFIG_KERNEL_LZ4 is not set
# CONFIG_SLAB is not set
-# CONFIG_SLOB_DEPRECATED is not set
CONFIG_SLUB=y
CONFIG_SLUB_TINY=y
diff --git a/mm/Kconfig b/mm/Kconfig
index 4751031f3f05..669399ab693c 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -238,30 +238,8 @@ config SLUB
and has enhanced diagnostics. SLUB is the default choice for
a slab allocator.

-config SLOB_DEPRECATED
- depends on EXPERT
- bool "SLOB (Simple Allocator - DEPRECATED)"
- depends on !PREEMPT_RT
- help
- Deprecated and scheduled for removal in a few cycles. SLUB
- recommended as replacement. CONFIG_SLUB_TINY can be considered
- on systems with 16MB or less RAM.
-
- If you need SLOB to stay, please contact [email protected] and
- people listed in the SLAB ALLOCATOR section of MAINTAINERS file,
- with your use case.
-
- SLOB replaces the stock allocator with a drastically simpler
- allocator. SLOB is generally more space efficient but
- does not perform as well on large systems.
-
endchoice

-config SLOB
- bool
- default y
- depends on SLOB_DEPRECATED
-
config SLUB_TINY
bool "Configure SLUB for minimal memory footprint"
depends on SLUB && EXPERT
diff --git a/mm/Makefile b/mm/Makefile
index 8e105e5b3e29..2d9c1e7f6085 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -81,7 +81,6 @@ obj-$(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP) += hugetlb_vmemmap.o
obj-$(CONFIG_NUMA) += mempolicy.o
obj-$(CONFIG_SPARSEMEM) += sparse.o
obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
-obj-$(CONFIG_SLOB) += slob.o
obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o
obj-$(CONFIG_KSM) += ksm.o
obj-$(CONFIG_PAGE_POISONING) += page_poison.o
--
2.39.2


2023-03-10 10:32:39

by Vlastimil Babka

Subject: [PATCH 6/7] mm/slob: remove slob.c

Remove the SLOB implementation.

RIP SLOB allocator (2006 - 2023)

Signed-off-by: Vlastimil Babka <[email protected]>
---
mm/slob.c | 757 ------------------------------------------------------
1 file changed, 757 deletions(-)
delete mode 100644 mm/slob.c

diff --git a/mm/slob.c b/mm/slob.c
deleted file mode 100644
index fe567fcfa3a3..000000000000
--- a/mm/slob.c
+++ /dev/null
@@ -1,757 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * SLOB Allocator: Simple List Of Blocks
- *
- * Matt Mackall <[email protected]> 12/30/03
- *
- * NUMA support by Paul Mundt, 2007.
- *
- * How SLOB works:
- *
- * The core of SLOB is a traditional K&R style heap allocator, with
- * support for returning aligned objects. The granularity of this
- * allocator is as little as 2 bytes, however typically most architectures
- * will require 4 bytes on 32-bit and 8 bytes on 64-bit.
- *
- * The slob heap is a set of linked list of pages from alloc_pages(),
- * and within each page, there is a singly-linked list of free blocks
- * (slob_t). The heap is grown on demand. To reduce fragmentation,
- * heap pages are segregated into three lists, with objects less than
- * 256 bytes, objects less than 1024 bytes, and all other objects.
- *
- * Allocation from heap involves first searching for a page with
- * sufficient free blocks (using a next-fit-like approach) followed by
- * a first-fit scan of the page. Deallocation inserts objects back
- * into the free list in address order, so this is effectively an
- * address-ordered first fit.
- *
- * Above this is an implementation of kmalloc/kfree. Blocks returned
- * from kmalloc are prepended with a 4-byte header with the kmalloc size.
- * If kmalloc is asked for objects of PAGE_SIZE or larger, it calls
- * alloc_pages() directly, allocating compound pages so the page order
- * does not have to be separately tracked.
- * These objects are detected in kfree() because folio_test_slab()
- * is false for them.
- *
- * SLAB is emulated on top of SLOB by simply calling constructors and
- * destructors for every SLAB allocation. Objects are returned with the
- * 4-byte alignment unless the SLAB_HWCACHE_ALIGN flag is set, in which
- * case the low-level allocator will fragment blocks to create the proper
- * alignment. Again, objects of page-size or greater are allocated by
- * calling alloc_pages(). As SLAB objects know their size, no separate
- * size bookkeeping is necessary and there is essentially no allocation
- * space overhead, and compound pages aren't needed for multi-page
- * allocations.
- *
- * NUMA support in SLOB is fairly simplistic, pushing most of the real
- * logic down to the page allocator, and simply doing the node accounting
- * on the upper levels. In the event that a node id is explicitly
- * provided, __alloc_pages_node() with the specified node id is used
- * instead. The common case (or when the node id isn't explicitly provided)
- * will default to the current node, as per numa_node_id().
- *
- * Node aware pages are still inserted in to the global freelist, and
- * these are scanned for by matching against the node id encoded in the
- * page flags. As a result, block allocations that can be satisfied from
- * the freelist will only be done so on pages residing on the same node,
- * in order to prevent random node placement.
- */
-
-#include <linux/kernel.h>
-#include <linux/slab.h>
-
-#include <linux/mm.h>
-#include <linux/swap.h> /* struct reclaim_state */
-#include <linux/cache.h>
-#include <linux/init.h>
-#include <linux/export.h>
-#include <linux/rcupdate.h>
-#include <linux/list.h>
-#include <linux/kmemleak.h>
-
-#include <trace/events/kmem.h>
-
-#include <linux/atomic.h>
-
-#include "slab.h"
-/*
- * slob_block has a field 'units', which indicates size of block if +ve,
- * or offset of next block if -ve (in SLOB_UNITs).
- *
- * Free blocks of size 1 unit simply contain the offset of the next block.
- * Those with larger size contain their size in the first SLOB_UNIT of
- * memory, and the offset of the next free block in the second SLOB_UNIT.
- */
-#if PAGE_SIZE <= (32767 * 2)
-typedef s16 slobidx_t;
-#else
-typedef s32 slobidx_t;
-#endif
-
-struct slob_block {
- slobidx_t units;
-};
-typedef struct slob_block slob_t;
-
-/*
- * All partially free slob pages go on these lists.
- */
-#define SLOB_BREAK1 256
-#define SLOB_BREAK2 1024
-static LIST_HEAD(free_slob_small);
-static LIST_HEAD(free_slob_medium);
-static LIST_HEAD(free_slob_large);
-
-/*
- * slob_page_free: true for pages on free_slob_pages list.
- */
-static inline int slob_page_free(struct slab *slab)
-{
- return PageSlobFree(slab_page(slab));
-}
-
-static void set_slob_page_free(struct slab *slab, struct list_head *list)
-{
- list_add(&slab->slab_list, list);
- __SetPageSlobFree(slab_page(slab));
-}
-
-static inline void clear_slob_page_free(struct slab *slab)
-{
- list_del(&slab->slab_list);
- __ClearPageSlobFree(slab_page(slab));
-}
-
-#define SLOB_UNIT sizeof(slob_t)
-#define SLOB_UNITS(size) DIV_ROUND_UP(size, SLOB_UNIT)
-
-/*
- * struct slob_rcu is inserted at the tail of allocated slob blocks, which
- * were created with a SLAB_TYPESAFE_BY_RCU slab. slob_rcu is used to free
- * the block using call_rcu.
- */
-struct slob_rcu {
- struct rcu_head head;
- int size;
-};
-
-/*
- * slob_lock protects all slob allocator structures.
- */
-static DEFINE_SPINLOCK(slob_lock);
-
-/*
- * Encode the given size and next info into a free slob block s.
- */
-static void set_slob(slob_t *s, slobidx_t size, slob_t *next)
-{
- slob_t *base = (slob_t *)((unsigned long)s & PAGE_MASK);
- slobidx_t offset = next - base;
-
- if (size > 1) {
- s[0].units = size;
- s[1].units = offset;
- } else
- s[0].units = -offset;
-}
-
-/*
- * Return the size of a slob block.
- */
-static slobidx_t slob_units(slob_t *s)
-{
- if (s->units > 0)
- return s->units;
- return 1;
-}
-
-/*
- * Return the next free slob block pointer after this one.
- */
-static slob_t *slob_next(slob_t *s)
-{
- slob_t *base = (slob_t *)((unsigned long)s & PAGE_MASK);
- slobidx_t next;
-
- if (s[0].units < 0)
- next = -s[0].units;
- else
- next = s[1].units;
- return base+next;
-}
-
-/*
- * Returns true if s is the last free block in its page.
- */
-static int slob_last(slob_t *s)
-{
- return !((unsigned long)slob_next(s) & ~PAGE_MASK);
-}
-
-static void *slob_new_pages(gfp_t gfp, int order, int node)
-{
- struct page *page;
-
-#ifdef CONFIG_NUMA
- if (node != NUMA_NO_NODE)
- page = __alloc_pages_node(node, gfp, order);
- else
-#endif
- page = alloc_pages(gfp, order);
-
- if (!page)
- return NULL;
-
- mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B,
- PAGE_SIZE << order);
- return page_address(page);
-}
-
-static void slob_free_pages(void *b, int order)
-{
- struct page *sp = virt_to_page(b);
-
- if (current->reclaim_state)
- current->reclaim_state->reclaimed_slab += 1 << order;
-
- mod_node_page_state(page_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
- -(PAGE_SIZE << order));
- __free_pages(sp, order);
-}
-
-/*
- * slob_page_alloc() - Allocate a slob block within a given slob_page sp.
- * @sp: Page to look in.
- * @size: Size of the allocation.
- * @align: Allocation alignment.
- * @align_offset: Offset in the allocated block that will be aligned.
- * @page_removed_from_list: Return parameter.
- *
- * Tries to find a chunk of memory at least @size bytes big within @page.
- *
- * Return: Pointer to memory if allocated, %NULL otherwise. If the
- * allocation fills up @page then the page is removed from the
- * freelist, in this case @page_removed_from_list will be set to
- * true (set to false otherwise).
- */
-static void *slob_page_alloc(struct slab *sp, size_t size, int align,
- int align_offset, bool *page_removed_from_list)
-{
- slob_t *prev, *cur, *aligned = NULL;
- int delta = 0, units = SLOB_UNITS(size);
-
- *page_removed_from_list = false;
- for (prev = NULL, cur = sp->freelist; ; prev = cur, cur = slob_next(cur)) {
- slobidx_t avail = slob_units(cur);
-
- /*
- * 'aligned' will hold the address of the slob block so that the
- * address 'aligned'+'align_offset' is aligned according to the
- * 'align' parameter. This is for kmalloc() which prepends the
- * allocated block with its size, so that the block itself is
- * aligned when needed.
- */
- if (align) {
- aligned = (slob_t *)
- (ALIGN((unsigned long)cur + align_offset, align)
- - align_offset);
- delta = aligned - cur;
- }
- if (avail >= units + delta) { /* room enough? */
- slob_t *next;
-
- if (delta) { /* need to fragment head to align? */
- next = slob_next(cur);
- set_slob(aligned, avail - delta, next);
- set_slob(cur, delta, aligned);
- prev = cur;
- cur = aligned;
- avail = slob_units(cur);
- }
-
- next = slob_next(cur);
- if (avail == units) { /* exact fit? unlink. */
- if (prev)
- set_slob(prev, slob_units(prev), next);
- else
- sp->freelist = next;
- } else { /* fragment */
- if (prev)
- set_slob(prev, slob_units(prev), cur + units);
- else
- sp->freelist = cur + units;
- set_slob(cur + units, avail - units, next);
- }
-
- sp->units -= units;
- if (!sp->units) {
- clear_slob_page_free(sp);
- *page_removed_from_list = true;
- }
- return cur;
- }
- if (slob_last(cur))
- return NULL;
- }
-}
-
-/*
- * slob_alloc: entry point into the slob allocator.
- */
-static void *slob_alloc(size_t size, gfp_t gfp, int align, int node,
- int align_offset)
-{
- struct folio *folio;
- struct slab *sp;
- struct list_head *slob_list;
- slob_t *b = NULL;
- unsigned long flags;
- bool _unused;
-
- if (size < SLOB_BREAK1)
- slob_list = &free_slob_small;
- else if (size < SLOB_BREAK2)
- slob_list = &free_slob_medium;
- else
- slob_list = &free_slob_large;
-
- spin_lock_irqsave(&slob_lock, flags);
- /* Iterate through each partially free page, try to find room */
- list_for_each_entry(sp, slob_list, slab_list) {
- bool page_removed_from_list = false;
-#ifdef CONFIG_NUMA
- /*
- * If there's a node specification, search for a partial
- * page with a matching node id in the freelist.
- */
- if (node != NUMA_NO_NODE && slab_nid(sp) != node)
- continue;
-#endif
- /* Enough room on this page? */
- if (sp->units < SLOB_UNITS(size))
- continue;
-
- b = slob_page_alloc(sp, size, align, align_offset, &page_removed_from_list);
- if (!b)
- continue;
-
- /*
- * If slob_page_alloc() removed sp from the list then we
- * cannot call list functions on sp. If so allocation
- * did not fragment the page anyway so optimisation is
- * unnecessary.
- */
- if (!page_removed_from_list) {
- /*
- * Improve fragment distribution and reduce our average
- * search time by starting our next search here. (see
- * Knuth vol 1, sec 2.5, pg 449)
- */
- if (!list_is_first(&sp->slab_list, slob_list))
- list_rotate_to_front(&sp->slab_list, slob_list);
- }
- break;
- }
- spin_unlock_irqrestore(&slob_lock, flags);
-
- /* Not enough space: must allocate a new page */
- if (!b) {
- b = slob_new_pages(gfp & ~__GFP_ZERO, 0, node);
- if (!b)
- return NULL;
- folio = virt_to_folio(b);
- __folio_set_slab(folio);
- sp = folio_slab(folio);
-
- spin_lock_irqsave(&slob_lock, flags);
- sp->units = SLOB_UNITS(PAGE_SIZE);
- sp->freelist = b;
- INIT_LIST_HEAD(&sp->slab_list);
- set_slob(b, SLOB_UNITS(PAGE_SIZE), b + SLOB_UNITS(PAGE_SIZE));
- set_slob_page_free(sp, slob_list);
- b = slob_page_alloc(sp, size, align, align_offset, &_unused);
- BUG_ON(!b);
- spin_unlock_irqrestore(&slob_lock, flags);
- }
- if (unlikely(gfp & __GFP_ZERO))
- memset(b, 0, size);
- return b;
-}
-
-/*
- * slob_free: entry point into the slob allocator.
- */
-static void slob_free(void *block, int size)
-{
- struct slab *sp;
- slob_t *prev, *next, *b = (slob_t *)block;
- slobidx_t units;
- unsigned long flags;
- struct list_head *slob_list;
-
- if (unlikely(ZERO_OR_NULL_PTR(block)))
- return;
- BUG_ON(!size);
-
- sp = virt_to_slab(block);
- units = SLOB_UNITS(size);
-
- spin_lock_irqsave(&slob_lock, flags);
-
- if (sp->units + units == SLOB_UNITS(PAGE_SIZE)) {
- /* Go directly to page allocator. Do not pass slob allocator */
- if (slob_page_free(sp))
- clear_slob_page_free(sp);
- spin_unlock_irqrestore(&slob_lock, flags);
- __folio_clear_slab(slab_folio(sp));
- slob_free_pages(b, 0);
- return;
- }
-
- if (!slob_page_free(sp)) {
- /* This slob page is about to become partially free. Easy! */
- sp->units = units;
- sp->freelist = b;
- set_slob(b, units,
- (void *)((unsigned long)(b +
- SLOB_UNITS(PAGE_SIZE)) & PAGE_MASK));
- if (size < SLOB_BREAK1)
- slob_list = &free_slob_small;
- else if (size < SLOB_BREAK2)
- slob_list = &free_slob_medium;
- else
- slob_list = &free_slob_large;
- set_slob_page_free(sp, slob_list);
- goto out;
- }
-
- /*
- * Otherwise the page is already partially free, so find reinsertion
- * point.
- */
- sp->units += units;
-
- if (b < (slob_t *)sp->freelist) {
- if (b + units == sp->freelist) {
- units += slob_units(sp->freelist);
- sp->freelist = slob_next(sp->freelist);
- }
- set_slob(b, units, sp->freelist);
- sp->freelist = b;
- } else {
- prev = sp->freelist;
- next = slob_next(prev);
- while (b > next) {
- prev = next;
- next = slob_next(prev);
- }
-
- if (!slob_last(prev) && b + units == next) {
- units += slob_units(next);
- set_slob(b, units, slob_next(next));
- } else
- set_slob(b, units, next);
-
- if (prev + slob_units(prev) == b) {
- units = slob_units(b) + slob_units(prev);
- set_slob(prev, units, slob_next(b));
- } else
- set_slob(prev, slob_units(prev), b);
- }
-out:
- spin_unlock_irqrestore(&slob_lock, flags);
-}
-
-#ifdef CONFIG_PRINTK
-void __kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
-{
- kpp->kp_ptr = object;
- kpp->kp_slab = slab;
-}
-#endif
-
-/*
- * End of slob allocator proper. Begin kmem_cache_alloc and kmalloc frontend.
- */
-
-static __always_inline void *
-__do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
-{
- unsigned int *m;
- unsigned int minalign;
- void *ret;
-
- minalign = max_t(unsigned int, ARCH_KMALLOC_MINALIGN,
- arch_slab_minalign());
- gfp &= gfp_allowed_mask;
-
- might_alloc(gfp);
-
- if (size < PAGE_SIZE - minalign) {
- int align = minalign;
-
- /*
- * For power of two sizes, guarantee natural alignment for
- * kmalloc()'d objects.
- */
- if (is_power_of_2(size))
- align = max_t(unsigned int, minalign, size);
-
- if (!size)
- return ZERO_SIZE_PTR;
-
- m = slob_alloc(size + minalign, gfp, align, node, minalign);
-
- if (!m)
- return NULL;
- *m = size;
- ret = (void *)m + minalign;
-
- trace_kmalloc(caller, ret, size, size + minalign, gfp, node);
- } else {
- unsigned int order = get_order(size);
-
- if (likely(order))
- gfp |= __GFP_COMP;
- ret = slob_new_pages(gfp, order, node);
-
- trace_kmalloc(caller, ret, size, PAGE_SIZE << order, gfp, node);
- }
-
- kmemleak_alloc(ret, size, 1, gfp);
- return ret;
-}
-
-void *__kmalloc(size_t size, gfp_t gfp)
-{
- return __do_kmalloc_node(size, gfp, NUMA_NO_NODE, _RET_IP_);
-}
-EXPORT_SYMBOL(__kmalloc);
-
-void *__kmalloc_node_track_caller(size_t size, gfp_t gfp,
- int node, unsigned long caller)
-{
- return __do_kmalloc_node(size, gfp, node, caller);
-}
-EXPORT_SYMBOL(__kmalloc_node_track_caller);
-
-void kfree(const void *block)
-{
- struct folio *sp;
-
- trace_kfree(_RET_IP_, block);
-
- if (unlikely(ZERO_OR_NULL_PTR(block)))
- return;
- kmemleak_free(block);
-
- sp = virt_to_folio(block);
- if (folio_test_slab(sp)) {
- unsigned int align = max_t(unsigned int,
- ARCH_KMALLOC_MINALIGN,
- arch_slab_minalign());
- unsigned int *m = (unsigned int *)(block - align);
-
- slob_free(m, *m + align);
- } else {
- unsigned int order = folio_order(sp);
-
- mod_node_page_state(folio_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
- -(PAGE_SIZE << order));
- __free_pages(folio_page(sp, 0), order);
-
- }
-}
-EXPORT_SYMBOL(kfree);
-
-size_t kmalloc_size_roundup(size_t size)
-{
- /* Short-circuit the 0 size case. */
- if (unlikely(size == 0))
- return 0;
- /* Short-circuit saturated "too-large" case. */
- if (unlikely(size == SIZE_MAX))
- return SIZE_MAX;
-
- return ALIGN(size, ARCH_KMALLOC_MINALIGN);
-}
-
-EXPORT_SYMBOL(kmalloc_size_roundup);
-
-/* can't use ksize for kmem_cache_alloc memory, only kmalloc */
-size_t __ksize(const void *block)
-{
- struct folio *folio;
- unsigned int align;
- unsigned int *m;
-
- BUG_ON(!block);
- if (unlikely(block == ZERO_SIZE_PTR))
- return 0;
-
- folio = virt_to_folio(block);
- if (unlikely(!folio_test_slab(folio)))
- return folio_size(folio);
-
- align = max_t(unsigned int, ARCH_KMALLOC_MINALIGN,
- arch_slab_minalign());
- m = (unsigned int *)(block - align);
- return SLOB_UNITS(*m) * SLOB_UNIT;
-}
-
-int __kmem_cache_create(struct kmem_cache *c, slab_flags_t flags)
-{
- if (flags & SLAB_TYPESAFE_BY_RCU) {
- /* leave room for rcu footer at the end of object */
- c->size += sizeof(struct slob_rcu);
- }
-
- /* Actual size allocated */
- c->size = SLOB_UNITS(c->size) * SLOB_UNIT;
- c->flags = flags;
- return 0;
-}
-
-static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
-{
- void *b;
-
- flags &= gfp_allowed_mask;
-
- might_alloc(flags);
-
- if (c->size < PAGE_SIZE) {
- b = slob_alloc(c->size, flags, c->align, node, 0);
- trace_kmem_cache_alloc(_RET_IP_, b, c, flags, node);
- } else {
- b = slob_new_pages(flags, get_order(c->size), node);
- trace_kmem_cache_alloc(_RET_IP_, b, c, flags, node);
- }
-
- if (b && c->ctor) {
- WARN_ON_ONCE(flags & __GFP_ZERO);
- c->ctor(b);
- }
-
- kmemleak_alloc_recursive(b, c->size, 1, c->flags, flags);
- return b;
-}
-
-void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
-{
- return slob_alloc_node(cachep, flags, NUMA_NO_NODE);
-}
-EXPORT_SYMBOL(kmem_cache_alloc);
-
-
-void *kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags)
-{
- return slob_alloc_node(cachep, flags, NUMA_NO_NODE);
-}
-EXPORT_SYMBOL(kmem_cache_alloc_lru);
-
-void *__kmalloc_node(size_t size, gfp_t gfp, int node)
-{
- return __do_kmalloc_node(size, gfp, node, _RET_IP_);
-}
-EXPORT_SYMBOL(__kmalloc_node);
-
-void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t gfp, int node)
-{
- return slob_alloc_node(cachep, gfp, node);
-}
-EXPORT_SYMBOL(kmem_cache_alloc_node);
-
-static void __kmem_cache_free(void *b, int size)
-{
- if (size < PAGE_SIZE)
- slob_free(b, size);
- else
- slob_free_pages(b, get_order(size));
-}
-
-static void kmem_rcu_free(struct rcu_head *head)
-{
- struct slob_rcu *slob_rcu = (struct slob_rcu *)head;
- void *b = (void *)slob_rcu - (slob_rcu->size - sizeof(struct slob_rcu));
-
- __kmem_cache_free(b, slob_rcu->size);
-}
-
-void kmem_cache_free(struct kmem_cache *c, void *b)
-{
- kmemleak_free_recursive(b, c->flags);
- trace_kmem_cache_free(_RET_IP_, b, c);
- if (unlikely(c->flags & SLAB_TYPESAFE_BY_RCU)) {
- struct slob_rcu *slob_rcu;
- slob_rcu = b + (c->size - sizeof(struct slob_rcu));
- slob_rcu->size = c->size;
- call_rcu(&slob_rcu->head, kmem_rcu_free);
- } else {
- __kmem_cache_free(b, c->size);
- }
-}
-EXPORT_SYMBOL(kmem_cache_free);
-
-void kmem_cache_free_bulk(struct kmem_cache *s, size_t nr, void **p)
-{
- size_t i;
-
- for (i = 0; i < nr; i++) {
- if (s)
- kmem_cache_free(s, p[i]);
- else
- kfree(p[i]);
- }
-}
-EXPORT_SYMBOL(kmem_cache_free_bulk);
-
-int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t nr,
- void **p)
-{
- size_t i;
-
- for (i = 0; i < nr; i++) {
- void *x = p[i] = kmem_cache_alloc(s, flags);
-
- if (!x) {
- kmem_cache_free_bulk(s, i, p);
- return 0;
- }
- }
- return i;
-}
-EXPORT_SYMBOL(kmem_cache_alloc_bulk);
-
-int __kmem_cache_shutdown(struct kmem_cache *c)
-{
- /* No way to check for remaining objects */
- return 0;
-}
-
-void __kmem_cache_release(struct kmem_cache *c)
-{
-}
-
-int __kmem_cache_shrink(struct kmem_cache *d)
-{
- return 0;
-}
-
-static struct kmem_cache kmem_cache_boot = {
- .name = "kmem_cache",
- .size = sizeof(struct kmem_cache),
- .flags = SLAB_PANIC,
- .align = ARCH_KMALLOC_MINALIGN,
-};
-
-void __init kmem_cache_init(void)
-{
- kmem_cache = &kmem_cache_boot;
- slab_state = UP;
-}
-
-void __init kmem_cache_init_late(void)
-{
- slab_state = FULL;
-}
--
2.39.2


2023-03-10 10:32:43

by Vlastimil Babka

Subject: [PATCH 7/7] mm/slab: document kfree() as allowed for kmem_cache_alloc() objects

This will make it easier to free objects in situations when they can
come from either kmalloc() or kmem_cache_alloc(), and also allow
kfree_rcu() for freeing objects from kmem_cache_alloc().

For the SLAB and SLUB allocators this was always possible so with SLOB
gone, we can document it as supported.

Signed-off-by: Vlastimil Babka <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Neeraj Upadhyay <[email protected]>
Cc: Josh Triplett <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Cc: Lai Jiangshan <[email protected]>
Cc: Joel Fernandes <[email protected]>
---
Documentation/core-api/memory-allocation.rst | 15 +++++++++++----
include/linux/rcupdate.h | 6 ++++--
mm/slab_common.c | 5 +----
3 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/Documentation/core-api/memory-allocation.rst b/Documentation/core-api/memory-allocation.rst
index 5954ddf6ee13..f9e8d352ed67 100644
--- a/Documentation/core-api/memory-allocation.rst
+++ b/Documentation/core-api/memory-allocation.rst
@@ -170,7 +170,14 @@ should be used if a part of the cache might be copied to the userspace.
After the cache is created kmem_cache_alloc() and its convenience
wrappers can allocate memory from that cache.

-When the allocated memory is no longer needed it must be freed. You can
-use kvfree() for the memory allocated with `kmalloc`, `vmalloc` and
-`kvmalloc`. The slab caches should be freed with kmem_cache_free(). And
-don't forget to destroy the cache with kmem_cache_destroy().
+When the allocated memory is no longer needed it must be freed. Objects
+allocated by `kmalloc` can be freed by `kfree` or `kvfree`.
+Objects allocated by `kmem_cache_alloc` can be freed with `kmem_cache_free`
+or also by `kfree` or `kvfree`, which can be more convenient as it does
+not require the kmem_cache pointed.
+The rules for _bulk and _rcu flavors of freeing functions are analogical.
+
+Memory allocated by `vmalloc` can be freed with `vfree` or `kvfree`.
+Memory allocated by `kvmalloc` can be freed with `kvfree`.
+Caches created by `kmem_cache_create` should be freed with
+`kmem_cache_destroy` only after freeing all the allocated objects first.
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 094321c17e48..dcd2cf1e8326 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -976,8 +976,10 @@ static inline notrace void rcu_read_unlock_sched_notrace(void)
* either fall back to use of call_rcu() or rearrange the structure to
* position the rcu_head structure into the first 4096 bytes.
*
- * Note that the allowable offset might decrease in the future, for example,
- * to allow something like kmem_cache_free_rcu().
+ * The object to be freed can be allocated either by kmalloc() or
+ * kmem_cache_alloc().
+ *
+ * Note that the allowable offset might decrease in the future.
*
* The BUILD_BUG_ON check must not involve any function calls, hence the
* checks are done in macros here.
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 1522693295f5..607249785c07 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -989,12 +989,9 @@ EXPORT_SYMBOL(__kmalloc_node_track_caller);

/**
* kfree - free previously allocated memory
- * @object: pointer returned by kmalloc.
+ * @object: pointer returned by kmalloc() or kmem_cache_alloc()
*
* If @object is NULL, no operation is performed.
- *
- * Don't free memory not originally allocated by kmalloc()
- * or you will run into trouble.
*/
void kfree(const void *object)
{
--
2.39.2


2023-03-10 10:32:48

by Vlastimil Babka

Subject: [PATCH 3/7] mm, page_flags: remove PG_slob_free

With SLOB removed we no longer need the PG_slob_free alias for
PG_private. Also update tools/mm/page-types.

Signed-off-by: Vlastimil Babka <[email protected]>
---
include/linux/page-flags.h | 4 ----
tools/mm/page-types.c | 6 +-----
2 files changed, 1 insertion(+), 9 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index a7e3a3405520..2bdc41cb0594 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -174,9 +174,6 @@ enum pageflags {
/* Remapped by swiotlb-xen. */
PG_xen_remapped = PG_owner_priv_1,

- /* SLOB */
- PG_slob_free = PG_private,
-
#ifdef CONFIG_MEMORY_FAILURE
/*
* Compound pages. Stored in first tail page's flags.
@@ -483,7 +480,6 @@ PAGEFLAG(Active, active, PF_HEAD) __CLEARPAGEFLAG(Active, active, PF_HEAD)
PAGEFLAG(Workingset, workingset, PF_HEAD)
TESTCLEARFLAG(Workingset, workingset, PF_HEAD)
__PAGEFLAG(Slab, slab, PF_NO_TAIL)
-__PAGEFLAG(SlobFree, slob_free, PF_NO_TAIL)
PAGEFLAG(Checked, checked, PF_NO_COMPOUND) /* Used by some filesystems */

/* Xen */
diff --git a/tools/mm/page-types.c b/tools/mm/page-types.c
index 381dcc00cb62..8d5595b6c59f 100644
--- a/tools/mm/page-types.c
+++ b/tools/mm/page-types.c
@@ -85,7 +85,6 @@
*/
#define KPF_ANON_EXCLUSIVE 47
#define KPF_READAHEAD 48
-#define KPF_SLOB_FREE 49
#define KPF_SLUB_FROZEN 50
#define KPF_SLUB_DEBUG 51
#define KPF_FILE 61
@@ -141,7 +140,6 @@ static const char * const page_flag_names[] = {

[KPF_ANON_EXCLUSIVE] = "d:anon_exclusive",
[KPF_READAHEAD] = "I:readahead",
- [KPF_SLOB_FREE] = "P:slob_free",
[KPF_SLUB_FROZEN] = "A:slub_frozen",
[KPF_SLUB_DEBUG] = "E:slub_debug",

@@ -478,10 +476,8 @@ static uint64_t expand_overloaded_flags(uint64_t flags, uint64_t pme)
if ((flags & BIT(ANON)) && (flags & BIT(MAPPEDTODISK)))
flags ^= BIT(MAPPEDTODISK) | BIT(ANON_EXCLUSIVE);

- /* SLOB/SLUB overload several page flags */
+ /* SLUB overloads several page flags */
if (flags & BIT(SLAB)) {
- if (flags & BIT(PRIVATE))
- flags ^= BIT(PRIVATE) | BIT(SLOB_FREE);
if (flags & BIT(ACTIVE))
flags ^= BIT(ACTIVE) | BIT(SLUB_FROZEN);
if (flags & BIT(ERROR))
--
2.39.2


2023-03-10 10:32:51

by Vlastimil Babka

Subject: [PATCH 5/7] mm/slab: remove CONFIG_SLOB code from slab common code

CONFIG_SLOB has been removed from Kconfig. Remove code and #ifdef's
specific to SLOB in the slab headers and common code.

Signed-off-by: Vlastimil Babka <[email protected]>
---
include/linux/slab.h | 39 ----------------------------
mm/slab.h | 61 --------------------------------------------
mm/slab_common.c | 2 --
3 files changed, 102 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 45af70315a94..7f645a4c1298 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -298,19 +298,6 @@ static inline unsigned int arch_slab_minalign(void)
#endif
#endif

-#ifdef CONFIG_SLOB
-/*
- * SLOB passes all requests larger than one page to the page allocator.
- * No kmalloc array is necessary since objects of different sizes can
- * be allocated from the same page.
- */
-#define KMALLOC_SHIFT_HIGH PAGE_SHIFT
-#define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT - 1)
-#ifndef KMALLOC_SHIFT_LOW
-#define KMALLOC_SHIFT_LOW 3
-#endif
-#endif
-
/* Maximum allocatable size */
#define KMALLOC_MAX_SIZE (1UL << KMALLOC_SHIFT_MAX)
/* Maximum size for which we actually use a slab cache */
@@ -366,7 +353,6 @@ enum kmalloc_cache_type {
NR_KMALLOC_TYPES
};

-#ifndef CONFIG_SLOB
extern struct kmem_cache *
kmalloc_caches[NR_KMALLOC_TYPES][KMALLOC_SHIFT_HIGH + 1];

@@ -458,7 +444,6 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
}
static_assert(PAGE_SHIFT <= 20);
#define kmalloc_index(s) __kmalloc_index(s, true)
-#endif /* !CONFIG_SLOB */

void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __alloc_size(1);

@@ -487,10 +472,6 @@ void kmem_cache_free(struct kmem_cache *s, void *objp);
void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p);
int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, void **p);

-/*
- * Caller must not use kfree_bulk() on memory not originally allocated
- * by kmalloc(), because the SLOB allocator cannot handle this.
- */
static __always_inline void kfree_bulk(size_t size, void **p)
{
kmem_cache_free_bulk(NULL, size, p);
@@ -567,7 +548,6 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node) __assume_page_align
* Try really hard to succeed the allocation but fail
* eventually.
*/
-#ifndef CONFIG_SLOB
static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
{
if (__builtin_constant_p(size) && size) {
@@ -583,17 +563,7 @@ static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
}
return __kmalloc(size, flags);
}
-#else
-static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
-{
- if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
- return kmalloc_large(size, flags);
-
- return __kmalloc(size, flags);
-}
-#endif

-#ifndef CONFIG_SLOB
static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
{
if (__builtin_constant_p(size) && size) {
@@ -609,15 +579,6 @@ static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t fla
}
return __kmalloc_node(size, flags, node);
}
-#else
-static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
-{
- if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
- return kmalloc_large_node(size, flags, node);
-
- return __kmalloc_node(size, flags, node);
-}
-#endif

/**
* kmalloc_array - allocate memory for an array.
diff --git a/mm/slab.h b/mm/slab.h
index 43966aa5fadf..399966b3ce52 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -51,14 +51,6 @@ struct slab {
};
unsigned int __unused;

-#elif defined(CONFIG_SLOB)
-
- struct list_head slab_list;
- void *__unused_1;
- void *freelist; /* first free block */
- long units;
- unsigned int __unused_2;
-
#else
#error "Unexpected slab allocator configured"
#endif
@@ -72,11 +64,7 @@ struct slab {
#define SLAB_MATCH(pg, sl) \
static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))
SLAB_MATCH(flags, __page_flags);
-#ifndef CONFIG_SLOB
SLAB_MATCH(compound_head, slab_cache); /* Ensure bit 0 is clear */
-#else
-SLAB_MATCH(compound_head, slab_list); /* Ensure bit 0 is clear */
-#endif
SLAB_MATCH(_refcount, __page_refcount);
#ifdef CONFIG_MEMCG
SLAB_MATCH(memcg_data, memcg_data);
@@ -200,31 +188,6 @@ static inline size_t slab_size(const struct slab *slab)
return PAGE_SIZE << slab_order(slab);
}

-#ifdef CONFIG_SLOB
-/*
- * Common fields provided in kmem_cache by all slab allocators
- * This struct is either used directly by the allocator (SLOB)
- * or the allocator must include definitions for all fields
- * provided in kmem_cache_common in their definition of kmem_cache.
- *
- * Once we can do anonymous structs (C11 standard) we could put a
- * anonymous struct definition in these allocators so that the
- * separate allocations in the kmem_cache structure of SLAB and
- * SLUB is no longer needed.
- */
-struct kmem_cache {
- unsigned int object_size;/* The original size of the object */
- unsigned int size; /* The aligned/padded/added on size */
- unsigned int align; /* Alignment as calculated */
- slab_flags_t flags; /* Active flags on the slab */
- const char *name; /* Slab name for sysfs */
- int refcount; /* Use counter */
- void (*ctor)(void *); /* Called on object slot creation */
- struct list_head list; /* List of all slab caches on the system */
-};
-
-#endif /* CONFIG_SLOB */
-
#ifdef CONFIG_SLAB
#include <linux/slab_def.h>
#endif
@@ -274,7 +237,6 @@ extern const struct kmalloc_info_struct {
unsigned int size;
} kmalloc_info[];

-#ifndef CONFIG_SLOB
/* Kmalloc array related functions */
void setup_kmalloc_cache_index_table(void);
void create_kmalloc_caches(slab_flags_t);
@@ -286,7 +248,6 @@ void *__kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags,
int node, size_t orig_size,
unsigned long caller);
void __kmem_cache_free(struct kmem_cache *s, void *x, unsigned long caller);
-#endif

gfp_t kmalloc_fix_flags(gfp_t flags);

@@ -303,33 +264,16 @@ extern void create_boot_cache(struct kmem_cache *, const char *name,
int slab_unmergeable(struct kmem_cache *s);
struct kmem_cache *find_mergeable(unsigned size, unsigned align,
slab_flags_t flags, const char *name, void (*ctor)(void *));
-#ifndef CONFIG_SLOB
struct kmem_cache *
__kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
slab_flags_t flags, void (*ctor)(void *));

slab_flags_t kmem_cache_flags(unsigned int object_size,
slab_flags_t flags, const char *name);
-#else
-static inline struct kmem_cache *
-__kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
- slab_flags_t flags, void (*ctor)(void *))
-{ return NULL; }
-
-static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
- slab_flags_t flags, const char *name)
-{
- return flags;
-}
-#endif

static inline bool is_kmalloc_cache(struct kmem_cache *s)
{
-#ifndef CONFIG_SLOB
return (s->flags & SLAB_KMALLOC);
-#else
- return false;
-#endif
}

/* Legal flag mask for kmem_cache_create(), for various configurations */
@@ -634,7 +578,6 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
}
#endif /* CONFIG_MEMCG_KMEM */

-#ifndef CONFIG_SLOB
static inline struct kmem_cache *virt_to_cache(const void *obj)
{
struct slab *slab;
@@ -684,8 +627,6 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)

void free_large_kmalloc(struct folio *folio, void *object);

-#endif /* CONFIG_SLOB */
-
size_t __ksize(const void *objp);

static inline size_t slab_ksize(const struct kmem_cache *s)
@@ -777,7 +718,6 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
memcg_slab_post_alloc_hook(s, objcg, flags, size, p);
}

-#ifndef CONFIG_SLOB
/*
* The slab lists for all objects.
*/
@@ -824,7 +764,6 @@ static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
for (__node = 0; __node < nr_node_ids; __node++) \
if ((__n = get_node(__s, __node)))

-#endif

#if defined(CONFIG_SLAB) || defined(CONFIG_SLUB_DEBUG)
void dump_unreclaimable_slab(void);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index bf4e777cfe90..1522693295f5 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -625,7 +625,6 @@ void kmem_dump_obj(void *object)
EXPORT_SYMBOL_GPL(kmem_dump_obj);
#endif

-#ifndef CONFIG_SLOB
/* Create a cache during boot when no slab services are available yet */
void __init create_boot_cache(struct kmem_cache *s, const char *name,
unsigned int size, slab_flags_t flags,
@@ -1079,7 +1078,6 @@ void *kmalloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
return ret;
}
EXPORT_SYMBOL(kmalloc_node_trace);
-#endif /* !CONFIG_SLOB */

gfp_t kmalloc_fix_flags(gfp_t flags)
{
--
2.39.2


2023-03-10 10:32:57

by Vlastimil Babka

Subject: [PATCH 4/7] mm, pagemap: remove SLOB and SLQB from comments and documentation

SLOB has been removed and SLQB never merged, so remove their mentions
from comments and documentation of pagemap.

Signed-off-by: Vlastimil Babka <[email protected]>
---
Documentation/admin-guide/mm/pagemap.rst | 6 +++---
fs/proc/page.c | 5 ++---
2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/Documentation/admin-guide/mm/pagemap.rst b/Documentation/admin-guide/mm/pagemap.rst
index b5f970dc91e7..bb4aa897a773 100644
--- a/Documentation/admin-guide/mm/pagemap.rst
+++ b/Documentation/admin-guide/mm/pagemap.rst
@@ -91,9 +91,9 @@ Short descriptions to the page flags
The page is being locked for exclusive access, e.g. by undergoing read/write
IO.
7 - SLAB
- The page is managed by the SLAB/SLOB/SLUB/SLQB kernel memory allocator.
- When compound page is used, SLUB/SLQB will only set this flag on the head
- page; SLOB will not flag it at all.
+ The page is managed by the SLAB/SLUB kernel memory allocator.
+ When compound page is used, either will only set this flag on the head
+ page.
10 - BUDDY
A free memory block managed by the buddy system allocator.
The buddy system organizes free memory in blocks of various orders.
diff --git a/fs/proc/page.c b/fs/proc/page.c
index 6249c347809a..1356aeffd8dc 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -125,7 +125,7 @@ u64 stable_page_flags(struct page *page)
/*
* pseudo flags for the well known (anonymous) memory mapped pages
*
- * Note that page->_mapcount is overloaded in SLOB/SLUB/SLQB, so the
+ * Note that page->_mapcount is overloaded in SLAB/SLUB, so the
* simple test in page_mapped() is not enough.
*/
if (!PageSlab(page) && page_mapped(page))
@@ -166,8 +166,7 @@ u64 stable_page_flags(struct page *page)

/*
* Caveats on high order pages: page->_refcount will only be set
- * -1 on the head page; SLUB/SLQB do the same for PG_slab;
- * SLOB won't set PG_slab at all on compound pages.
+ * -1 on the head page; SLAB/SLUB do the same for PG_slab;
*/
if (PageBuddy(page))
u |= 1 << KPF_BUDDY;
--
2.39.2


2023-03-11 01:00:51

by Jakub Kicinski

Subject: Re: [PATCH 0/7] remove SLOB and allow kfree() with kmem_cache_alloc()

On Fri, 10 Mar 2023 11:32:02 +0100 Vlastimil Babka wrote:
> Otherwise it's straightforward. Patch 2 is a cleanup in net area, that I
> can either handle in slab tree or submit in net after SLOB is removed.

Latter would be better, if you don't mind. skbuff.c is relatively hot, so
there's a good chance we'll create a conflict.

2023-03-12 09:51:42

by Mike Rapoport

Subject: Re: [PATCH 0/7] remove SLOB and allow kfree() with kmem_cache_alloc()

Hi Vlastimil,

On Fri, Mar 10, 2023 at 11:32:02AM +0100, Vlastimil Babka wrote:
> Also in git:
> https://git.kernel.org/vbabka/h/slab-remove-slob-v1r1
>
> The SLOB allocator was deprecated in 6.2 so I think we can start
> exposing the complete removal in for-next and aim at 6.4 if there are no
> complaints.
>
> Besides code cleanup, the main immediate benefit will be allowing
> kfree() family of function to work on kmem_cache_alloc() objects (Patch
> 7), which was incompatible with SLOB.
>
> This includes kfree_rcu() so I've updated the comment there to remove
> the mention of potential future addition of kmem_cache_free_rcu() as
> there should be no need for that now.
>
> Otherwise it's straightforward. Patch 2 is a cleanup in net area, that I
> can either handle in slab tree or submit in net after SLOB is removed.
> Another cleanup in tomoyo is already in the tomoyo tree as that didn't
> need to wait until SLOB removal.
>
> Vlastimil Babka (7):
> mm/slob: remove CONFIG_SLOB
> net: skbuff: remove SLOB-specific ifdefs
> mm, page_flags: remove PG_slob_free
> mm, pagemap: remove SLOB and SLQB from comments and documentation
> mm/slab: remove CONFIG_SLOB code from slab common code
> mm/slob: remove slob.c
> mm/slab: document kfree() as allowed for kmem_cache_alloc() objects
>
> Documentation/admin-guide/mm/pagemap.rst | 6 +-
> Documentation/core-api/memory-allocation.rst | 15 +-
> fs/proc/page.c | 5 +-
> include/linux/page-flags.h | 4 -
> include/linux/rcupdate.h | 6 +-
> include/linux/slab.h | 39 -
> init/Kconfig | 2 +-
> kernel/configs/tiny.config | 1 -
> mm/Kconfig | 22 -
> mm/Makefile | 1 -
> mm/slab.h | 61 --
> mm/slab_common.c | 7 +-
> mm/slob.c | 757 -------------------
> net/core/skbuff.c | 16 -
> tools/mm/page-types.c | 6 +-
> 15 files changed, 23 insertions(+), 925 deletions(-)
> delete mode 100644 mm/slob.c

git grep -in slob still gives a couple of matches. I've dropped the
irrelevant ones and it left me with these:

CREDITS:14:D: SLOB slab allocator
kernel/trace/ring_buffer.c:358: * Also stolen from mm/slob.c. Thanks to Mathieu Desnoyers for pointing
mm/Kconfig:251: SLOB allocator and is not recommended for systems with more than
mm/Makefile:25:KCOV_INSTRUMENT_slob.o := n

Except the comment in kernel/trace/ring_buffer.c all are trivial.

As for the comment in ring_buffer.c, it looks completely irrelevant at this
point.

@Steve?

> --
> 2.39.2
>

--
Sincerely yours,
Mike.

2023-03-12 10:01:45

by Mike Rapoport

Subject: Re: [PATCH 7/7] mm/slab: document kfree() as allowed for kmem_cache_alloc() objects

On Fri, Mar 10, 2023 at 11:32:09AM +0100, Vlastimil Babka wrote:
> This will make it easier to free objects in situations when they can
> come from either kmalloc() or kmem_cache_alloc(), and also allow
> kfree_rcu() for freeing objects from kmem_cache_alloc().
>
> For the SLAB and SLUB allocators this was always possible so with SLOB
> gone, we can document it as supported.
>
> Signed-off-by: Vlastimil Babka <[email protected]>
> Cc: Mike Rapoport <[email protected]>
> Cc: Jonathan Corbet <[email protected]>
> Cc: "Paul E. McKenney" <[email protected]>
> Cc: Frederic Weisbecker <[email protected]>
> Cc: Neeraj Upadhyay <[email protected]>
> Cc: Josh Triplett <[email protected]>
> Cc: Steven Rostedt <[email protected]>
> Cc: Mathieu Desnoyers <[email protected]>
> Cc: Lai Jiangshan <[email protected]>
> Cc: Joel Fernandes <[email protected]>
> ---
> Documentation/core-api/memory-allocation.rst | 15 +++++++++++----
> include/linux/rcupdate.h | 6 ++++--
> mm/slab_common.c | 5 +----
> 3 files changed, 16 insertions(+), 10 deletions(-)
>
> diff --git a/Documentation/core-api/memory-allocation.rst b/Documentation/core-api/memory-allocation.rst
> index 5954ddf6ee13..f9e8d352ed67 100644
> --- a/Documentation/core-api/memory-allocation.rst
> +++ b/Documentation/core-api/memory-allocation.rst
> @@ -170,7 +170,14 @@ should be used if a part of the cache might be copied to the userspace.
> After the cache is created kmem_cache_alloc() and its convenience
> wrappers can allocate memory from that cache.
>
> -When the allocated memory is no longer needed it must be freed. You can
> -use kvfree() for the memory allocated with `kmalloc`, `vmalloc` and
> -`kvmalloc`. The slab caches should be freed with kmem_cache_free(). And
> -don't forget to destroy the cache with kmem_cache_destroy().
> +When the allocated memory is no longer needed it must be freed. Objects

I'd add a line break before Objects ^

> +allocated by `kmalloc` can be freed by `kfree` or `kvfree`.
> +Objects allocated by `kmem_cache_alloc` can be freed with `kmem_cache_free`
> +or also by `kfree` or `kvfree`, which can be more convenient as it does

Maybe replace 'or also by' with a comma:

Objects allocated by `kmem_cache_alloc` can be freed with `kmem_cache_free`,
`kfree` or `kvfree`, which can be more convenient as it does


> +not require the kmem_cache pointed.

^ pointer.

> +The rules for _bulk and _rcu flavors of freeing functions are analogical.

Maybe

The same rules apply to _bulk and _rcu flavors of freeing functions.

> +
> +Memory allocated by `vmalloc` can be freed with `vfree` or `kvfree`.
> +Memory allocated by `kvmalloc` can be freed with `kvfree`.
> +Caches created by `kmem_cache_create` should be freed with
> +`kmem_cache_destroy` only after freeing all the allocated objects first.
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index 094321c17e48..dcd2cf1e8326 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -976,8 +976,10 @@ static inline notrace void rcu_read_unlock_sched_notrace(void)
> * either fall back to use of call_rcu() or rearrange the structure to
> * position the rcu_head structure into the first 4096 bytes.
> *
> - * Note that the allowable offset might decrease in the future, for example,
> - * to allow something like kmem_cache_free_rcu().
> + * The object to be freed can be allocated either by kmalloc() or
> + * kmem_cache_alloc().
> + *
> + * Note that the allowable offset might decrease in the future.
> *
> * The BUILD_BUG_ON check must not involve any function calls, hence the
> * checks are done in macros here.
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 1522693295f5..607249785c07 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -989,12 +989,9 @@ EXPORT_SYMBOL(__kmalloc_node_track_caller);
>
> /**
> * kfree - free previously allocated memory
> - * @object: pointer returned by kmalloc.
> + * @object: pointer returned by kmalloc() or kmem_cache_alloc()
> *
> * If @object is NULL, no operation is performed.
> - *
> - * Don't free memory not originally allocated by kmalloc()
> - * or you will run into trouble.
> */
> void kfree(const void *object)
> {
> --
> 2.39.2
>

--
Sincerely yours,
Mike.

2023-03-13 16:33:39

by Steven Rostedt

Subject: Re: [PATCH 0/7] remove SLOB and allow kfree() with kmem_cache_alloc()

On Sun, 12 Mar 2023 11:51:29 +0200
Mike Rapoport <[email protected]> wrote:

> git grep -in slob still gives a couple of matches. I've dropped the
> irrelevant ones it it left me with these:
>
> CREDITS:14:D: SLOB slab allocator
> kernel/trace/ring_buffer.c:358: * Also stolen from mm/slob.c. Thanks to Mathieu Desnoyers for pointing
> mm/Kconfig:251: SLOB allocator and is not recommended for systems with more than
> mm/Makefile:25:KCOV_INSTRUMENT_slob.o := n
>
> Except the comment in kernel/trace/ring_buffer.c all are trivial.
>
> As for the comment in ring_buffer.c, it looks completely irrelevant at this
> point.
>
> @Steve?

You want me to remember something I wrote almost 15 years ago? I think I
understand that comment as much as you do. Yeah, that was when I was still
learning to write comments for my older self to understand, and I failed
miserably!

But git history comes to the rescue. The commit that added that comment was:

ed56829cb3195 ("ring_buffer: reset buffer page when freeing")

This was at a time when it was suggested to me to use the struct page
directly in the ring buffer and where we could do fun "tricks" for
"performance". (I was never really for this, but I wasn't going to argue).

And the code in question then had:

/*
 * Also stolen from mm/slob.c. Thanks to Mathieu Desnoyers for pointing
 * this issue out.
 */
static inline void free_buffer_page(struct buffer_page *bpage)
{
        reset_page_mapcount(&bpage->page);
        bpage->page.mapping = NULL;
        __free_page(&bpage->page);
}


But looking at commit: e4c2ce82ca27 ("ring_buffer: allocate buffer page
pointer")

It was finally decided that method was not safe, and we should not be using
struct page but just allocate an actual page (much safer!).

I never got rid of the comment, which was more about that
"reset_page_mapcount()", and should have been deleted back then.

Just remove that comment. And you could even add:

Suggested-by: Steven Rostedt (Google) <[email protected]>
Fixes: e4c2ce82ca27 ("ring_buffer: allocate buffer page pointer")

-- Steve

2023-03-13 16:37:33

by Vlastimil Babka

Subject: Re: [PATCH 0/7] remove SLOB and allow kfree() with kmem_cache_alloc()

On 3/12/23 10:51, Mike Rapoport wrote:
> Hi Vlastimil,
>
> On Fri, Mar 10, 2023 at 11:32:02AM +0100, Vlastimil Babka wrote:
>> Also in git:
>> https://git.kernel.org/vbabka/h/slab-remove-slob-v1r1
>>
>> The SLOB allocator was deprecated in 6.2 so I think we can start
>> exposing the complete removal in for-next and aim at 6.4 if there are no
>> complaints.
>>
>> Besides code cleanup, the main immediate benefit will be allowing
>> kfree() family of function to work on kmem_cache_alloc() objects (Patch
>> 7), which was incompatible with SLOB.
>>
>> This includes kfree_rcu() so I've updated the comment there to remove
>> the mention of potential future addition of kmem_cache_free_rcu() as
>> there should be no need for that now.
>>
>> Otherwise it's straightforward. Patch 2 is a cleanup in net area, that I
>> can either handle in slab tree or submit in net after SLOB is removed.
>> Another cleanup in tomoyo is already in the tomoyo tree as that didn't
>> need to wait until SLOB removal.
>>
>> Vlastimil Babka (7):
>> mm/slob: remove CONFIG_SLOB
>> net: skbuff: remove SLOB-specific ifdefs
>> mm, page_flags: remove PG_slob_free
>> mm, pagemap: remove SLOB and SLQB from comments and documentation
>> mm/slab: remove CONFIG_SLOB code from slab common code
>> mm/slob: remove slob.c
>> mm/slab: document kfree() as allowed for kmem_cache_alloc() objects
>>
>> Documentation/admin-guide/mm/pagemap.rst | 6 +-
>> Documentation/core-api/memory-allocation.rst | 15 +-
>> fs/proc/page.c | 5 +-
>> include/linux/page-flags.h | 4 -
>> include/linux/rcupdate.h | 6 +-
>> include/linux/slab.h | 39 -
>> init/Kconfig | 2 +-
>> kernel/configs/tiny.config | 1 -
>> mm/Kconfig | 22 -
>> mm/Makefile | 1 -
>> mm/slab.h | 61 --
>> mm/slab_common.c | 7 +-
>> mm/slob.c | 757 -------------------
>> net/core/skbuff.c | 16 -
>> tools/mm/page-types.c | 6 +-
>> 15 files changed, 23 insertions(+), 925 deletions(-)
>> delete mode 100644 mm/slob.c
>
> git grep -in slob still gives a couple of matches. I've dropped the
> irrelevant ones it it left me with these:
>
> CREDITS:14:D: SLOB slab allocator

I think it wouldn't be fair to remove that one as it's a historical record
of some sort?

> kernel/trace/ring_buffer.c:358: * Also stolen from mm/slob.c. Thanks to Mathieu Desnoyers for pointing
> mm/Kconfig:251: SLOB allocator and is not recommended for systems with more than

Yeah, that's the help text for SLUB_TINY, which can still help those who
migrate from SLOB.

> mm/Makefile:25:KCOV_INSTRUMENT_slob.o := n

That one I will remove, thanks!

> Except the comment in kernel/trace/ring_buffer.c all are trivial.
>
> As for the comment in ring_buffer.c, it looks completely irrelevant at this
> point.
>
> @Steve?
>
>> --
>> 2.39.2
>>
>


2023-03-13 18:00:38

by Mike Rapoport

Subject: Re: [PATCH 0/7] remove SLOB and allow kfree() with kmem_cache_alloc()

On Mon, Mar 13, 2023 at 12:31:47PM -0400, Steven Rostedt wrote:
> On Sun, 12 Mar 2023 11:51:29 +0200
> Mike Rapoport <[email protected]> wrote:
>
> > git grep -in slob still gives a couple of matches. I've dropped the
> > irrelevant ones it it left me with these:
> >
> > CREDITS:14:D: SLOB slab allocator
> > kernel/trace/ring_buffer.c:358: * Also stolen from mm/slob.c. Thanks to Mathieu Desnoyers for pointing
> > mm/Kconfig:251: SLOB allocator and is not recommended for systems with more than
> > mm/Makefile:25:KCOV_INSTRUMENT_slob.o := n
> >
> > Except the comment in kernel/trace/ring_buffer.c all are trivial.
> >
> > As for the comment in ring_buffer.c, it looks completely irrelevant at this
> > point.
> >
> > @Steve?
>
> You want me to remember something I wrote almost 15 years ago?

I just wanted to make sure you don't have a problem with removing this
comment :)

> I think I understand that comment as much as you do. Yeah, that was when
> I was still learning to write comments for my older self to understand,
> and I failed miserably!
>
> But git history comes to the rescue. The commit that added that comment was:
>
> ed56829cb3195 ("ring_buffer: reset buffer page when freeing")
>
> This was at a time when it was suggested to me to use the struct page
> directly in the ring buffer and where we could do fun "tricks" for
> "performance". (I was never really for this, but I wasn't going to argue).
>
> And the code in question then had:
>
> /*
> * Also stolen from mm/slob.c. Thanks to Mathieu Desnoyers for pointing
> * this issue out.
> */
> static inline void free_buffer_page(struct buffer_page *bpage)
> {
> reset_page_mapcount(&bpage->page);
> bpage->page.mapping = NULL;
> __free_page(&bpage->page);
> }
>
>
> But looking at commit: e4c2ce82ca27 ("ring_buffer: allocate buffer page
> pointer")
>
> It was finally decided that method was not safe, and we should not be using
> struct page but just allocate an actual page (much safer!).
>
> I never got rid of the comment, which was more about that
> "reset_page_mapcount()", and should have been deleted back then.

Yeah, I did the same analysis, just was too lazy to post it.

> Just remove that comment. And you could even add:
>
> Suggested-by: Steven Rostedt (Google) <[email protected]>
> Fixes: e4c2ce82ca27 ("ring_buffer: allocate buffer page pointer")
>
> -- Steve

--
Sincerely yours,
Mike.

2023-03-14 07:18:25

by Hyeonggon Yoo

Subject: Re: [PATCH 1/7] mm/slob: remove CONFIG_SLOB

On Fri, Mar 10, 2023 at 11:32:03AM +0100, Vlastimil Babka wrote:
> Remove SLOB from Kconfig and Makefile. Everything under #ifdef
> CONFIG_SLOB, and mm/slob.c is now dead code.
>
> Signed-off-by: Vlastimil Babka <[email protected]>
> ---
> init/Kconfig | 2 +-
> kernel/configs/tiny.config | 1 -
> mm/Kconfig | 22 ----------------------
> mm/Makefile | 1 -
> 4 files changed, 1 insertion(+), 25 deletions(-)
>
> diff --git a/init/Kconfig b/init/Kconfig
> index 1fb5f313d18f..72ac3f66bc27 100644
> --- a/init/Kconfig
> +++ b/init/Kconfig
> @@ -973,7 +973,7 @@ config MEMCG
>
> config MEMCG_KMEM
> bool
> - depends on MEMCG && !SLOB
> + depends on MEMCG
> default y
>
> config BLK_CGROUP
> diff --git a/kernel/configs/tiny.config b/kernel/configs/tiny.config
> index c2f9c912df1c..144b2bd86b14 100644
> --- a/kernel/configs/tiny.config
> +++ b/kernel/configs/tiny.config
> @@ -7,6 +7,5 @@ CONFIG_KERNEL_XZ=y
> # CONFIG_KERNEL_LZO is not set
> # CONFIG_KERNEL_LZ4 is not set
> # CONFIG_SLAB is not set
> -# CONFIG_SLOB_DEPRECATED is not set
> CONFIG_SLUB=y
> CONFIG_SLUB_TINY=y
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 4751031f3f05..669399ab693c 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -238,30 +238,8 @@ config SLUB
> and has enhanced diagnostics. SLUB is the default choice for
> a slab allocator.
>
> -config SLOB_DEPRECATED
> - depends on EXPERT
> - bool "SLOB (Simple Allocator - DEPRECATED)"
> - depends on !PREEMPT_RT
> - help
> - Deprecated and scheduled for removal in a few cycles. SLUB
> - recommended as replacement. CONFIG_SLUB_TINY can be considered
> - on systems with 16MB or less RAM.
> -
> - If you need SLOB to stay, please contact [email protected] and
> - people listed in the SLAB ALLOCATOR section of MAINTAINERS file,
> - with your use case.
> -
> - SLOB replaces the stock allocator with a drastically simpler
> - allocator. SLOB is generally more space efficient but
> - does not perform as well on large systems.
> -
> endchoice
>
> -config SLOB
> - bool
> - default y
> - depends on SLOB_DEPRECATED
> -
> config SLUB_TINY
> bool "Configure SLUB for minimal memory footprint"
> depends on SLUB && EXPERT
> diff --git a/mm/Makefile b/mm/Makefile
> index 8e105e5b3e29..2d9c1e7f6085 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -81,7 +81,6 @@ obj-$(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP) += hugetlb_vmemmap.o
> obj-$(CONFIG_NUMA) += mempolicy.o
> obj-$(CONFIG_SPARSEMEM) += sparse.o
> obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
> -obj-$(CONFIG_SLOB) += slob.o
> obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o
> obj-$(CONFIG_KSM) += ksm.o
> obj-$(CONFIG_PAGE_POISONING) += page_poison.o

With what Mike pointed out:
(removing 'mm/Makefile:KCOV_INSTRUMENT_slob.o := n')

Acked-by: Hyeonggon Yoo <[email protected]>

> --
> 2.39.2
>

2023-03-14 07:26:07

by Hyeonggon Yoo

[permalink] [raw]
Subject: Re: [PATCH 3/7] mm, page_flags: remove PG_slob_free

On Fri, Mar 10, 2023 at 11:32:05AM +0100, Vlastimil Babka wrote:
> With SLOB removed we no longer need the PG_slob_free alias for
> PG_private. Also update tools/mm/page-types.
>
> Signed-off-by: Vlastimil Babka <[email protected]>
> ---
> include/linux/page-flags.h | 4 ----
> tools/mm/page-types.c | 6 +-----
> 2 files changed, 1 insertion(+), 9 deletions(-)
>
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index a7e3a3405520..2bdc41cb0594 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -174,9 +174,6 @@ enum pageflags {
> /* Remapped by swiotlb-xen. */
> PG_xen_remapped = PG_owner_priv_1,
>
> - /* SLOB */
> - PG_slob_free = PG_private,
> -
> #ifdef CONFIG_MEMORY_FAILURE
> /*
> * Compound pages. Stored in first tail page's flags.
> @@ -483,7 +480,6 @@ PAGEFLAG(Active, active, PF_HEAD) __CLEARPAGEFLAG(Active, active, PF_HEAD)
> PAGEFLAG(Workingset, workingset, PF_HEAD)
> TESTCLEARFLAG(Workingset, workingset, PF_HEAD)
> __PAGEFLAG(Slab, slab, PF_NO_TAIL)
> -__PAGEFLAG(SlobFree, slob_free, PF_NO_TAIL)
> PAGEFLAG(Checked, checked, PF_NO_COMPOUND) /* Used by some filesystems */
>
> /* Xen */
> diff --git a/tools/mm/page-types.c b/tools/mm/page-types.c
> index 381dcc00cb62..8d5595b6c59f 100644
> --- a/tools/mm/page-types.c
> +++ b/tools/mm/page-types.c
> @@ -85,7 +85,6 @@
> */
> #define KPF_ANON_EXCLUSIVE 47
> #define KPF_READAHEAD 48
> -#define KPF_SLOB_FREE 49
> #define KPF_SLUB_FROZEN 50
> #define KPF_SLUB_DEBUG 51
> #define KPF_FILE 61
> @@ -141,7 +140,6 @@ static const char * const page_flag_names[] = {
>
> [KPF_ANON_EXCLUSIVE] = "d:anon_exclusive",
> [KPF_READAHEAD] = "I:readahead",
> - [KPF_SLOB_FREE] = "P:slob_free",
> [KPF_SLUB_FROZEN] = "A:slub_frozen",
> [KPF_SLUB_DEBUG] = "E:slub_debug",
>
> @@ -478,10 +476,8 @@ static uint64_t expand_overloaded_flags(uint64_t flags, uint64_t pme)
> if ((flags & BIT(ANON)) && (flags & BIT(MAPPEDTODISK)))
> flags ^= BIT(MAPPEDTODISK) | BIT(ANON_EXCLUSIVE);
>
> - /* SLOB/SLUB overload several page flags */
> + /* SLUB overloads several page flags */
> if (flags & BIT(SLAB)) {
> - if (flags & BIT(PRIVATE))
> - flags ^= BIT(PRIVATE) | BIT(SLOB_FREE);
> if (flags & BIT(ACTIVE))
> flags ^= BIT(ACTIVE) | BIT(SLUB_FROZEN);
> if (flags & BIT(ERROR))
> --
> 2.39.2

Acked-by: Hyeonggon Yoo <[email protected]>

2023-03-14 08:20:59

by Hyeonggon Yoo

[permalink] [raw]
Subject: Re: [PATCH 4/7] mm, pagemap: remove SLOB and SLQB from comments and documentation

On Fri, Mar 10, 2023 at 11:32:06AM +0100, Vlastimil Babka wrote:
> SLOB has been removed and SLQB never merged, so remove their mentions
> from comments and documentation of pagemap.
>
> Signed-off-by: Vlastimil Babka <[email protected]>
> ---
> Documentation/admin-guide/mm/pagemap.rst | 6 +++---
> fs/proc/page.c | 5 ++---
> 2 files changed, 5 insertions(+), 6 deletions(-)
>
> diff --git a/Documentation/admin-guide/mm/pagemap.rst b/Documentation/admin-guide/mm/pagemap.rst
> index b5f970dc91e7..bb4aa897a773 100644
> --- a/Documentation/admin-guide/mm/pagemap.rst
> +++ b/Documentation/admin-guide/mm/pagemap.rst
> @@ -91,9 +91,9 @@ Short descriptions to the page flags
> The page is being locked for exclusive access, e.g. by undergoing read/write
> IO.
> 7 - SLAB
> - The page is managed by the SLAB/SLOB/SLUB/SLQB kernel memory allocator.
> - When compound page is used, SLUB/SLQB will only set this flag on the head
> - page; SLOB will not flag it at all.
> + The page is managed by the SLAB/SLUB kernel memory allocator.
> + When compound page is used, either will only set this flag on the head
> + page..
> 10 - BUDDY
> A free memory block managed by the buddy system allocator.
> The buddy system organizes free memory in blocks of various orders.
> diff --git a/fs/proc/page.c b/fs/proc/page.c
> index 6249c347809a..1356aeffd8dc 100644
> --- a/fs/proc/page.c
> +++ b/fs/proc/page.c
> @@ -125,7 +125,7 @@ u64 stable_page_flags(struct page *page)
> /*
> * pseudo flags for the well known (anonymous) memory mapped pages
> *
> - * Note that page->_mapcount is overloaded in SLOB/SLUB/SLQB, so the
> + * Note that page->_mapcount is overloaded in SLAB/SLUB, so the

SLUB does not overload _mapcount.

> * simple test in page_mapped() is not enough.
> */
> if (!PageSlab(page) && page_mapped(page))
> @@ -166,8 +166,7 @@ u64 stable_page_flags(struct page *page)
>
> /*
> * Caveats on high order pages: page->_refcount will only be set
> - * -1 on the head page; SLUB/SLQB do the same for PG_slab;
> - * SLOB won't set PG_slab at all on compound pages.
> + * -1 on the head page; SLAB/SLUB do the same for PG_slab;

I think this comment could be just saying that PG_buddy is only set on
head page, not saying

_refcount is set to -1 on head page (is it even correct?)

> */
> if (PageBuddy(page))
> u |= 1 << KPF_BUDDY;
> --
> 2.39.2
>

2023-03-14 09:29:04

by Hyeonggon Yoo

[permalink] [raw]
Subject: Re: [PATCH 5/7] mm/slab: remove CONFIG_SLOB code from slab common code

On Fri, Mar 10, 2023 at 11:32:07AM +0100, Vlastimil Babka wrote:
> CONFIG_SLOB has been removed from Kconfig. Remove code and #ifdef's
> specific to SLOB in the slab headers and common code.
>
> Signed-off-by: Vlastimil Babka <[email protected]>
> ---
> include/linux/slab.h | 39 ----------------------------
> mm/slab.h | 61 --------------------------------------------
> mm/slab_common.c | 2 --
> 3 files changed, 102 deletions(-)
>
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 45af70315a94..7f645a4c1298 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -298,19 +298,6 @@ static inline unsigned int arch_slab_minalign(void)
> #endif
> #endif
>
> -#ifdef CONFIG_SLOB
> -/*
> - * SLOB passes all requests larger than one page to the page allocator.
> - * No kmalloc array is necessary since objects of different sizes can
> - * be allocated from the same page.
> - */
> -#define KMALLOC_SHIFT_HIGH PAGE_SHIFT
> -#define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT - 1)
> -#ifndef KMALLOC_SHIFT_LOW
> -#define KMALLOC_SHIFT_LOW 3
> -#endif
> -#endif
> -
> /* Maximum allocatable size */
> #define KMALLOC_MAX_SIZE (1UL << KMALLOC_SHIFT_MAX)
> /* Maximum size for which we actually use a slab cache */
> @@ -366,7 +353,6 @@ enum kmalloc_cache_type {
> NR_KMALLOC_TYPES
> };
>
> -#ifndef CONFIG_SLOB
> extern struct kmem_cache *
> kmalloc_caches[NR_KMALLOC_TYPES][KMALLOC_SHIFT_HIGH + 1];
>
> @@ -458,7 +444,6 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
> }
> static_assert(PAGE_SHIFT <= 20);
> #define kmalloc_index(s) __kmalloc_index(s, true)
> -#endif /* !CONFIG_SLOB */
>
> void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __alloc_size(1);
>
> @@ -487,10 +472,6 @@ void kmem_cache_free(struct kmem_cache *s, void *objp);
> void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p);
> int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, void **p);
>
> -/*
> - * Caller must not use kfree_bulk() on memory not originally allocated
> - * by kmalloc(), because the SLOB allocator cannot handle this.
> - */
> static __always_inline void kfree_bulk(size_t size, void **p)
> {
> kmem_cache_free_bulk(NULL, size, p);
> @@ -567,7 +548,6 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node) __assume_page_align
> * Try really hard to succeed the allocation but fail
> * eventually.
> */
> -#ifndef CONFIG_SLOB
> static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
> {
> if (__builtin_constant_p(size) && size) {
> @@ -583,17 +563,7 @@ static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
> }
> return __kmalloc(size, flags);
> }
> -#else
> -static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
> -{
> - if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
> - return kmalloc_large(size, flags);
> -
> - return __kmalloc(size, flags);
> -}
> -#endif
>
> -#ifndef CONFIG_SLOB
> static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
> {
> if (__builtin_constant_p(size) && size) {
> @@ -609,15 +579,6 @@ static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t fla
> }
> return __kmalloc_node(size, flags, node);
> }
> -#else
> -static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
> -{
> - if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
> - return kmalloc_large_node(size, flags, node);
> -
> - return __kmalloc_node(size, flags, node);
> -}
> -#endif
>
> /**
> * kmalloc_array - allocate memory for an array.
> diff --git a/mm/slab.h b/mm/slab.h
> index 43966aa5fadf..399966b3ce52 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -51,14 +51,6 @@ struct slab {
> };
> unsigned int __unused;
>
> -#elif defined(CONFIG_SLOB)
> -
> - struct list_head slab_list;
> - void *__unused_1;
> - void *freelist; /* first free block */
> - long units;
> - unsigned int __unused_2;
> -
> #else
> #error "Unexpected slab allocator configured"
> #endif
> @@ -72,11 +64,7 @@ struct slab {
> #define SLAB_MATCH(pg, sl) \
> static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))
> SLAB_MATCH(flags, __page_flags);
> -#ifndef CONFIG_SLOB
> SLAB_MATCH(compound_head, slab_cache); /* Ensure bit 0 is clear */
> -#else
> -SLAB_MATCH(compound_head, slab_list); /* Ensure bit 0 is clear */
> -#endif
> SLAB_MATCH(_refcount, __page_refcount);
> #ifdef CONFIG_MEMCG
> SLAB_MATCH(memcg_data, memcg_data);
> @@ -200,31 +188,6 @@ static inline size_t slab_size(const struct slab *slab)
> return PAGE_SIZE << slab_order(slab);
> }
>
> -#ifdef CONFIG_SLOB
> -/*
> - * Common fields provided in kmem_cache by all slab allocators
> - * This struct is either used directly by the allocator (SLOB)
> - * or the allocator must include definitions for all fields
> - * provided in kmem_cache_common in their definition of kmem_cache.
> - *
> - * Once we can do anonymous structs (C11 standard) we could put a
> - * anonymous struct definition in these allocators so that the
> - * separate allocations in the kmem_cache structure of SLAB and
> - * SLUB is no longer needed.
> - */
> -struct kmem_cache {
> - unsigned int object_size;/* The original size of the object */
> - unsigned int size; /* The aligned/padded/added on size */
> - unsigned int align; /* Alignment as calculated */
> - slab_flags_t flags; /* Active flags on the slab */
> - const char *name; /* Slab name for sysfs */
> - int refcount; /* Use counter */
> - void (*ctor)(void *); /* Called on object slot creation */
> - struct list_head list; /* List of all slab caches on the system */
> -};
> -
> -#endif /* CONFIG_SLOB */
> -
> #ifdef CONFIG_SLAB
> #include <linux/slab_def.h>
> #endif
> @@ -274,7 +237,6 @@ extern const struct kmalloc_info_struct {
> unsigned int size;
> } kmalloc_info[];
>
> -#ifndef CONFIG_SLOB
> /* Kmalloc array related functions */
> void setup_kmalloc_cache_index_table(void);
> void create_kmalloc_caches(slab_flags_t);
> @@ -286,7 +248,6 @@ void *__kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags,
> int node, size_t orig_size,
> unsigned long caller);
> void __kmem_cache_free(struct kmem_cache *s, void *x, unsigned long caller);
> -#endif
>
> gfp_t kmalloc_fix_flags(gfp_t flags);
>
> @@ -303,33 +264,16 @@ extern void create_boot_cache(struct kmem_cache *, const char *name,
> int slab_unmergeable(struct kmem_cache *s);
> struct kmem_cache *find_mergeable(unsigned size, unsigned align,
> slab_flags_t flags, const char *name, void (*ctor)(void *));
> -#ifndef CONFIG_SLOB
> struct kmem_cache *
> __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
> slab_flags_t flags, void (*ctor)(void *));
>
> slab_flags_t kmem_cache_flags(unsigned int object_size,
> slab_flags_t flags, const char *name);
> -#else
> -static inline struct kmem_cache *
> -__kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
> - slab_flags_t flags, void (*ctor)(void *))
> -{ return NULL; }
> -
> -static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
> - slab_flags_t flags, const char *name)
> -{
> - return flags;
> -}
> -#endif
>
> static inline bool is_kmalloc_cache(struct kmem_cache *s)
> {
> -#ifndef CONFIG_SLOB
> return (s->flags & SLAB_KMALLOC);
> -#else
> - return false;
> -#endif
> }
>
> /* Legal flag mask for kmem_cache_create(), for various configurations */
> @@ -634,7 +578,6 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
> }
> #endif /* CONFIG_MEMCG_KMEM */
>
> -#ifndef CONFIG_SLOB
> static inline struct kmem_cache *virt_to_cache(const void *obj)
> {
> struct slab *slab;
> @@ -684,8 +627,6 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
>
> void free_large_kmalloc(struct folio *folio, void *object);
>
> -#endif /* CONFIG_SLOB */
> -
> size_t __ksize(const void *objp);
>
> static inline size_t slab_ksize(const struct kmem_cache *s)
> @@ -777,7 +718,6 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
> memcg_slab_post_alloc_hook(s, objcg, flags, size, p);
> }
>
> -#ifndef CONFIG_SLOB
> /*
> * The slab lists for all objects.
> */
> @@ -824,7 +764,6 @@ static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
> for (__node = 0; __node < nr_node_ids; __node++) \
> if ((__n = get_node(__s, __node)))
>
> -#endif
>
> #if defined(CONFIG_SLAB) || defined(CONFIG_SLUB_DEBUG)
> void dump_unreclaimable_slab(void);
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index bf4e777cfe90..1522693295f5 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -625,7 +625,6 @@ void kmem_dump_obj(void *object)
> EXPORT_SYMBOL_GPL(kmem_dump_obj);
> #endif
>
> -#ifndef CONFIG_SLOB
> /* Create a cache during boot when no slab services are available yet */
> void __init create_boot_cache(struct kmem_cache *s, const char *name,
> unsigned int size, slab_flags_t flags,
> @@ -1079,7 +1078,6 @@ void *kmalloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
> return ret;
> }
> EXPORT_SYMBOL(kmalloc_node_trace);
> -#endif /* !CONFIG_SLOB */
>
> gfp_t kmalloc_fix_flags(gfp_t flags)
> {

Looks good to me,

Reviewed-by: Hyeonggon Yoo <[email protected]>

> --
> 2.39.2
>

2023-03-14 09:34:20

by Hyeonggon Yoo

[permalink] [raw]
Subject: Re: [PATCH 6/7] mm/slob: remove slob.c

On Fri, Mar 10, 2023 at 11:32:08AM +0100, Vlastimil Babka wrote:
> Remove the SLOB implementation.
>
> RIP SLOB allocator (2006 - 2023)
>
> Signed-off-by: Vlastimil Babka <[email protected]>

Goodbye to SLOB.

Acked-by: Hyeonggon Yoo <[email protected]>

> ---
> mm/slob.c | 757 ------------------------------------------------------
> 1 file changed, 757 deletions(-)
> delete mode 100644 mm/slob.c
>
> diff --git a/mm/slob.c b/mm/slob.c
> deleted file mode 100644
> index fe567fcfa3a3..000000000000
> --- a/mm/slob.c
> +++ /dev/null

2023-03-14 22:13:27

by Lorenzo Stoakes

[permalink] [raw]
Subject: Re: [PATCH 0/7] remove SLOB and allow kfree() with kmem_cache_alloc()

On Mon, Mar 13, 2023 at 05:36:44PM +0100, Vlastimil Babka wrote:
> > git grep -in slob still gives a couple of matches. I've dropped the
> > irrelevant ones and it left me with these:

I see an #ifdef in security/tomoyo/common.h which I guess is not really
relevant? And certainly not harmful in practice. Thought it might be nice to
eliminate the last reference to CONFIG_SLOB in the kernel :)

2023-03-14 22:14:30

by Lorenzo Stoakes

[permalink] [raw]
Subject: Re: [PATCH 1/7] mm/slob: remove CONFIG_SLOB

On Fri, Mar 10, 2023 at 11:32:03AM +0100, Vlastimil Babka wrote:
> Remove SLOB from Kconfig and Makefile. Everything under #ifdef
> CONFIG_SLOB, and mm/slob.c is now dead code.
>
> Signed-off-by: Vlastimil Babka <[email protected]>
> ---
> init/Kconfig | 2 +-
> kernel/configs/tiny.config | 1 -
> mm/Kconfig | 22 ----------------------
> mm/Makefile | 1 -
> 4 files changed, 1 insertion(+), 25 deletions(-)
>
> diff --git a/init/Kconfig b/init/Kconfig
> index 1fb5f313d18f..72ac3f66bc27 100644
> --- a/init/Kconfig
> +++ b/init/Kconfig
> @@ -973,7 +973,7 @@ config MEMCG
>
> config MEMCG_KMEM
> bool
> - depends on MEMCG && !SLOB
> + depends on MEMCG
> default y
>
> config BLK_CGROUP
> diff --git a/kernel/configs/tiny.config b/kernel/configs/tiny.config
> index c2f9c912df1c..144b2bd86b14 100644
> --- a/kernel/configs/tiny.config
> +++ b/kernel/configs/tiny.config
> @@ -7,6 +7,5 @@ CONFIG_KERNEL_XZ=y
> # CONFIG_KERNEL_LZO is not set
> # CONFIG_KERNEL_LZ4 is not set
> # CONFIG_SLAB is not set
> -# CONFIG_SLOB_DEPRECATED is not set
> CONFIG_SLUB=y
> CONFIG_SLUB_TINY=y
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 4751031f3f05..669399ab693c 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -238,30 +238,8 @@ config SLUB
> and has enhanced diagnostics. SLUB is the default choice for
> a slab allocator.
>
> -config SLOB_DEPRECATED
> - depends on EXPERT
> - bool "SLOB (Simple Allocator - DEPRECATED)"
> - depends on !PREEMPT_RT
> - help
> - Deprecated and scheduled for removal in a few cycles. SLUB
> - recommended as replacement. CONFIG_SLUB_TINY can be considered
> - on systems with 16MB or less RAM.
> -
> - If you need SLOB to stay, please contact [email protected] and
> - people listed in the SLAB ALLOCATOR section of MAINTAINERS file,
> - with your use case.
> -
> - SLOB replaces the stock allocator with a drastically simpler
> - allocator. SLOB is generally more space efficient but
> - does not perform as well on large systems.
> -
> endchoice
>
> -config SLOB
> - bool
> - default y
> - depends on SLOB_DEPRECATED
> -
> config SLUB_TINY
> bool "Configure SLUB for minimal memory footprint"
> depends on SLUB && EXPERT
> diff --git a/mm/Makefile b/mm/Makefile
> index 8e105e5b3e29..2d9c1e7f6085 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -81,7 +81,6 @@ obj-$(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP) += hugetlb_vmemmap.o
> obj-$(CONFIG_NUMA) += mempolicy.o
> obj-$(CONFIG_SPARSEMEM) += sparse.o
> obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
> -obj-$(CONFIG_SLOB) += slob.o
> obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o
> obj-$(CONFIG_KSM) += ksm.o
> obj-$(CONFIG_PAGE_POISONING) += page_poison.o
> --
> 2.39.2
>

Looks good to me too,

Acked-by: Lorenzo Stoakes <[email protected]>

2023-03-14 22:15:24

by Lorenzo Stoakes

[permalink] [raw]
Subject: Re: [PATCH 3/7] mm, page_flags: remove PG_slob_free

On Fri, Mar 10, 2023 at 11:32:05AM +0100, Vlastimil Babka wrote:
> With SLOB removed we no longer need the PG_slob_free alias for
> PG_private. Also update tools/mm/page-types.
>
> Signed-off-by: Vlastimil Babka <[email protected]>
> ---
> include/linux/page-flags.h | 4 ----
> tools/mm/page-types.c | 6 +-----
> 2 files changed, 1 insertion(+), 9 deletions(-)
>
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index a7e3a3405520..2bdc41cb0594 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -174,9 +174,6 @@ enum pageflags {
> /* Remapped by swiotlb-xen. */
> PG_xen_remapped = PG_owner_priv_1,
>
> - /* SLOB */
> - PG_slob_free = PG_private,
> -
> #ifdef CONFIG_MEMORY_FAILURE
> /*
> * Compound pages. Stored in first tail page's flags.
> @@ -483,7 +480,6 @@ PAGEFLAG(Active, active, PF_HEAD) __CLEARPAGEFLAG(Active, active, PF_HEAD)
> PAGEFLAG(Workingset, workingset, PF_HEAD)
> TESTCLEARFLAG(Workingset, workingset, PF_HEAD)
> __PAGEFLAG(Slab, slab, PF_NO_TAIL)
> -__PAGEFLAG(SlobFree, slob_free, PF_NO_TAIL)
> PAGEFLAG(Checked, checked, PF_NO_COMPOUND) /* Used by some filesystems */
>
> /* Xen */
> diff --git a/tools/mm/page-types.c b/tools/mm/page-types.c
> index 381dcc00cb62..8d5595b6c59f 100644
> --- a/tools/mm/page-types.c
> +++ b/tools/mm/page-types.c
> @@ -85,7 +85,6 @@
> */
> #define KPF_ANON_EXCLUSIVE 47
> #define KPF_READAHEAD 48
> -#define KPF_SLOB_FREE 49
> #define KPF_SLUB_FROZEN 50
> #define KPF_SLUB_DEBUG 51
> #define KPF_FILE 61
> @@ -141,7 +140,6 @@ static const char * const page_flag_names[] = {
>
> [KPF_ANON_EXCLUSIVE] = "d:anon_exclusive",
> [KPF_READAHEAD] = "I:readahead",
> - [KPF_SLOB_FREE] = "P:slob_free",
> [KPF_SLUB_FROZEN] = "A:slub_frozen",
> [KPF_SLUB_DEBUG] = "E:slub_debug",
>
> @@ -478,10 +476,8 @@ static uint64_t expand_overloaded_flags(uint64_t flags, uint64_t pme)
> if ((flags & BIT(ANON)) && (flags & BIT(MAPPEDTODISK)))
> flags ^= BIT(MAPPEDTODISK) | BIT(ANON_EXCLUSIVE);
>
> - /* SLOB/SLUB overload several page flags */
> + /* SLUB overloads several page flags */
> if (flags & BIT(SLAB)) {
> - if (flags & BIT(PRIVATE))
> - flags ^= BIT(PRIVATE) | BIT(SLOB_FREE);
> if (flags & BIT(ACTIVE))
> flags ^= BIT(ACTIVE) | BIT(SLUB_FROZEN);
> if (flags & BIT(ERROR))
> --
> 2.39.2
>

Looks good to me too,

Acked-by: Lorenzo Stoakes <[email protected]>

2023-03-14 22:18:41

by Lorenzo Stoakes

[permalink] [raw]
Subject: Re: [PATCH 4/7] mm, pagemap: remove SLOB and SLQB from comments and documentation

On Fri, Mar 10, 2023 at 11:32:06AM +0100, Vlastimil Babka wrote:
> SLOB has been removed and SLQB never merged, so remove their mentions
> from comments and documentation of pagemap.
>
> Signed-off-by: Vlastimil Babka <[email protected]>
> ---
> Documentation/admin-guide/mm/pagemap.rst | 6 +++---
> fs/proc/page.c | 5 ++---
> 2 files changed, 5 insertions(+), 6 deletions(-)
>
> diff --git a/Documentation/admin-guide/mm/pagemap.rst b/Documentation/admin-guide/mm/pagemap.rst
> index b5f970dc91e7..bb4aa897a773 100644
> --- a/Documentation/admin-guide/mm/pagemap.rst
> +++ b/Documentation/admin-guide/mm/pagemap.rst
> @@ -91,9 +91,9 @@ Short descriptions to the page flags
> The page is being locked for exclusive access, e.g. by undergoing read/write
> IO.
> 7 - SLAB
> - The page is managed by the SLAB/SLOB/SLUB/SLQB kernel memory allocator.
> - When compound page is used, SLUB/SLQB will only set this flag on the head
> - page; SLOB will not flag it at all.
> + The page is managed by the SLAB/SLUB kernel memory allocator.
> + When compound page is used, either will only set this flag on the head
> + page..

I mean, perhaps the nittiest of nits but probably that '..' is unintended.

> 10 - BUDDY
> A free memory block managed by the buddy system allocator.
> The buddy system organizes free memory in blocks of various orders.
> diff --git a/fs/proc/page.c b/fs/proc/page.c
> index 6249c347809a..1356aeffd8dc 100644
> --- a/fs/proc/page.c
> +++ b/fs/proc/page.c
> @@ -125,7 +125,7 @@ u64 stable_page_flags(struct page *page)
> /*
> * pseudo flags for the well known (anonymous) memory mapped pages
> *
> - * Note that page->_mapcount is overloaded in SLOB/SLUB/SLQB, so the
> + * Note that page->_mapcount is overloaded in SLAB/SLUB, so the
> * simple test in page_mapped() is not enough.
> */
> if (!PageSlab(page) && page_mapped(page))
> @@ -166,8 +166,7 @@ u64 stable_page_flags(struct page *page)
>
> /*
> * Caveats on high order pages: page->_refcount will only be set
> - * -1 on the head page; SLUB/SLQB do the same for PG_slab;
> - * SLOB won't set PG_slab at all on compound pages.
> + * -1 on the head page; SLAB/SLUB do the same for PG_slab;

Nice catch on the redundant reference to the mysterious SLQB (+ above) :)

> */
> if (PageBuddy(page))
> u |= 1 << KPF_BUDDY;
> --
> 2.39.2
>

Otherwise looks good to me,

Acked-by: Lorenzo Stoakes <[email protected]>

2023-03-14 22:19:33

by Lorenzo Stoakes

[permalink] [raw]
Subject: Re: [PATCH 5/7] mm/slab: remove CONFIG_SLOB code from slab common code

On Tue, Mar 14, 2023 at 09:28:32AM +0000, Hyeonggon Yoo wrote:
> On Fri, Mar 10, 2023 at 11:32:07AM +0100, Vlastimil Babka wrote:
> > CONFIG_SLOB has been removed from Kconfig. Remove code and #ifdef's
> > specific to SLOB in the slab headers and common code.
> >
> > Signed-off-by: Vlastimil Babka <[email protected]>
> > ---
> > include/linux/slab.h | 39 ----------------------------
> > mm/slab.h | 61 --------------------------------------------
> > mm/slab_common.c | 2 --
> > 3 files changed, 102 deletions(-)
> >
> > diff --git a/include/linux/slab.h b/include/linux/slab.h
> > index 45af70315a94..7f645a4c1298 100644
> > --- a/include/linux/slab.h
> > +++ b/include/linux/slab.h
> > @@ -298,19 +298,6 @@ static inline unsigned int arch_slab_minalign(void)
> > #endif
> > #endif
> >
> > -#ifdef CONFIG_SLOB
> > -/*
> > - * SLOB passes all requests larger than one page to the page allocator.
> > - * No kmalloc array is necessary since objects of different sizes can
> > - * be allocated from the same page.
> > - */
> > -#define KMALLOC_SHIFT_HIGH PAGE_SHIFT
> > -#define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT - 1)
> > -#ifndef KMALLOC_SHIFT_LOW
> > -#define KMALLOC_SHIFT_LOW 3
> > -#endif
> > -#endif
> > -
> > /* Maximum allocatable size */
> > #define KMALLOC_MAX_SIZE (1UL << KMALLOC_SHIFT_MAX)
> > /* Maximum size for which we actually use a slab cache */
> > @@ -366,7 +353,6 @@ enum kmalloc_cache_type {
> > NR_KMALLOC_TYPES
> > };
> >
> > -#ifndef CONFIG_SLOB
> > extern struct kmem_cache *
> > kmalloc_caches[NR_KMALLOC_TYPES][KMALLOC_SHIFT_HIGH + 1];
> >
> > @@ -458,7 +444,6 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
> > }
> > static_assert(PAGE_SHIFT <= 20);
> > #define kmalloc_index(s) __kmalloc_index(s, true)
> > -#endif /* !CONFIG_SLOB */
> >
> > void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __alloc_size(1);
> >
> > @@ -487,10 +472,6 @@ void kmem_cache_free(struct kmem_cache *s, void *objp);
> > void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p);
> > int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, void **p);
> >
> > -/*
> > - * Caller must not use kfree_bulk() on memory not originally allocated
> > - * by kmalloc(), because the SLOB allocator cannot handle this.
> > - */
> > static __always_inline void kfree_bulk(size_t size, void **p)
> > {
> > kmem_cache_free_bulk(NULL, size, p);
> > @@ -567,7 +548,6 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node) __assume_page_align
> > * Try really hard to succeed the allocation but fail
> > * eventually.
> > */
> > -#ifndef CONFIG_SLOB
> > static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
> > {
> > if (__builtin_constant_p(size) && size) {
> > @@ -583,17 +563,7 @@ static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
> > }
> > return __kmalloc(size, flags);
> > }
> > -#else
> > -static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
> > -{
> > - if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
> > - return kmalloc_large(size, flags);
> > -
> > - return __kmalloc(size, flags);
> > -}
> > -#endif
> >
> > -#ifndef CONFIG_SLOB
> > static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
> > {
> > if (__builtin_constant_p(size) && size) {
> > @@ -609,15 +579,6 @@ static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t fla
> > }
> > return __kmalloc_node(size, flags, node);
> > }
> > -#else
> > -static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
> > -{
> > - if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
> > - return kmalloc_large_node(size, flags, node);
> > -
> > - return __kmalloc_node(size, flags, node);
> > -}
> > -#endif
> >
> > /**
> > * kmalloc_array - allocate memory for an array.
> > diff --git a/mm/slab.h b/mm/slab.h
> > index 43966aa5fadf..399966b3ce52 100644
> > --- a/mm/slab.h
> > +++ b/mm/slab.h
> > @@ -51,14 +51,6 @@ struct slab {
> > };
> > unsigned int __unused;
> >
> > -#elif defined(CONFIG_SLOB)
> > -
> > - struct list_head slab_list;
> > - void *__unused_1;
> > - void *freelist; /* first free block */
> > - long units;
> > - unsigned int __unused_2;
> > -
> > #else
> > #error "Unexpected slab allocator configured"
> > #endif
> > @@ -72,11 +64,7 @@ struct slab {
> > #define SLAB_MATCH(pg, sl) \
> > static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))
> > SLAB_MATCH(flags, __page_flags);
> > -#ifndef CONFIG_SLOB
> > SLAB_MATCH(compound_head, slab_cache); /* Ensure bit 0 is clear */
> > -#else
> > -SLAB_MATCH(compound_head, slab_list); /* Ensure bit 0 is clear */
> > -#endif
> > SLAB_MATCH(_refcount, __page_refcount);
> > #ifdef CONFIG_MEMCG
> > SLAB_MATCH(memcg_data, memcg_data);
> > @@ -200,31 +188,6 @@ static inline size_t slab_size(const struct slab *slab)
> > return PAGE_SIZE << slab_order(slab);
> > }
> >
> > -#ifdef CONFIG_SLOB
> > -/*
> > - * Common fields provided in kmem_cache by all slab allocators
> > - * This struct is either used directly by the allocator (SLOB)
> > - * or the allocator must include definitions for all fields
> > - * provided in kmem_cache_common in their definition of kmem_cache.
> > - *
> > - * Once we can do anonymous structs (C11 standard) we could put a
> > - * anonymous struct definition in these allocators so that the
> > - * separate allocations in the kmem_cache structure of SLAB and
> > - * SLUB is no longer needed.
> > - */
> > -struct kmem_cache {
> > - unsigned int object_size;/* The original size of the object */
> > - unsigned int size; /* The aligned/padded/added on size */
> > - unsigned int align; /* Alignment as calculated */
> > - slab_flags_t flags; /* Active flags on the slab */
> > - const char *name; /* Slab name for sysfs */
> > - int refcount; /* Use counter */
> > - void (*ctor)(void *); /* Called on object slot creation */
> > - struct list_head list; /* List of all slab caches on the system */
> > -};
> > -
> > -#endif /* CONFIG_SLOB */
> > -
> > #ifdef CONFIG_SLAB
> > #include <linux/slab_def.h>
> > #endif
> > @@ -274,7 +237,6 @@ extern const struct kmalloc_info_struct {
> > unsigned int size;
> > } kmalloc_info[];
> >
> > -#ifndef CONFIG_SLOB
> > /* Kmalloc array related functions */
> > void setup_kmalloc_cache_index_table(void);
> > void create_kmalloc_caches(slab_flags_t);
> > @@ -286,7 +248,6 @@ void *__kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags,
> > int node, size_t orig_size,
> > unsigned long caller);
> > void __kmem_cache_free(struct kmem_cache *s, void *x, unsigned long caller);
> > -#endif
> >
> > gfp_t kmalloc_fix_flags(gfp_t flags);
> >
> > @@ -303,33 +264,16 @@ extern void create_boot_cache(struct kmem_cache *, const char *name,
> > int slab_unmergeable(struct kmem_cache *s);
> > struct kmem_cache *find_mergeable(unsigned size, unsigned align,
> > slab_flags_t flags, const char *name, void (*ctor)(void *));
> > -#ifndef CONFIG_SLOB
> > struct kmem_cache *
> > __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
> > slab_flags_t flags, void (*ctor)(void *));
> >
> > slab_flags_t kmem_cache_flags(unsigned int object_size,
> > slab_flags_t flags, const char *name);
> > -#else
> > -static inline struct kmem_cache *
> > -__kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
> > - slab_flags_t flags, void (*ctor)(void *))
> > -{ return NULL; }
> > -
> > -static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
> > - slab_flags_t flags, const char *name)
> > -{
> > - return flags;
> > -}
> > -#endif
> >
> > static inline bool is_kmalloc_cache(struct kmem_cache *s)
> > {
> > -#ifndef CONFIG_SLOB
> > return (s->flags & SLAB_KMALLOC);
> > -#else
> > - return false;
> > -#endif
> > }
> >
> > /* Legal flag mask for kmem_cache_create(), for various configurations */
> > @@ -634,7 +578,6 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
> > }
> > #endif /* CONFIG_MEMCG_KMEM */
> >
> > -#ifndef CONFIG_SLOB
> > static inline struct kmem_cache *virt_to_cache(const void *obj)
> > {
> > struct slab *slab;
> > @@ -684,8 +627,6 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
> >
> > void free_large_kmalloc(struct folio *folio, void *object);
> >
> > -#endif /* CONFIG_SLOB */
> > -
> > size_t __ksize(const void *objp);
> >
> > static inline size_t slab_ksize(const struct kmem_cache *s)
> > @@ -777,7 +718,6 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
> > memcg_slab_post_alloc_hook(s, objcg, flags, size, p);
> > }
> >
> > -#ifndef CONFIG_SLOB
> > /*
> > * The slab lists for all objects.
> > */
> > @@ -824,7 +764,6 @@ static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
> > for (__node = 0; __node < nr_node_ids; __node++) \
> > if ((__n = get_node(__s, __node)))
> >
> > -#endif
> >
> > #if defined(CONFIG_SLAB) || defined(CONFIG_SLUB_DEBUG)
> > void dump_unreclaimable_slab(void);
> > diff --git a/mm/slab_common.c b/mm/slab_common.c
> > index bf4e777cfe90..1522693295f5 100644
> > --- a/mm/slab_common.c
> > +++ b/mm/slab_common.c
> > @@ -625,7 +625,6 @@ void kmem_dump_obj(void *object)
> > EXPORT_SYMBOL_GPL(kmem_dump_obj);
> > #endif
> >
> > -#ifndef CONFIG_SLOB
> > /* Create a cache during boot when no slab services are available yet */
> > void __init create_boot_cache(struct kmem_cache *s, const char *name,
> > unsigned int size, slab_flags_t flags,
> > @@ -1079,7 +1078,6 @@ void *kmalloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
> > return ret;
> > }
> > EXPORT_SYMBOL(kmalloc_node_trace);
> > -#endif /* !CONFIG_SLOB */
> >
> > gfp_t kmalloc_fix_flags(gfp_t flags)
> > {
>
> Looks good to me,
>
> Reviewed-by: Hyeonggon Yoo <[email protected]>
>
> > --
> > 2.39.2
> >

Looks good to me too,

Acked-by: Lorenzo Stoakes <[email protected]>
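
With SLOB gone, every remaining allocator can resolve the owning cache from
the object itself, which is why the kfree_bulk() restriction quoted above
could simply be dropped: the kfree() family now also accepts objects that
came from kmem_cache_alloc(). A minimal sketch of the newly allowed pattern,
assuming an ordinary cache with no special flags (the function name, cache
name and object size here are made up for illustration):

	static int slob_removal_demo(void)
	{
		struct kmem_cache *c;
		void *objs[4];

		c = kmem_cache_create("example_cache", 64, 0, 0, NULL);
		if (!c)
			return -ENOMEM;

		/* Objects come from a kmem_cache, not from kmalloc()... */
		if (!kmem_cache_alloc_bulk(c, GFP_KERNEL, ARRAY_SIZE(objs), objs)) {
			kmem_cache_destroy(c);
			return -ENOMEM;
		}

		/* ...yet kfree_bulk() may now take them back. */
		kfree_bulk(ARRAY_SIZE(objs), objs);

		kmem_cache_destroy(c);
		return 0;
	}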

2023-03-14 22:20:18

by Lorenzo Stoakes

[permalink] [raw]
Subject: Re: [PATCH 6/7] mm/slob: remove slob.c

On Fri, Mar 10, 2023 at 11:32:08AM +0100, Vlastimil Babka wrote:
> Remove the SLOB implementation.
>
> RIP SLOB allocator (2006 - 2023)
>
> Signed-off-by: Vlastimil Babka <[email protected]>
> ---
> mm/slob.c | 757 ------------------------------------------------------
> 1 file changed, 757 deletions(-)
> delete mode 100644 mm/slob.c
>
> diff --git a/mm/slob.c b/mm/slob.c
> deleted file mode 100644
> index fe567fcfa3a3..000000000000
> --- a/mm/slob.c
> +++ /dev/null
> @@ -1,757 +0,0 @@
> -// SPDX-License-Identifier: GPL-2.0
> -/*
> - * SLOB Allocator: Simple List Of Blocks
> - *
> - * Matt Mackall <[email protected]> 12/30/03
> - *
> - * NUMA support by Paul Mundt, 2007.
> - *
> - * How SLOB works:
> - *
> - * The core of SLOB is a traditional K&R style heap allocator, with
> - * support for returning aligned objects. The granularity of this
> - * allocator is as little as 2 bytes, however typically most architectures
> - * will require 4 bytes on 32-bit and 8 bytes on 64-bit.
> - *
> - * The slob heap is a set of linked list of pages from alloc_pages(),
> - * and within each page, there is a singly-linked list of free blocks
> - * (slob_t). The heap is grown on demand. To reduce fragmentation,
> - * heap pages are segregated into three lists, with objects less than
> - * 256 bytes, objects less than 1024 bytes, and all other objects.
> - *
> - * Allocation from heap involves first searching for a page with
> - * sufficient free blocks (using a next-fit-like approach) followed by
> - * a first-fit scan of the page. Deallocation inserts objects back
> - * into the free list in address order, so this is effectively an
> - * address-ordered first fit.
> - *
> - * Above this is an implementation of kmalloc/kfree. Blocks returned
> - * from kmalloc are prepended with a 4-byte header with the kmalloc size.
> - * If kmalloc is asked for objects of PAGE_SIZE or larger, it calls
> - * alloc_pages() directly, allocating compound pages so the page order
> - * does not have to be separately tracked.
> - * These objects are detected in kfree() because folio_test_slab()
> - * is false for them.
> - *
> - * SLAB is emulated on top of SLOB by simply calling constructors and
> - * destructors for every SLAB allocation. Objects are returned with the
> - * 4-byte alignment unless the SLAB_HWCACHE_ALIGN flag is set, in which
> - * case the low-level allocator will fragment blocks to create the proper
> - * alignment. Again, objects of page-size or greater are allocated by
> - * calling alloc_pages(). As SLAB objects know their size, no separate
> - * size bookkeeping is necessary and there is essentially no allocation
> - * space overhead, and compound pages aren't needed for multi-page
> - * allocations.
> - *
> - * NUMA support in SLOB is fairly simplistic, pushing most of the real
> - * logic down to the page allocator, and simply doing the node accounting
> - * on the upper levels. In the event that a node id is explicitly
> - * provided, __alloc_pages_node() with the specified node id is used
> - * instead. The common case (or when the node id isn't explicitly provided)
> - * will default to the current node, as per numa_node_id().
> - *
> - * Node aware pages are still inserted in to the global freelist, and
> - * these are scanned for by matching against the node id encoded in the
> - * page flags. As a result, block allocations that can be satisfied from
> - * the freelist will only be done so on pages residing on the same node,
> - * in order to prevent random node placement.
> - */
> -
> -#include <linux/kernel.h>
> -#include <linux/slab.h>
> -
> -#include <linux/mm.h>
> -#include <linux/swap.h> /* struct reclaim_state */
> -#include <linux/cache.h>
> -#include <linux/init.h>
> -#include <linux/export.h>
> -#include <linux/rcupdate.h>
> -#include <linux/list.h>
> -#include <linux/kmemleak.h>
> -
> -#include <trace/events/kmem.h>
> -
> -#include <linux/atomic.h>
> -
> -#include "slab.h"
> -/*
> - * slob_block has a field 'units', which indicates size of block if +ve,
> - * or offset of next block if -ve (in SLOB_UNITs).
> - *
> - * Free blocks of size 1 unit simply contain the offset of the next block.
> - * Those with larger size contain their size in the first SLOB_UNIT of
> - * memory, and the offset of the next free block in the second SLOB_UNIT.
> - */
> -#if PAGE_SIZE <= (32767 * 2)
> -typedef s16 slobidx_t;
> -#else
> -typedef s32 slobidx_t;
> -#endif
> -
> -struct slob_block {
> - slobidx_t units;
> -};
> -typedef struct slob_block slob_t;
> -
> -/*
> - * All partially free slob pages go on these lists.
> - */
> -#define SLOB_BREAK1 256
> -#define SLOB_BREAK2 1024
> -static LIST_HEAD(free_slob_small);
> -static LIST_HEAD(free_slob_medium);
> -static LIST_HEAD(free_slob_large);
> -
> -/*
> - * slob_page_free: true for pages on free_slob_pages list.
> - */
> -static inline int slob_page_free(struct slab *slab)
> -{
> - return PageSlobFree(slab_page(slab));
> -}
> -
> -static void set_slob_page_free(struct slab *slab, struct list_head *list)
> -{
> - list_add(&slab->slab_list, list);
> - __SetPageSlobFree(slab_page(slab));
> -}
> -
> -static inline void clear_slob_page_free(struct slab *slab)
> -{
> - list_del(&slab->slab_list);
> - __ClearPageSlobFree(slab_page(slab));
> -}
> -
> -#define SLOB_UNIT sizeof(slob_t)
> -#define SLOB_UNITS(size) DIV_ROUND_UP(size, SLOB_UNIT)
> -
> -/*
> - * struct slob_rcu is inserted at the tail of allocated slob blocks, which
> - * were created with a SLAB_TYPESAFE_BY_RCU slab. slob_rcu is used to free
> - * the block using call_rcu.
> - */
> -struct slob_rcu {
> - struct rcu_head head;
> - int size;
> -};
> -
> -/*
> - * slob_lock protects all slob allocator structures.
> - */
> -static DEFINE_SPINLOCK(slob_lock);
> -
> -/*
> - * Encode the given size and next info into a free slob block s.
> - */
> -static void set_slob(slob_t *s, slobidx_t size, slob_t *next)
> -{
> - slob_t *base = (slob_t *)((unsigned long)s & PAGE_MASK);
> - slobidx_t offset = next - base;
> -
> - if (size > 1) {
> - s[0].units = size;
> - s[1].units = offset;
> - } else
> - s[0].units = -offset;
> -}
> -
> -/*
> - * Return the size of a slob block.
> - */
> -static slobidx_t slob_units(slob_t *s)
> -{
> - if (s->units > 0)
> - return s->units;
> - return 1;
> -}
> -
> -/*
> - * Return the next free slob block pointer after this one.
> - */
> -static slob_t *slob_next(slob_t *s)
> -{
> - slob_t *base = (slob_t *)((unsigned long)s & PAGE_MASK);
> - slobidx_t next;
> -
> - if (s[0].units < 0)
> - next = -s[0].units;
> - else
> - next = s[1].units;
> - return base+next;
> -}
> -
> -/*
> - * Returns true if s is the last free block in its page.
> - */
> -static int slob_last(slob_t *s)
> -{
> - return !((unsigned long)slob_next(s) & ~PAGE_MASK);
> -}
> -
> -static void *slob_new_pages(gfp_t gfp, int order, int node)
> -{
> - struct page *page;
> -
> -#ifdef CONFIG_NUMA
> - if (node != NUMA_NO_NODE)
> - page = __alloc_pages_node(node, gfp, order);
> - else
> -#endif
> - page = alloc_pages(gfp, order);
> -
> - if (!page)
> - return NULL;
> -
> - mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B,
> - PAGE_SIZE << order);
> - return page_address(page);
> -}
> -
> -static void slob_free_pages(void *b, int order)
> -{
> - struct page *sp = virt_to_page(b);
> -
> - if (current->reclaim_state)
> - current->reclaim_state->reclaimed_slab += 1 << order;
> -
> - mod_node_page_state(page_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
> - -(PAGE_SIZE << order));
> - __free_pages(sp, order);
> -}
> -
> -/*
> - * slob_page_alloc() - Allocate a slob block within a given slob_page sp.
> - * @sp: Page to look in.
> - * @size: Size of the allocation.
> - * @align: Allocation alignment.
> - * @align_offset: Offset in the allocated block that will be aligned.
> - * @page_removed_from_list: Return parameter.
> - *
> - * Tries to find a chunk of memory at least @size bytes big within @page.
> - *
> - * Return: Pointer to memory if allocated, %NULL otherwise. If the
> - * allocation fills up @page then the page is removed from the
> - * freelist, in this case @page_removed_from_list will be set to
> - * true (set to false otherwise).
> - */
> -static void *slob_page_alloc(struct slab *sp, size_t size, int align,
> - int align_offset, bool *page_removed_from_list)
> -{
> - slob_t *prev, *cur, *aligned = NULL;
> - int delta = 0, units = SLOB_UNITS(size);
> -
> - *page_removed_from_list = false;
> - for (prev = NULL, cur = sp->freelist; ; prev = cur, cur = slob_next(cur)) {
> - slobidx_t avail = slob_units(cur);
> -
> - /*
> - * 'aligned' will hold the address of the slob block so that the
> - * address 'aligned'+'align_offset' is aligned according to the
> - * 'align' parameter. This is for kmalloc() which prepends the
> - * allocated block with its size, so that the block itself is
> - * aligned when needed.
> - */
> - if (align) {
> - aligned = (slob_t *)
> - (ALIGN((unsigned long)cur + align_offset, align)
> - - align_offset);
> - delta = aligned - cur;
> - }
> - if (avail >= units + delta) { /* room enough? */
> - slob_t *next;
> -
> - if (delta) { /* need to fragment head to align? */
> - next = slob_next(cur);
> - set_slob(aligned, avail - delta, next);
> - set_slob(cur, delta, aligned);
> - prev = cur;
> - cur = aligned;
> - avail = slob_units(cur);
> - }
> -
> - next = slob_next(cur);
> - if (avail == units) { /* exact fit? unlink. */
> - if (prev)
> - set_slob(prev, slob_units(prev), next);
> - else
> - sp->freelist = next;
> - } else { /* fragment */
> - if (prev)
> - set_slob(prev, slob_units(prev), cur + units);
> - else
> - sp->freelist = cur + units;
> - set_slob(cur + units, avail - units, next);
> - }
> -
> - sp->units -= units;
> - if (!sp->units) {
> - clear_slob_page_free(sp);
> - *page_removed_from_list = true;
> - }
> - return cur;
> - }
> - if (slob_last(cur))
> - return NULL;
> - }
> -}
> -
> -/*
> - * slob_alloc: entry point into the slob allocator.
> - */
> -static void *slob_alloc(size_t size, gfp_t gfp, int align, int node,
> - int align_offset)
> -{
> - struct folio *folio;
> - struct slab *sp;
> - struct list_head *slob_list;
> - slob_t *b = NULL;
> - unsigned long flags;
> - bool _unused;
> -
> - if (size < SLOB_BREAK1)
> - slob_list = &free_slob_small;
> - else if (size < SLOB_BREAK2)
> - slob_list = &free_slob_medium;
> - else
> - slob_list = &free_slob_large;
> -
> - spin_lock_irqsave(&slob_lock, flags);
> - /* Iterate through each partially free page, try to find room */
> - list_for_each_entry(sp, slob_list, slab_list) {
> - bool page_removed_from_list = false;
> -#ifdef CONFIG_NUMA
> - /*
> - * If there's a node specification, search for a partial
> - * page with a matching node id in the freelist.
> - */
> - if (node != NUMA_NO_NODE && slab_nid(sp) != node)
> - continue;
> -#endif
> - /* Enough room on this page? */
> - if (sp->units < SLOB_UNITS(size))
> - continue;
> -
> - b = slob_page_alloc(sp, size, align, align_offset, &page_removed_from_list);
> - if (!b)
> - continue;
> -
> - /*
> - * If slob_page_alloc() removed sp from the list then we
> - * cannot call list functions on sp. If so allocation
> - * did not fragment the page anyway so optimisation is
> - * unnecessary.
> - */
> - if (!page_removed_from_list) {
> - /*
> - * Improve fragment distribution and reduce our average
> - * search time by starting our next search here. (see
> - * Knuth vol 1, sec 2.5, pg 449)
> - */
> - if (!list_is_first(&sp->slab_list, slob_list))
> - list_rotate_to_front(&sp->slab_list, slob_list);
> - }
> - break;
> - }
> - spin_unlock_irqrestore(&slob_lock, flags);
> -
> - /* Not enough space: must allocate a new page */
> - if (!b) {
> - b = slob_new_pages(gfp & ~__GFP_ZERO, 0, node);
> - if (!b)
> - return NULL;
> - folio = virt_to_folio(b);
> - __folio_set_slab(folio);
> - sp = folio_slab(folio);
> -
> - spin_lock_irqsave(&slob_lock, flags);
> - sp->units = SLOB_UNITS(PAGE_SIZE);
> - sp->freelist = b;
> - INIT_LIST_HEAD(&sp->slab_list);
> - set_slob(b, SLOB_UNITS(PAGE_SIZE), b + SLOB_UNITS(PAGE_SIZE));
> - set_slob_page_free(sp, slob_list);
> - b = slob_page_alloc(sp, size, align, align_offset, &_unused);
> - BUG_ON(!b);
> - spin_unlock_irqrestore(&slob_lock, flags);
> - }
> - if (unlikely(gfp & __GFP_ZERO))
> - memset(b, 0, size);
> - return b;
> -}
> -
> -/*
> - * slob_free: entry point into the slob allocator.
> - */
> -static void slob_free(void *block, int size)
> -{
> - struct slab *sp;
> - slob_t *prev, *next, *b = (slob_t *)block;
> - slobidx_t units;
> - unsigned long flags;
> - struct list_head *slob_list;
> -
> - if (unlikely(ZERO_OR_NULL_PTR(block)))
> - return;
> - BUG_ON(!size);
> -
> - sp = virt_to_slab(block);
> - units = SLOB_UNITS(size);
> -
> - spin_lock_irqsave(&slob_lock, flags);
> -
> - if (sp->units + units == SLOB_UNITS(PAGE_SIZE)) {
> - /* Go directly to page allocator. Do not pass slob allocator */
> - if (slob_page_free(sp))
> - clear_slob_page_free(sp);
> - spin_unlock_irqrestore(&slob_lock, flags);
> - __folio_clear_slab(slab_folio(sp));
> - slob_free_pages(b, 0);
> - return;
> - }
> -
> - if (!slob_page_free(sp)) {
> - /* This slob page is about to become partially free. Easy! */
> - sp->units = units;
> - sp->freelist = b;
> - set_slob(b, units,
> - (void *)((unsigned long)(b +
> - SLOB_UNITS(PAGE_SIZE)) & PAGE_MASK));
> - if (size < SLOB_BREAK1)
> - slob_list = &free_slob_small;
> - else if (size < SLOB_BREAK2)
> - slob_list = &free_slob_medium;
> - else
> - slob_list = &free_slob_large;
> - set_slob_page_free(sp, slob_list);
> - goto out;
> - }
> -
> - /*
> - * Otherwise the page is already partially free, so find reinsertion
> - * point.
> - */
> - sp->units += units;
> -
> - if (b < (slob_t *)sp->freelist) {
> - if (b + units == sp->freelist) {
> - units += slob_units(sp->freelist);
> - sp->freelist = slob_next(sp->freelist);
> - }
> - set_slob(b, units, sp->freelist);
> - sp->freelist = b;
> - } else {
> - prev = sp->freelist;
> - next = slob_next(prev);
> - while (b > next) {
> - prev = next;
> - next = slob_next(prev);
> - }
> -
> - if (!slob_last(prev) && b + units == next) {
> - units += slob_units(next);
> - set_slob(b, units, slob_next(next));
> - } else
> - set_slob(b, units, next);
> -
> - if (prev + slob_units(prev) == b) {
> - units = slob_units(b) + slob_units(prev);
> - set_slob(prev, units, slob_next(b));
> - } else
> - set_slob(prev, slob_units(prev), b);
> - }
> -out:
> - spin_unlock_irqrestore(&slob_lock, flags);
> -}
> -
> -#ifdef CONFIG_PRINTK
> -void __kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
> -{
> - kpp->kp_ptr = object;
> - kpp->kp_slab = slab;
> -}
> -#endif
> -
> -/*
> - * End of slob allocator proper. Begin kmem_cache_alloc and kmalloc frontend.
> - */
> -
> -static __always_inline void *
> -__do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
> -{
> - unsigned int *m;
> - unsigned int minalign;
> - void *ret;
> -
> - minalign = max_t(unsigned int, ARCH_KMALLOC_MINALIGN,
> - arch_slab_minalign());
> - gfp &= gfp_allowed_mask;
> -
> - might_alloc(gfp);
> -
> - if (size < PAGE_SIZE - minalign) {
> - int align = minalign;
> -
> - /*
> - * For power of two sizes, guarantee natural alignment for
> - * kmalloc()'d objects.
> - */
> - if (is_power_of_2(size))
> - align = max_t(unsigned int, minalign, size);
> -
> - if (!size)
> - return ZERO_SIZE_PTR;
> -
> - m = slob_alloc(size + minalign, gfp, align, node, minalign);
> -
> - if (!m)
> - return NULL;
> - *m = size;
> - ret = (void *)m + minalign;
> -
> - trace_kmalloc(caller, ret, size, size + minalign, gfp, node);
> - } else {
> - unsigned int order = get_order(size);
> -
> - if (likely(order))
> - gfp |= __GFP_COMP;
> - ret = slob_new_pages(gfp, order, node);
> -
> - trace_kmalloc(caller, ret, size, PAGE_SIZE << order, gfp, node);
> - }
> -
> - kmemleak_alloc(ret, size, 1, gfp);
> - return ret;
> -}
> -
> -void *__kmalloc(size_t size, gfp_t gfp)
> -{
> - return __do_kmalloc_node(size, gfp, NUMA_NO_NODE, _RET_IP_);
> -}
> -EXPORT_SYMBOL(__kmalloc);
> -
> -void *__kmalloc_node_track_caller(size_t size, gfp_t gfp,
> - int node, unsigned long caller)
> -{
> - return __do_kmalloc_node(size, gfp, node, caller);
> -}
> -EXPORT_SYMBOL(__kmalloc_node_track_caller);
> -
> -void kfree(const void *block)
> -{
> - struct folio *sp;
> -
> - trace_kfree(_RET_IP_, block);
> -
> - if (unlikely(ZERO_OR_NULL_PTR(block)))
> - return;
> - kmemleak_free(block);
> -
> - sp = virt_to_folio(block);
> - if (folio_test_slab(sp)) {
> - unsigned int align = max_t(unsigned int,
> - ARCH_KMALLOC_MINALIGN,
> - arch_slab_minalign());
> - unsigned int *m = (unsigned int *)(block - align);
> -
> - slob_free(m, *m + align);
> - } else {
> - unsigned int order = folio_order(sp);
> -
> - mod_node_page_state(folio_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
> - -(PAGE_SIZE << order));
> - __free_pages(folio_page(sp, 0), order);
> -
> - }
> -}
> -EXPORT_SYMBOL(kfree);
> -
> -size_t kmalloc_size_roundup(size_t size)
> -{
> - /* Short-circuit the 0 size case. */
> - if (unlikely(size == 0))
> - return 0;
> - /* Short-circuit saturated "too-large" case. */
> - if (unlikely(size == SIZE_MAX))
> - return SIZE_MAX;
> -
> - return ALIGN(size, ARCH_KMALLOC_MINALIGN);
> -}
> -
> -EXPORT_SYMBOL(kmalloc_size_roundup);
> -
> -/* can't use ksize for kmem_cache_alloc memory, only kmalloc */
> -size_t __ksize(const void *block)
> -{
> - struct folio *folio;
> - unsigned int align;
> - unsigned int *m;
> -
> - BUG_ON(!block);
> - if (unlikely(block == ZERO_SIZE_PTR))
> - return 0;
> -
> - folio = virt_to_folio(block);
> - if (unlikely(!folio_test_slab(folio)))
> - return folio_size(folio);
> -
> - align = max_t(unsigned int, ARCH_KMALLOC_MINALIGN,
> - arch_slab_minalign());
> - m = (unsigned int *)(block - align);
> - return SLOB_UNITS(*m) * SLOB_UNIT;
> -}
> -
> -int __kmem_cache_create(struct kmem_cache *c, slab_flags_t flags)
> -{
> - if (flags & SLAB_TYPESAFE_BY_RCU) {
> - /* leave room for rcu footer at the end of object */
> - c->size += sizeof(struct slob_rcu);
> - }
> -
> - /* Actual size allocated */
> - c->size = SLOB_UNITS(c->size) * SLOB_UNIT;
> - c->flags = flags;
> - return 0;
> -}
> -
> -static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
> -{
> - void *b;
> -
> - flags &= gfp_allowed_mask;
> -
> - might_alloc(flags);
> -
> - if (c->size < PAGE_SIZE) {
> - b = slob_alloc(c->size, flags, c->align, node, 0);
> - trace_kmem_cache_alloc(_RET_IP_, b, c, flags, node);
> - } else {
> - b = slob_new_pages(flags, get_order(c->size), node);
> - trace_kmem_cache_alloc(_RET_IP_, b, c, flags, node);
> - }
> -
> - if (b && c->ctor) {
> - WARN_ON_ONCE(flags & __GFP_ZERO);
> - c->ctor(b);
> - }
> -
> - kmemleak_alloc_recursive(b, c->size, 1, c->flags, flags);
> - return b;
> -}
> -
> -void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
> -{
> - return slob_alloc_node(cachep, flags, NUMA_NO_NODE);
> -}
> -EXPORT_SYMBOL(kmem_cache_alloc);
> -
> -
> -void *kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags)
> -{
> - return slob_alloc_node(cachep, flags, NUMA_NO_NODE);
> -}
> -EXPORT_SYMBOL(kmem_cache_alloc_lru);
> -
> -void *__kmalloc_node(size_t size, gfp_t gfp, int node)
> -{
> - return __do_kmalloc_node(size, gfp, node, _RET_IP_);
> -}
> -EXPORT_SYMBOL(__kmalloc_node);
> -
> -void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t gfp, int node)
> -{
> - return slob_alloc_node(cachep, gfp, node);
> -}
> -EXPORT_SYMBOL(kmem_cache_alloc_node);
> -
> -static void __kmem_cache_free(void *b, int size)
> -{
> - if (size < PAGE_SIZE)
> - slob_free(b, size);
> - else
> - slob_free_pages(b, get_order(size));
> -}
> -
> -static void kmem_rcu_free(struct rcu_head *head)
> -{
> - struct slob_rcu *slob_rcu = (struct slob_rcu *)head;
> - void *b = (void *)slob_rcu - (slob_rcu->size - sizeof(struct slob_rcu));
> -
> - __kmem_cache_free(b, slob_rcu->size);
> -}
> -
> -void kmem_cache_free(struct kmem_cache *c, void *b)
> -{
> - kmemleak_free_recursive(b, c->flags);
> - trace_kmem_cache_free(_RET_IP_, b, c);
> - if (unlikely(c->flags & SLAB_TYPESAFE_BY_RCU)) {
> - struct slob_rcu *slob_rcu;
> - slob_rcu = b + (c->size - sizeof(struct slob_rcu));
> - slob_rcu->size = c->size;
> - call_rcu(&slob_rcu->head, kmem_rcu_free);
> - } else {
> - __kmem_cache_free(b, c->size);
> - }
> -}
> -EXPORT_SYMBOL(kmem_cache_free);
> -
> -void kmem_cache_free_bulk(struct kmem_cache *s, size_t nr, void **p)
> -{
> - size_t i;
> -
> - for (i = 0; i < nr; i++) {
> - if (s)
> - kmem_cache_free(s, p[i]);
> - else
> - kfree(p[i]);
> - }
> -}
> -EXPORT_SYMBOL(kmem_cache_free_bulk);
> -
> -int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t nr,
> - void **p)
> -{
> - size_t i;
> -
> - for (i = 0; i < nr; i++) {
> - void *x = p[i] = kmem_cache_alloc(s, flags);
> -
> - if (!x) {
> - kmem_cache_free_bulk(s, i, p);
> - return 0;
> - }
> - }
> - return i;
> -}
> -EXPORT_SYMBOL(kmem_cache_alloc_bulk);
> -
> -int __kmem_cache_shutdown(struct kmem_cache *c)
> -{
> - /* No way to check for remaining objects */
> - return 0;
> -}
> -
> -void __kmem_cache_release(struct kmem_cache *c)
> -{
> -}
> -
> -int __kmem_cache_shrink(struct kmem_cache *d)
> -{
> - return 0;
> -}
> -
> -static struct kmem_cache kmem_cache_boot = {
> - .name = "kmem_cache",
> - .size = sizeof(struct kmem_cache),
> - .flags = SLAB_PANIC,
> - .align = ARCH_KMALLOC_MINALIGN,
> -};
> -
> -void __init kmem_cache_init(void)
> -{
> - kmem_cache = &kmem_cache_boot;
> - slab_state = UP;
> -}
> -
> -void __init kmem_cache_init_late(void)
> -{
> - slab_state = FULL;
> -}
> --
> 2.39.2
>

A momentous occasion, congratulations and fantastic work!

Very much,

Acked-by: Lorenzo Stoakes <[email protected]>

2023-03-15 02:54:57

by Roman Gushchin

[permalink] [raw]
Subject: Re: [PATCH 6/7] mm/slob: remove slob.c

On Fri, Mar 10, 2023 at 11:32:08AM +0100, Vlastimil Babka wrote:
> Remove the SLOB implementation.
>
> RIP SLOB allocator (2006 - 2023)
>
> Signed-off-by: Vlastimil Babka <[email protected]>

Acked-by: Roman Gushchin <[email protected]>

Thanks!

2023-03-15 11:06:04

by Vlastimil Babka

[permalink] [raw]
Subject: Re: [PATCH 4/7] mm, pagemap: remove SLOB and SLQB from comments and documentation

On 3/14/23 09:19, Hyeonggon Yoo wrote:
> On Fri, Mar 10, 2023 at 11:32:06AM +0100, Vlastimil Babka wrote:
>> SLOB has been removed and SLQB never merged, so remove their mentions
>> from comments and documentation of pagemap.
>>
>> Signed-off-by: Vlastimil Babka <[email protected]>
>> ---
>> Documentation/admin-guide/mm/pagemap.rst | 6 +++---
>> fs/proc/page.c | 5 ++---
>> 2 files changed, 5 insertions(+), 6 deletions(-)
>>
>> diff --git a/Documentation/admin-guide/mm/pagemap.rst b/Documentation/admin-guide/mm/pagemap.rst
>> index b5f970dc91e7..bb4aa897a773 100644
>> --- a/Documentation/admin-guide/mm/pagemap.rst
>> +++ b/Documentation/admin-guide/mm/pagemap.rst
>> @@ -91,9 +91,9 @@ Short descriptions to the page flags
>> The page is being locked for exclusive access, e.g. by undergoing read/write
>> IO.
>> 7 - SLAB
>> - The page is managed by the SLAB/SLOB/SLUB/SLQB kernel memory allocator.
>> - When compound page is used, SLUB/SLQB will only set this flag on the head
>> - page; SLOB will not flag it at all.
>> + The page is managed by the SLAB/SLUB kernel memory allocator.
>> + When compound page is used, either will only set this flag on the head
>> + page..
>> 10 - BUDDY
>> A free memory block managed by the buddy system allocator.
>> The buddy system organizes free memory in blocks of various orders.
>> diff --git a/fs/proc/page.c b/fs/proc/page.c
>> index 6249c347809a..1356aeffd8dc 100644
>> --- a/fs/proc/page.c
>> +++ b/fs/proc/page.c
>> @@ -125,7 +125,7 @@ u64 stable_page_flags(struct page *page)
>> /*
>> * pseudo flags for the well known (anonymous) memory mapped pages
>> *
>> - * Note that page->_mapcount is overloaded in SLOB/SLUB/SLQB, so the
>> + * Note that page->_mapcount is overloaded in SLAB/SLUB, so the
>
> SLUB does not overload _mapcount.

True, I overlooked that, thanks.

>> * simple test in page_mapped() is not enough.
>> */
>> if (!PageSlab(page) && page_mapped(page))
>> @@ -166,8 +166,7 @@ u64 stable_page_flags(struct page *page)
>>
>> /*
>> * Caveats on high order pages: page->_refcount will only be set
>> - * -1 on the head page; SLUB/SLQB do the same for PG_slab;
>> - * SLOB won't set PG_slab at all on compound pages.
>> + * -1 on the head page; SLAB/SLUB do the same for PG_slab;
>
> I think this comment could be just saying that PG_buddy is only set on
> head page, not saying
>
> _refcount is set to -1 on head page (is it even correct?)

It's not, that scheme is outdated. So I'll have it mention PG_buddy as you
suggest, but PG_slab also needs special care as it's not set on tail pages.
But I noticed the compound_head() is unnecessary, as that's already covered
by PageSlab(), which is defined with the PF_NO_TAIL policy. So the sum of
modifications to this patch is:

diff --git a/Documentation/admin-guide/mm/pagemap.rst b/Documentation/admin-guide/mm/pagemap.rst
index bb4aa897a773..c8f380271cad 100644
--- a/Documentation/admin-guide/mm/pagemap.rst
+++ b/Documentation/admin-guide/mm/pagemap.rst
@@ -93,7 +93,7 @@ Short descriptions to the page flags
7 - SLAB
The page is managed by the SLAB/SLUB kernel memory allocator.
When compound page is used, either will only set this flag on the head
- page..
+ page.
10 - BUDDY
A free memory block managed by the buddy system allocator.
The buddy system organizes free memory in blocks of various orders.
diff --git a/fs/proc/page.c b/fs/proc/page.c
index 1356aeffd8dc..195b077c0fac 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -125,7 +125,7 @@ u64 stable_page_flags(struct page *page)
/*
* pseudo flags for the well known (anonymous) memory mapped pages
*
- * Note that page->_mapcount is overloaded in SLAB/SLUB, so the
+ * Note that page->_mapcount is overloaded in SLAB, so the
* simple test in page_mapped() is not enough.
*/
if (!PageSlab(page) && page_mapped(page))
@@ -165,8 +165,8 @@ u64 stable_page_flags(struct page *page)


/*
- * Caveats on high order pages: page->_refcount will only be set
- * -1 on the head page; SLAB/SLUB do the same for PG_slab;
+ * Caveats on high order pages: PG_buddy and PG_slab will only be set
+ * on the head page.
*/
if (PageBuddy(page))
u |= 1 << KPF_BUDDY;
@@ -184,7 +184,7 @@ u64 stable_page_flags(struct page *page)
u |= kpf_copy_bit(k, KPF_LOCKED, PG_locked);

u |= kpf_copy_bit(k, KPF_SLAB, PG_slab);
- if (PageTail(page) && PageSlab(compound_head(page)))
+ if (PageTail(page) && PageSlab(page))
u |= 1 << KPF_SLAB;

u |= kpf_copy_bit(k, KPF_ERROR, PG_error);
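
For anyone wondering why dropping the explicit compound_head() is safe, here
is a rough sketch of what the PF_NO_TAIL policy behind PageSlab() boils down
to. This is not the verbatim page-flags.h macro expansion, just an
illustration:

/*
 * Simplified model of PAGEFLAG(Slab, slab, PF_NO_TAIL): the PF_NO_TAIL
 * policy makes the flag test operate on the head page, so PageSlab() on a
 * tail page already checks PG_slab of the compound head.
 */
static __always_inline int PageSlab_sketch(struct page *page)
{
        page = compound_head(page);     /* PF_NO_TAIL redirects to the head */
        return test_bit(PG_slab, &page->flags);
}

So the PageTail(page) && PageSlab(page) test above does the right thing
without an extra compound_head().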




2023-03-15 13:38:58

by Vlastimil Babka

[permalink] [raw]
Subject: Re: [PATCH 7/7] mm/slab: document kfree() as allowed for kmem_cache_alloc() objects

On 3/12/23 10:59, Mike Rapoport wrote:
> On Fri, Mar 10, 2023 at 11:32:09AM +0100, Vlastimil Babka wrote:
>> This will make it easier to free objects in situations when they can
>> come from either kmalloc() or kmem_cache_alloc(), and also allow
>> kfree_rcu() for freeing objects from kmem_cache_alloc().
>>
>> For the SLAB and SLUB allocators this was always possible so with SLOB
>> gone, we can document it as supported.
>>
>> Signed-off-by: Vlastimil Babka <[email protected]>
>> Cc: Mike Rapoport <[email protected]>
>> Cc: Jonathan Corbet <[email protected]>
>> Cc: "Paul E. McKenney" <[email protected]>
>> Cc: Frederic Weisbecker <[email protected]>
>> Cc: Neeraj Upadhyay <[email protected]>
>> Cc: Josh Triplett <[email protected]>
>> Cc: Steven Rostedt <[email protected]>
>> Cc: Mathieu Desnoyers <[email protected]>
>> Cc: Lai Jiangshan <[email protected]>
>> Cc: Joel Fernandes <[email protected]>
>> ---
>> Documentation/core-api/memory-allocation.rst | 15 +++++++++++----
>> include/linux/rcupdate.h | 6 ++++--
>> mm/slab_common.c | 5 +----
>> 3 files changed, 16 insertions(+), 10 deletions(-)
>>
>> diff --git a/Documentation/core-api/memory-allocation.rst b/Documentation/core-api/memory-allocation.rst
>> index 5954ddf6ee13..f9e8d352ed67 100644
>> --- a/Documentation/core-api/memory-allocation.rst
>> +++ b/Documentation/core-api/memory-allocation.rst
>> @@ -170,7 +170,14 @@ should be used if a part of the cache might be copied to the userspace.
>> After the cache is created kmem_cache_alloc() and its convenience
>> wrappers can allocate memory from that cache.
>>
>> -When the allocated memory is no longer needed it must be freed. You can
>> -use kvfree() for the memory allocated with `kmalloc`, `vmalloc` and
>> -`kvmalloc`. The slab caches should be freed with kmem_cache_free(). And
>> -don't forget to destroy the cache with kmem_cache_destroy().
>> +When the allocated memory is no longer needed it must be freed. Objects
>
> I'd add a line break before Objects ^
>
>> +allocated by `kmalloc` can be freed by `kfree` or `kvfree`.
>> +Objects allocated by `kmem_cache_alloc` can be freed with `kmem_cache_free`
>> +or also by `kfree` or `kvfree`, which can be more convenient as it does
>
> Maybe replace 'or also by' with a coma:
>
> Objects allocated by `kmem_cache_alloc` can be freed with `kmem_cache_free`,
> `kfree` or `kvfree`, which can be more convenient as it does

But then I need to clarify what the "which" applies to?

>
>> +not require the kmem_cache pointed.
>
> ^ pointer.
>
>> +The rules for _bulk and _rcu flavors of freeing functions are analogical.
>
> Maybe
>
> The same rules apply to _bulk and _rcu flavors of freeing functions.

So like this incremental diff?

diff --git a/Documentation/core-api/memory-allocation.rst b/Documentation/core-api/memory-allocation.rst
index f9e8d352ed67..1c58d883b273 100644
--- a/Documentation/core-api/memory-allocation.rst
+++ b/Documentation/core-api/memory-allocation.rst
@@ -170,12 +170,14 @@ should be used if a part of the cache might be copied to the userspace.
After the cache is created kmem_cache_alloc() and its convenience
wrappers can allocate memory from that cache.

-When the allocated memory is no longer needed it must be freed. Objects
-allocated by `kmalloc` can be freed by `kfree` or `kvfree`.
-Objects allocated by `kmem_cache_alloc` can be freed with `kmem_cache_free`
-or also by `kfree` or `kvfree`, which can be more convenient as it does
-not require the kmem_cache pointed.
-The rules for _bulk and _rcu flavors of freeing functions are analogical.
+When the allocated memory is no longer needed it must be freed.
+
+Objects allocated by `kmalloc` can be freed by `kfree` or `kvfree`. Objects
+allocated by `kmem_cache_alloc` can be freed with `kmem_cache_free`, `kfree`
+or `kvfree`, where the latter two might be more convenient thanks to not
+needing the kmem_cache pointer.
+
+The same rules apply to _bulk and _rcu flavors of freeing functions.

Memory allocated by `vmalloc` can be freed with `vfree` or `kvfree`.
Memory allocated by `kvmalloc` can be freed with `kvfree`.
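
To make the documented rule concrete, here is a minimal hypothetical sketch
(struct foo, foo_cache and foo_demo() are made up for illustration and are
not part of the patch):

#include <linux/slab.h>
#include <linux/rcupdate.h>

/* Hypothetical object type with an rcu_head so kfree_rcu() can be shown. */
struct foo {
        int id;
        struct rcu_head rcu;
};

static struct kmem_cache *foo_cache;

static int foo_demo(void)
{
        struct foo *f;

        foo_cache = kmem_cache_create("foo", sizeof(struct foo), 0, 0, NULL);
        if (!foo_cache)
                return -ENOMEM;

        f = kmem_cache_alloc(foo_cache, GFP_KERNEL);
        if (f)
                kfree(f);       /* now documented: no kmem_cache pointer needed */

        f = kmem_cache_alloc(foo_cache, GFP_KERNEL);
        if (f)
                kfree_rcu(f, rcu);      /* the _rcu flavor follows the same rule */

        rcu_barrier();  /* let the deferred free finish before destroying the cache */
        kmem_cache_destroy(foo_cache);
        return 0;
}

With SLOB this kfree() of a kmem_cache_alloc() object was not guaranteed to
work, which is why the documentation could only spell it out once SLOB is
gone.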


2023-03-15 13:40:35

by Vlastimil Babka

[permalink] [raw]
Subject: Re: [PATCH 0/7] remove SLOB and allow kfree() with kmem_cache_alloc()

On 3/14/23 23:10, Lorenzo Stoakes wrote:
> On Mon, Mar 13, 2023 at 05:36:44PM +0100, Vlastimil Babka wrote:
>> > git grep -in slob still gives a couple of matches. I've dropped the
>> > irrelevant ones and it left me with these:
>
> I see an #ifdef in security/tomoyo/common.h which I guess is not really
> relevant? And certainly not harmful in practice. Thought it might be nice to
> eliminate the last reference to CONFIG_SLOB in the kernel :)

Yeah, as I wrote in the cover letter, the tomoyo change is already going
through the tomoyo tree. And based on Jakub's feedback the skbuff change will
also be posted separately.

2023-03-15 13:53:27

by Vlastimil Babka

[permalink] [raw]
Subject: Re: [PATCH 0/7] remove SLOB and allow kfree() with kmem_cache_alloc()


On 3/13/23 17:31, Steven Rostedt wrote:
> Just remove that comment. And you could even add:
>
> Suggested-by: Steven Rostedt (Google) <[email protected]>
> Fixes: e4c2ce82ca27 ("ring_buffer: allocate buffer page pointer")

Thanks for the analysis. Want to take the following patch to your tree or
should I make it part of the series?

----8<----
From 297a8c8fda98dc5499cfe0eac6ffabfb19d1b70f Mon Sep 17 00:00:00 2001
From: Vlastimil Babka <[email protected]>
Date: Wed, 15 Mar 2023 14:45:15 +0100
Subject: [PATCH] ring-buffer: remove obsolete comment for free_buffer_page()

The comment refers to mm/slob.c which is being removed. It comes from
commit ed56829cb319 ("ring_buffer: reset buffer page when freeing") and
according to Steven the borrowed code was a page mapcount and mapping
reset, which was later removed by commit e4c2ce82ca27 ("ring_buffer:
allocate buffer page pointer"). Thus the comment is not accurate anyway,
so remove it.

Reported-by: Mike Rapoport <[email protected]>
Suggested-by: Steven Rostedt (Google) <[email protected]>
Fixes: e4c2ce82ca27 ("ring_buffer: allocate buffer page pointer")
Signed-off-by: Vlastimil Babka <[email protected]>
---
kernel/trace/ring_buffer.c | 4 ----
1 file changed, 4 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index af50d931b020..c6f47b6cfd5f 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -354,10 +354,6 @@ static void rb_init_page(struct buffer_data_page *bpage)
local_set(&bpage->commit, 0);
}

-/*
- * Also stolen from mm/slob.c. Thanks to Mathieu Desnoyers for pointing
- * this issue out.
- */
static void free_buffer_page(struct buffer_page *bpage)
{
free_page((unsigned long)bpage->page);
--
2.39.2




2023-03-15 14:20:54

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH 0/7] remove SLOB and allow kfree() with kmem_cache_alloc()

On Wed, 15 Mar 2023 14:53:14 +0100
Vlastimil Babka <[email protected]> wrote:

> On 3/13/23 17:31, Steven Rostedt wrote:
> > Just remove that comment. And you could even add:
> >
> > Suggested-by: Steven Rostedt (Google) <[email protected]>
> > Fixes: e4c2ce82ca27 ("ring_buffer: allocate buffer page pointer")
>
> Thanks for the analysis. Want to take the following patch to your tree or
> should I make it part of the series?

I can take it if you send it as a proper patch and Cc
[email protected].

I'm guessing it's not required for stable.

-- Steve

2023-03-15 14:23:09

by Vlastimil Babka

[permalink] [raw]
Subject: Re: [PATCH 0/7] remove SLOB and allow kfree() with kmem_cache_alloc()

On 3/15/23 15:20, Steven Rostedt wrote:
> On Wed, 15 Mar 2023 14:53:14 +0100
> Vlastimil Babka <[email protected]> wrote:
>
>> On 3/13/23 17:31, Steven Rostedt wrote:
>> > Just remove that comment. And you could even add:
>> >
>> > Suggested-by: Steven Rostedt (Google) <[email protected]>
>> > Fixes: e4c2ce82ca27 ("ring_buffer: allocate buffer page pointer")
>>
>> Thanks for the analysis. Want to take the following patch to your tree or
>> should I make it part of the series?
>
> I can take it if you send it as a proper patch and Cc
> [email protected].

OK, will do.

> I'm guessing it's not required for stable.

No, but maybe AUTOSEL will pick it up anyway as it has a Fixes: tag, but
that's their problem ;)

> -- Steve


2023-03-15 14:51:02

by Mike Rapoport

[permalink] [raw]
Subject: Re: [PATCH 7/7] mm/slab: document kfree() as allowed for kmem_cache_alloc() objects

On Wed, Mar 15, 2023 at 02:38:47PM +0100, Vlastimil Babka wrote:
> On 3/12/23 10:59, Mike Rapoport wrote:
> > On Fri, Mar 10, 2023 at 11:32:09AM +0100, Vlastimil Babka wrote:
> >> This will make it easier to free objects in situations when they can
> >> come from either kmalloc() or kmem_cache_alloc(), and also allow
> >> kfree_rcu() for freeing objects from kmem_cache_alloc().
> >>
> >> For the SLAB and SLUB allocators this was always possible so with SLOB
> >> gone, we can document it as supported.
> >>
> >> Signed-off-by: Vlastimil Babka <[email protected]>
> >> Cc: Mike Rapoport <[email protected]>
> >> Cc: Jonathan Corbet <[email protected]>
> >> Cc: "Paul E. McKenney" <[email protected]>
> >> Cc: Frederic Weisbecker <[email protected]>
> >> Cc: Neeraj Upadhyay <[email protected]>
> >> Cc: Josh Triplett <[email protected]>
> >> Cc: Steven Rostedt <[email protected]>
> >> Cc: Mathieu Desnoyers <[email protected]>
> >> Cc: Lai Jiangshan <[email protected]>
> >> Cc: Joel Fernandes <[email protected]>
> >> ---
> >> Documentation/core-api/memory-allocation.rst | 15 +++++++++++----
> >> include/linux/rcupdate.h | 6 ++++--
> >> mm/slab_common.c | 5 +----
> >> 3 files changed, 16 insertions(+), 10 deletions(-)
> >>
> >> diff --git a/Documentation/core-api/memory-allocation.rst b/Documentation/core-api/memory-allocation.rst
> >> index 5954ddf6ee13..f9e8d352ed67 100644
> >> --- a/Documentation/core-api/memory-allocation.rst
> >> +++ b/Documentation/core-api/memory-allocation.rst
> >> @@ -170,7 +170,14 @@ should be used if a part of the cache might be copied to the userspace.
> >> After the cache is created kmem_cache_alloc() and its convenience
> >> wrappers can allocate memory from that cache.
> >>
> >> -When the allocated memory is no longer needed it must be freed. You can
> >> -use kvfree() for the memory allocated with `kmalloc`, `vmalloc` and
> >> -`kvmalloc`. The slab caches should be freed with kmem_cache_free(). And
> >> -don't forget to destroy the cache with kmem_cache_destroy().
> >> +When the allocated memory is no longer needed it must be freed. Objects
> >
> > I'd add a line break before Objects ^
> >
> >> +allocated by `kmalloc` can be freed by `kfree` or `kvfree`.
> >> +Objects allocated by `kmem_cache_alloc` can be freed with `kmem_cache_free`
> >> +or also by `kfree` or `kvfree`, which can be more convenient as it does
> >
> > Maybe replace 'or also by' with a coma:
> >
> > Objects allocated by `kmem_cache_alloc` can be freed with `kmem_cache_free`,
> > `kfree` or `kvfree`, which can be more convenient as it does
>
> But then I need to clarify what the "which" applies to?

Yeah, I kinda missed that...

> >
> >> +not require the kmem_cache pointed.
> >
> > ^ pointer.
> >
> >> +The rules for _bulk and _rcu flavors of freeing functions are analogical.
> >
> > Maybe
> >
> > The same rules apply to _bulk and _rcu flavors of freeing functions.
>
> So like this incremental diff?

> diff --git a/Documentation/core-api/memory-allocation.rst b/Documentation/core-api/memory-allocation.rst
> index f9e8d352ed67..1c58d883b273 100644
> --- a/Documentation/core-api/memory-allocation.rst
> +++ b/Documentation/core-api/memory-allocation.rst
> @@ -170,12 +170,14 @@ should be used if a part of the cache might be copied to the userspace.
> After the cache is created kmem_cache_alloc() and its convenience
> wrappers can allocate memory from that cache.
>
> -When the allocated memory is no longer needed it must be freed. Objects
> -allocated by `kmalloc` can be freed by `kfree` or `kvfree`.
> -Objects allocated by `kmem_cache_alloc` can be freed with `kmem_cache_free`
> -or also by `kfree` or `kvfree`, which can be more convenient as it does
> -not require the kmem_cache pointed.
> -The rules for _bulk and _rcu flavors of freeing functions are analogical.
> +When the allocated memory is no longer needed it must be freed.
> +
> +Objects allocated by `kmalloc` can be freed by `kfree` or `kvfree`. Objects
> +allocated by `kmem_cache_alloc` can be freed with `kmem_cache_free`, `kfree`
> +or `kvfree`, where the latter two might be more convenient thanks to not
> +needing the kmem_cache pointer.

... but this way it's more explicit that kfree and kvfree don't need the
kmem_cache pointer.

> +
> +The same rules apply to _bulk and _rcu flavors of freeing functions.
>
> Memory allocated by `vmalloc` can be freed with `vfree` or `kvfree`.
> Memory allocated by `kvmalloc` can be freed with `kvfree`.
>

--
Sincerely yours,
Mike.