This is the third version of the slab cgroup controller rework.
The patchset moves the accounting from the page level to the object
level, which allows slab pages to be shared between memory cgroups.
This leads to a significant win in slab utilization (up to 45%)
and a corresponding drop in the total kernel memory footprint.
The reduced number of unmovable slab pages should also have a positive
effect on memory fragmentation.
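To illustrate the idea, here is a minimal userspace sketch (not kernel
code; charge_object()/uncharge_object() and the struct layout are made up
for the example, but the per-page owner vector mirrors the page->obj_cgroups
array introduced later in the series): each slab page carries one owner
pointer per object slot, so objects charged to different cgroups can share
the same page.

  #include <stdio.h>

  #define OBJS_PER_SLAB 8

  struct cgroup {
          const char *name;
          long charged_bytes;
  };

  struct slab_page {
          long obj_size;
          struct cgroup *obj_owner[OBJS_PER_SLAB];  /* models page->obj_cgroups */
  };

  static void charge_object(struct slab_page *page, unsigned int idx,
                            struct cgroup *cg)
  {
          page->obj_owner[idx] = cg;              /* remember the owner of this object */
          cg->charged_bytes += page->obj_size;    /* charge the object, not the page */
  }

  static void uncharge_object(struct slab_page *page, unsigned int idx)
  {
          struct cgroup *cg = page->obj_owner[idx];

          if (cg) {
                  cg->charged_bytes -= page->obj_size;
                  page->obj_owner[idx] = NULL;
          }
  }

  int main(void)
  {
          struct cgroup a = { "A", 0 }, b = { "B", 0 };
          struct slab_page page = { .obj_size = 256 };

          charge_object(&page, 0, &a);    /* object 0 is charged to cgroup A */
          charge_object(&page, 1, &b);    /* object 1 shares the page, charged to B */
          uncharge_object(&page, 0);

          printf("A: %ld bytes, B: %ld bytes\n", a.charged_bytes, b.charged_bytes);
          return 0;
  }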
The patchset also makes the slab accounting code simpler: there is no
longer a need for the complicated dynamic creation and destruction of
per-cgroup slab caches; all memory cgroups use a global set of shared
slab caches. The lifetime of slab caches is no longer tied to the
lifetime of memory cgroups.
The more precise accounting does require more CPU, but in practice
the difference seems to be negligible. We've been using the new slab
controller in Facebook production for several months with different
workloads and haven't seen any noticeable regressions. What we have seen
are memory savings on the order of 1 GB per host (varying heavily with
the actual workload, amount of RAM, number of CPUs, memory pressure, etc.).
The third version of the patchset adds yet another step towards
simplifying the code: sharing slab caches between accounted and
non-accounted allocations. It comes with significant upsides (most
noticeably, the complete elimination of dynamic slab cache creation),
but not without some regression risk, so this change sits on top of
the patchset rather than being fully merged into it. In the unlikely
event of a noticeable performance regression it can be reverted separately.
v3:
1) added a patch that switches to a single global set of kmem_caches
2) dropped the kmem API clean up, because it has already been merged
3) byte-sized slab vmstat API on top of page-sized global counters and
byte-sized memcg/lruvec counters
4) obj_cgroup refcounting simplifications and other minor fixes
5) other minor changes
v2:
1) implemented re-layering and renaming suggested by Johannes,
added his patch to the set. Thanks!
2) fixed the issue discovered by Bharata B Rao. Thanks!
3) added kmem API clean up part
4) added slab/memcg follow-up clean up part
5) fixed a couple of issues discovered by internal testing on FB fleet.
6) added kselftests
7) included metadata into the charge calculation
8) refreshed commit logs, regrouped patches, rebased onto mm tree, etc
v1:
1) fixed a bug in zoneinfo_show_print()
2) added some comments to the subpage charging API, a minor fix
3) separated memory.kmem.slabinfo deprecation into a separate patch,
provided a drgn-based replacement
4) rebased on top of the current mm tree
RFC:
https://lwn.net/Articles/798605/
Johannes Weiner (1):
mm: memcontrol: decouple reference counting from page accounting
Roman Gushchin (18):
mm: memcg: factor out memcg- and lruvec-level changes out of
__mod_lruvec_state()
mm: memcg: prepare for byte-sized vmstat items
mm: memcg: convert vmstat slab counters to bytes
mm: slub: implement SLUB version of obj_to_index()
mm: memcg/slab: obj_cgroup API
mm: memcg/slab: allocate obj_cgroups for non-root slab pages
mm: memcg/slab: save obj_cgroup for non-root slab objects
mm: memcg/slab: charge individual slab objects instead of pages
mm: memcg/slab: deprecate memory.kmem.slabinfo
mm: memcg/slab: move memcg_kmem_bypass() to memcontrol.h
mm: memcg/slab: use a single set of kmem_caches for all accounted
allocations
mm: memcg/slab: simplify memcg cache creation
mm: memcg/slab: deprecate memcg_kmem_get_cache()
mm: memcg/slab: deprecate slab_root_caches
mm: memcg/slab: remove redundant check in memcg_accumulate_slabinfo()
mm: memcg/slab: use a single set of kmem_caches for all allocations
kselftests: cgroup: add kernel memory accounting tests
tools/cgroup: add memcg_slabinfo.py tool
drivers/base/node.c | 6 +-
fs/proc/meminfo.c | 4 +-
include/linux/memcontrol.h | 80 ++-
include/linux/mm_types.h | 5 +-
include/linux/mmzone.h | 19 +-
include/linux/slab.h | 5 -
include/linux/slab_def.h | 8 +-
include/linux/slub_def.h | 20 +-
include/linux/vmstat.h | 16 +-
kernel/power/snapshot.c | 2 +-
mm/memcontrol.c | 569 ++++++++++--------
mm/oom_kill.c | 2 +-
mm/page_alloc.c | 8 +-
mm/slab.c | 39 +-
mm/slab.h | 365 +++++-------
mm/slab_common.c | 643 +--------------------
mm/slob.c | 12 +-
mm/slub.c | 183 +-----
mm/vmscan.c | 3 +-
mm/vmstat.c | 33 +-
mm/workingset.c | 6 +-
tools/cgroup/memcg_slabinfo.py | 226 ++++++++
tools/testing/selftests/cgroup/.gitignore | 1 +
tools/testing/selftests/cgroup/Makefile | 2 +
tools/testing/selftests/cgroup/test_kmem.c | 382 ++++++++++++
25 files changed, 1322 insertions(+), 1317 deletions(-)
create mode 100755 tools/cgroup/memcg_slabinfo.py
create mode 100644 tools/testing/selftests/cgroup/test_kmem.c
--
2.25.3
This is a fairly big but mostly red patch, which makes all accounted
slab allocations use a single set of kmem_caches instead of
creating a separate set for each memory cgroup.
Because the number of non-root kmem_caches is now capped by the number
of root kmem_caches, there is no need to shrink or destroy them
prematurely; they can simply be destroyed together with their
root counterparts. This makes it possible to dramatically simplify the
management of non-root kmem_caches and delete a ton of code.
This patch performs the following changes:
1) introduces the memcg_params.memcg_cache pointer to represent the
kmem_cache which will be used for all non-root allocations
2) reuses the existing memcg kmem_cache creation mechanism
to create the memcg kmem_cache on the first allocation attempt
3) names memcg kmem_caches <kmemcache_name>-memcg,
e.g. dentry-memcg
4) simplifies memcg_kmem_get_cache() to just return the memcg kmem_cache
or schedule its creation and return the root cache (see the condensed
fragment below)
5) removes almost all non-root kmem_cache management code
(separate refcounter, reparenting, shrinking, etc.)
6) makes the slab debugfs interface display the root_mem_cgroup css id
and never show the :dead and :deact flags in the memcg_slabinfo attribute.
Following patches in the series will simplify kmem_cache creation further.
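The core of change (4), condensed from the memcg_kmem_get_cache() hunk
further down in this patch (simplified fragment, not new code):

  struct kmem_cache *memcg_kmem_get_cache(struct kmem_cache *cachep)
  {
          struct kmem_cache *memcg_cachep;

          /* a single shared memcg cache per root cache */
          memcg_cachep = READ_ONCE(cachep->memcg_params.memcg_cache);
          if (unlikely(!memcg_cachep)) {
                  /* created asynchronously; use the root cache meanwhile */
                  memcg_schedule_kmem_cache_create(cachep);
                  return cachep;
          }

          return memcg_cachep;
  }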
Signed-off-by: Roman Gushchin <[email protected]>
---
include/linux/memcontrol.h | 5 +-
include/linux/slab.h | 5 +-
mm/memcontrol.c | 163 +++-----------
mm/slab.c | 16 +-
mm/slab.h | 145 ++++---------
mm/slab_common.c | 426 ++++---------------------------------
mm/slub.c | 38 +---
7 files changed, 128 insertions(+), 670 deletions(-)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 840eb8d486a8..698b92d60da5 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -322,7 +322,6 @@ struct mem_cgroup {
/* Index in the kmem_cache->memcg_params.memcg_caches array */
int kmemcg_id;
enum memcg_kmem_state kmem_state;
- struct list_head kmem_caches;
struct obj_cgroup __rcu *objcg;
struct list_head objcg_list;
#endif
@@ -1426,9 +1425,7 @@ static inline void memcg_set_shrinker_bit(struct mem_cgroup *memcg,
}
#endif
-struct kmem_cache *memcg_kmem_get_cache(struct kmem_cache *cachep,
- struct obj_cgroup **objcgp);
-void memcg_kmem_put_cache(struct kmem_cache *cachep);
+struct kmem_cache *memcg_kmem_get_cache(struct kmem_cache *cachep);
#ifdef CONFIG_MEMCG_KMEM
int __memcg_kmem_charge(struct mem_cgroup *memcg, gfp_t gfp,
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 6d454886bcaf..310768bfa8d2 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -155,8 +155,7 @@ struct kmem_cache *kmem_cache_create_usercopy(const char *name,
void kmem_cache_destroy(struct kmem_cache *);
int kmem_cache_shrink(struct kmem_cache *);
-void memcg_create_kmem_cache(struct mem_cgroup *, struct kmem_cache *);
-void memcg_deactivate_kmem_caches(struct mem_cgroup *, struct mem_cgroup *);
+void memcg_create_kmem_cache(struct kmem_cache *cachep);
/*
* Please use this macro to create slab caches. Simply specify the
@@ -578,8 +577,6 @@ static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
return __kmalloc_node(size, flags, node);
}
-int memcg_update_all_caches(int num_memcgs);
-
/**
* kmalloc_array - allocate memory for an array.
* @n: number of elements.
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 06a5929f4872..9fe2433fbe67 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -330,7 +330,7 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
}
/*
- * This will be the memcg's index in each cache's ->memcg_params.memcg_caches.
+ * This will be used as a shrinker list's index.
* The main reason for not using cgroup id for this:
* this works better in sparse environments, where we have a lot of memcgs,
* but only a few kmem-limited. Or also, if we have, for instance, 200
@@ -549,20 +549,16 @@ ino_t page_cgroup_ino(struct page *page)
unsigned long ino = 0;
rcu_read_lock();
- if (PageSlab(page) && !PageTail(page)) {
- memcg = memcg_from_slab_page(page);
- } else {
- memcg = page->mem_cgroup;
+ memcg = page->mem_cgroup;
- /*
- * The lowest bit set means that memcg isn't a valid
- * memcg pointer, but a obj_cgroups pointer.
- * In this case the page is shared and doesn't belong
- * to any specific memory cgroup.
- */
- if ((unsigned long) memcg & 0x1UL)
- memcg = NULL;
- }
+ /*
+ * The lowest bit set means that memcg isn't a valid
+ * memcg pointer, but a obj_cgroups pointer.
+ * In this case the page is shared and doesn't belong
+ * to any specific memory cgroup.
+ */
+ if ((unsigned long) memcg & 0x1UL)
+ memcg = NULL;
while (memcg && !(memcg->css.flags & CSS_ONLINE))
memcg = parent_mem_cgroup(memcg);
@@ -2820,12 +2816,18 @@ struct mem_cgroup *mem_cgroup_from_obj(void *p)
page = virt_to_head_page(p);
/*
- * Slab pages don't have page->mem_cgroup set because corresponding
- * kmem caches can be reparented during the lifetime. That's why
- * memcg_from_slab_page() should be used instead.
+ * Slab objects are accounted individually, not per-page.
+ * Memcg membership data for each individual object is saved in
+ * the page->obj_cgroups.
*/
- if (PageSlab(page))
- return memcg_from_slab_page(page);
+ if (page_has_obj_cgroups(page)) {
+ struct obj_cgroup *objcg;
+ unsigned int off;
+
+ off = obj_to_index(page->slab_cache, page, p);
+ objcg = page_obj_cgroups(page)[off];
+ return obj_cgroup_memcg(objcg);
+ }
/* All other pages use page->mem_cgroup */
return page->mem_cgroup;
@@ -2880,9 +2882,7 @@ static int memcg_alloc_cache_id(void)
else if (size > MEMCG_CACHES_MAX_SIZE)
size = MEMCG_CACHES_MAX_SIZE;
- err = memcg_update_all_caches(size);
- if (!err)
- err = memcg_update_all_list_lrus(size);
+ err = memcg_update_all_list_lrus(size);
if (!err)
memcg_nr_cache_ids = size;
@@ -2901,7 +2901,6 @@ static void memcg_free_cache_id(int id)
}
struct memcg_kmem_cache_create_work {
- struct mem_cgroup *memcg;
struct kmem_cache *cachep;
struct work_struct work;
};
@@ -2910,31 +2909,24 @@ static void memcg_kmem_cache_create_func(struct work_struct *w)
{
struct memcg_kmem_cache_create_work *cw =
container_of(w, struct memcg_kmem_cache_create_work, work);
- struct mem_cgroup *memcg = cw->memcg;
struct kmem_cache *cachep = cw->cachep;
- memcg_create_kmem_cache(memcg, cachep);
+ memcg_create_kmem_cache(cachep);
- css_put(&memcg->css);
kfree(cw);
}
/*
* Enqueue the creation of a per-memcg kmem_cache.
*/
-static void memcg_schedule_kmem_cache_create(struct mem_cgroup *memcg,
- struct kmem_cache *cachep)
+static void memcg_schedule_kmem_cache_create(struct kmem_cache *cachep)
{
struct memcg_kmem_cache_create_work *cw;
- if (!css_tryget_online(&memcg->css))
- return;
-
cw = kmalloc(sizeof(*cw), GFP_NOWAIT | __GFP_NOWARN);
if (!cw)
return;
- cw->memcg = memcg;
cw->cachep = cachep;
INIT_WORK(&cw->work, memcg_kmem_cache_create_func);
@@ -2942,102 +2934,26 @@ static void memcg_schedule_kmem_cache_create(struct mem_cgroup *memcg,
}
/**
- * memcg_kmem_get_cache: select the correct per-memcg cache for allocation
+ * memcg_kmem_get_cache: select memcg or root cache for allocation
* @cachep: the original global kmem cache
*
* Return the kmem_cache we're supposed to use for a slab allocation.
- * We try to use the current memcg's version of the cache.
*
* If the cache does not exist yet, if we are the first user of it, we
* create it asynchronously in a workqueue and let the current allocation
* go through with the original cache.
- *
- * This function takes a reference to the cache it returns to assure it
- * won't get destroyed while we are working with it. Once the caller is
- * done with it, memcg_kmem_put_cache() must be called to release the
- * reference.
*/
-struct kmem_cache *memcg_kmem_get_cache(struct kmem_cache *cachep,
- struct obj_cgroup **objcgp)
+struct kmem_cache *memcg_kmem_get_cache(struct kmem_cache *cachep)
{
- struct mem_cgroup *memcg;
struct kmem_cache *memcg_cachep;
- struct memcg_cache_array *arr;
- int kmemcg_id;
-
- VM_BUG_ON(!is_root_cache(cachep));
- if (memcg_kmem_bypass())
+ memcg_cachep = READ_ONCE(cachep->memcg_params.memcg_cache);
+ if (unlikely(!memcg_cachep)) {
+ memcg_schedule_kmem_cache_create(cachep);
return cachep;
-
- rcu_read_lock();
-
- if (unlikely(current->active_memcg))
- memcg = current->active_memcg;
- else
- memcg = mem_cgroup_from_task(current);
-
- if (!memcg || memcg == root_mem_cgroup)
- goto out_unlock;
-
- kmemcg_id = READ_ONCE(memcg->kmemcg_id);
- if (kmemcg_id < 0)
- goto out_unlock;
-
- arr = rcu_dereference(cachep->memcg_params.memcg_caches);
-
- /*
- * Make sure we will access the up-to-date value. The code updating
- * memcg_caches issues a write barrier to match the data dependency
- * barrier inside READ_ONCE() (see memcg_create_kmem_cache()).
- */
- memcg_cachep = READ_ONCE(arr->entries[kmemcg_id]);
-
- /*
- * If we are in a safe context (can wait, and not in interrupt
- * context), we could be be predictable and return right away.
- * This would guarantee that the allocation being performed
- * already belongs in the new cache.
- *
- * However, there are some clashes that can arrive from locking.
- * For instance, because we acquire the slab_mutex while doing
- * memcg_create_kmem_cache, this means no further allocation
- * could happen with the slab_mutex held. So it's better to
- * defer everything.
- *
- * If the memcg is dying or memcg_cache is about to be released,
- * don't bother creating new kmem_caches. Because memcg_cachep
- * is ZEROed as the fist step of kmem offlining, we don't need
- * percpu_ref_tryget_live() here. css_tryget_online() check in
- * memcg_schedule_kmem_cache_create() will prevent us from
- * creation of a new kmem_cache.
- */
- if (unlikely(!memcg_cachep))
- memcg_schedule_kmem_cache_create(memcg, cachep);
- else if (percpu_ref_tryget(&memcg_cachep->memcg_params.refcnt)) {
- struct obj_cgroup *objcg = rcu_dereference(memcg->objcg);
-
- if (!objcg || !obj_cgroup_tryget(objcg)) {
- percpu_ref_put(&memcg_cachep->memcg_params.refcnt);
- goto out_unlock;
- }
-
- *objcgp = objcg;
- cachep = memcg_cachep;
}
-out_unlock:
- rcu_read_unlock();
- return cachep;
-}
-/**
- * memcg_kmem_put_cache: drop reference taken by memcg_kmem_get_cache
- * @cachep: the cache returned by memcg_kmem_get_cache
- */
-void memcg_kmem_put_cache(struct kmem_cache *cachep)
-{
- if (!is_root_cache(cachep))
- percpu_ref_put(&cachep->memcg_params.refcnt);
+ return memcg_cachep;
}
/**
@@ -3708,7 +3624,6 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)
*/
memcg->kmemcg_id = memcg_id;
memcg->kmem_state = KMEM_ONLINE;
- INIT_LIST_HEAD(&memcg->kmem_caches);
return 0;
}
@@ -3721,22 +3636,13 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg)
if (memcg->kmem_state != KMEM_ONLINE)
return;
- /*
- * Clear the online state before clearing memcg_caches array
- * entries. The slab_mutex in memcg_deactivate_kmem_caches()
- * guarantees that no cache will be created for this cgroup
- * after we are done (see memcg_create_kmem_cache()).
- */
+
memcg->kmem_state = KMEM_ALLOCATED;
parent = parent_mem_cgroup(memcg);
if (!parent)
parent = root_mem_cgroup;
- /*
- * Deactivate and reparent kmem_caches and objcgs.
- */
- memcg_deactivate_kmem_caches(memcg, parent);
memcg_reparent_objcgs(memcg, parent);
kmemcg_id = memcg->kmemcg_id;
@@ -3771,10 +3677,8 @@ static void memcg_free_kmem(struct mem_cgroup *memcg)
if (unlikely(memcg->kmem_state == KMEM_ONLINE))
memcg_offline_kmem(memcg);
- if (memcg->kmem_state == KMEM_ALLOCATED) {
- WARN_ON(!list_empty(&memcg->kmem_caches));
+ if (memcg->kmem_state == KMEM_ALLOCATED)
static_branch_dec(&memcg_kmem_enabled_key);
- }
}
#else
static int memcg_online_kmem(struct mem_cgroup *memcg)
@@ -5363,9 +5267,6 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
/* The following stuff does not apply to the root */
if (!parent) {
-#ifdef CONFIG_MEMCG_KMEM
- INIT_LIST_HEAD(&memcg->kmem_caches);
-#endif
root_mem_cgroup = memcg;
return &memcg->css;
}
diff --git a/mm/slab.c b/mm/slab.c
index ad38fbae4042..17f781a5b62c 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1239,7 +1239,7 @@ void __init kmem_cache_init(void)
nr_node_ids * sizeof(struct kmem_cache_node *),
SLAB_HWCACHE_ALIGN, 0, 0);
list_add(&kmem_cache->list, &slab_caches);
- memcg_link_cache(kmem_cache, NULL);
+ memcg_link_cache(kmem_cache);
slab_state = PARTIAL;
/*
@@ -2244,17 +2244,6 @@ int __kmem_cache_shrink(struct kmem_cache *cachep)
return (ret ? 1 : 0);
}
-#ifdef CONFIG_MEMCG
-void __kmemcg_cache_deactivate(struct kmem_cache *cachep)
-{
- __kmem_cache_shrink(cachep);
-}
-
-void __kmemcg_cache_deactivate_after_rcu(struct kmem_cache *s)
-{
-}
-#endif
-
int __kmem_cache_shutdown(struct kmem_cache *cachep)
{
return __kmem_cache_shrink(cachep);
@@ -3862,7 +3851,8 @@ static int do_tune_cpucache(struct kmem_cache *cachep, int limit,
return ret;
lockdep_assert_held(&slab_mutex);
- for_each_memcg_cache(c, cachep) {
+ c = memcg_cache(cachep);
+ if (c) {
/* return value determined by the root cache only */
__do_tune_cpucache(c, limit, batchcount, shared, gfp);
}
diff --git a/mm/slab.h b/mm/slab.h
index 0ecf14bec6a2..28c582ec997a 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -32,66 +32,25 @@ struct kmem_cache {
#else /* !CONFIG_SLOB */
-struct memcg_cache_array {
- struct rcu_head rcu;
- struct kmem_cache *entries[];
-};
-
/*
* This is the main placeholder for memcg-related information in kmem caches.
- * Both the root cache and the child caches will have it. For the root cache,
- * this will hold a dynamically allocated array large enough to hold
- * information about the currently limited memcgs in the system. To allow the
- * array to be accessed without taking any locks, on relocation we free the old
- * version only after a grace period.
- *
- * Root and child caches hold different metadata.
+ * Both the root cache and the child cache will have it. Some fields are used
+ * in both cases, other are specific to root caches.
*
* @root_cache: Common to root and child caches. NULL for root, pointer to
* the root cache for children.
*
* The following fields are specific to root caches.
*
- * @memcg_caches: kmemcg ID indexed table of child caches. This table is
- * used to index child cachces during allocation and cleared
- * early during shutdown.
- *
- * @root_caches_node: List node for slab_root_caches list.
- *
- * @children: List of all child caches. While the child caches are also
- * reachable through @memcg_caches, a child cache remains on
- * this list until it is actually destroyed.
- *
- * The following fields are specific to child caches.
- *
- * @memcg: Pointer to the memcg this cache belongs to.
- *
- * @children_node: List node for @root_cache->children list.
- *
- * @kmem_caches_node: List node for @memcg->kmem_caches list.
+ * @memcg_cache: pointer to memcg kmem cache, used by all non-root memory
+ * cgroups.
+ * @root_caches_node: list node for slab_root_caches list.
*/
struct memcg_cache_params {
struct kmem_cache *root_cache;
- union {
- struct {
- struct memcg_cache_array __rcu *memcg_caches;
- struct list_head __root_caches_node;
- struct list_head children;
- bool dying;
- };
- struct {
- struct mem_cgroup *memcg;
- struct list_head children_node;
- struct list_head kmem_caches_node;
- struct percpu_ref refcnt;
-
- void (*work_fn)(struct kmem_cache *);
- union {
- struct rcu_head rcu_head;
- struct work_struct work;
- };
- };
- };
+
+ struct kmem_cache *memcg_cache;
+ struct list_head __root_caches_node;
};
#endif /* CONFIG_SLOB */
@@ -234,8 +193,6 @@ bool __kmem_cache_empty(struct kmem_cache *);
int __kmem_cache_shutdown(struct kmem_cache *);
void __kmem_cache_release(struct kmem_cache *);
int __kmem_cache_shrink(struct kmem_cache *);
-void __kmemcg_cache_deactivate(struct kmem_cache *s);
-void __kmemcg_cache_deactivate_after_rcu(struct kmem_cache *s);
void slab_kmem_cache_release(struct kmem_cache *);
void kmem_cache_shrink_all(struct kmem_cache *s);
@@ -281,14 +238,6 @@ static inline int cache_vmstat_idx(struct kmem_cache *s)
extern struct list_head slab_root_caches;
#define root_caches_node memcg_params.__root_caches_node
-/*
- * Iterate over all memcg caches of the given root cache. The caller must hold
- * slab_mutex.
- */
-#define for_each_memcg_cache(iter, root) \
- list_for_each_entry(iter, &(root)->memcg_params.children, \
- memcg_params.children_node)
-
static inline bool is_root_cache(struct kmem_cache *s)
{
return !s->memcg_params.root_cache;
@@ -319,6 +268,13 @@ static inline struct kmem_cache *memcg_root_cache(struct kmem_cache *s)
return s->memcg_params.root_cache;
}
+static inline struct kmem_cache *memcg_cache(struct kmem_cache *s)
+{
+ if (is_root_cache(s))
+ return s->memcg_params.memcg_cache;
+ return NULL;
+}
+
static inline struct obj_cgroup **page_obj_cgroups(struct page *page)
{
/*
@@ -331,25 +287,9 @@ static inline struct obj_cgroup **page_obj_cgroups(struct page *page)
((unsigned long)page->obj_cgroups & ~0x1UL);
}
-/*
- * Expects a pointer to a slab page. Please note, that PageSlab() check
- * isn't sufficient, as it returns true also for tail compound slab pages,
- * which do not have slab_cache pointer set.
- * So this function assumes that the page can pass PageSlab() && !PageTail()
- * check.
- *
- * The kmem_cache can be reparented asynchronously. The caller must ensure
- * the memcg lifetime, e.g. by taking rcu_read_lock() or cgroup_mutex.
- */
-static inline struct mem_cgroup *memcg_from_slab_page(struct page *page)
+static inline bool page_has_obj_cgroups(struct page *page)
{
- struct kmem_cache *s;
-
- s = READ_ONCE(page->slab_cache);
- if (s && !is_root_cache(s))
- return READ_ONCE(s->memcg_params.memcg);
-
- return NULL;
+ return ((unsigned long)page->obj_cgroups & 0x1UL);
}
static inline int memcg_alloc_page_obj_cgroups(struct page *page, gfp_t gfp,
@@ -385,16 +325,25 @@ static inline struct kmem_cache *memcg_slab_pre_alloc_hook(struct kmem_cache *s,
size_t objects, gfp_t flags)
{
struct kmem_cache *cachep;
+ struct obj_cgroup *objcg;
+
+ if (memcg_kmem_bypass())
+ return s;
- cachep = memcg_kmem_get_cache(s, objcgp);
+ cachep = memcg_kmem_get_cache(s);
if (is_root_cache(cachep))
return s;
- if (obj_cgroup_charge(*objcgp, flags, objects * obj_full_size(s))) {
- memcg_kmem_put_cache(cachep);
+ objcg = get_obj_cgroup_from_current();
+ if (!objcg)
+ return s;
+
+ if (obj_cgroup_charge(objcg, flags, objects * obj_full_size(s))) {
+ obj_cgroup_put(objcg);
cachep = NULL;
}
+ *objcgp = objcg;
return cachep;
}
@@ -433,7 +382,6 @@ static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
}
}
obj_cgroup_put(objcg);
- memcg_kmem_put_cache(s);
}
static inline void memcg_slab_free_hook(struct kmem_cache *s, struct page *page,
@@ -457,7 +405,7 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s, struct page *page,
}
extern void slab_init_memcg_params(struct kmem_cache *);
-extern void memcg_link_cache(struct kmem_cache *s, struct mem_cgroup *memcg);
+extern void memcg_link_cache(struct kmem_cache *s);
#else /* CONFIG_MEMCG_KMEM */
@@ -465,9 +413,6 @@ extern void memcg_link_cache(struct kmem_cache *s, struct mem_cgroup *memcg);
#define slab_root_caches slab_caches
#define root_caches_node list
-#define for_each_memcg_cache(iter, root) \
- for ((void)(iter), (void)(root); 0; )
-
static inline bool is_root_cache(struct kmem_cache *s)
{
return true;
@@ -489,7 +434,17 @@ static inline struct kmem_cache *memcg_root_cache(struct kmem_cache *s)
return s;
}
-static inline struct mem_cgroup *memcg_from_slab_page(struct page *page)
+static inline struct kmem_cache *memcg_cache(struct kmem_cache *s)
+{
+ return NULL;
+}
+
+static inline bool page_has_obj_cgroups(struct page *page)
+{
+ return false;
+}
+
+static inline struct mem_cgroup *memcg_from_slab_obj(void *ptr)
{
return NULL;
}
@@ -526,8 +481,7 @@ static inline void slab_init_memcg_params(struct kmem_cache *s)
{
}
-static inline void memcg_link_cache(struct kmem_cache *s,
- struct mem_cgroup *memcg)
+static inline void memcg_link_cache(struct kmem_cache *s)
{
}
@@ -548,17 +502,14 @@ static __always_inline int charge_slab_page(struct page *page,
gfp_t gfp, int order,
struct kmem_cache *s)
{
-#ifdef CONFIG_MEMCG_KMEM
if (!is_root_cache(s)) {
int ret;
ret = memcg_alloc_page_obj_cgroups(page, gfp, objs_per_slab(s));
if (ret)
return ret;
-
- percpu_ref_get_many(&s->memcg_params.refcnt, 1 << order);
}
-#endif
+
mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
PAGE_SIZE << order);
return 0;
@@ -567,12 +518,9 @@ static __always_inline int charge_slab_page(struct page *page,
static __always_inline void uncharge_slab_page(struct page *page, int order,
struct kmem_cache *s)
{
-#ifdef CONFIG_MEMCG_KMEM
- if (!is_root_cache(s)) {
+ if (!is_root_cache(s))
memcg_free_page_obj_cgroups(page);
- percpu_ref_put_many(&s->memcg_params.refcnt, 1 << order);
- }
-#endif
+
mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
-(PAGE_SIZE << order));
}
@@ -721,9 +669,6 @@ static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
void *slab_start(struct seq_file *m, loff_t *pos);
void *slab_next(struct seq_file *m, void *p, loff_t *pos);
void slab_stop(struct seq_file *m, void *p);
-void *memcg_slab_start(struct seq_file *m, loff_t *pos);
-void *memcg_slab_next(struct seq_file *m, void *p, loff_t *pos);
-void memcg_slab_stop(struct seq_file *m, void *p);
int memcg_slab_show(struct seq_file *m, void *p);
#if defined(CONFIG_SLAB) || defined(CONFIG_SLUB_DEBUG)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 3c89c2adc930..e9deaafddbb6 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -131,141 +131,36 @@ int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t nr,
#ifdef CONFIG_MEMCG_KMEM
LIST_HEAD(slab_root_caches);
-static DEFINE_SPINLOCK(memcg_kmem_wq_lock);
-
-static void kmemcg_cache_shutdown(struct percpu_ref *percpu_ref);
void slab_init_memcg_params(struct kmem_cache *s)
{
s->memcg_params.root_cache = NULL;
- RCU_INIT_POINTER(s->memcg_params.memcg_caches, NULL);
- INIT_LIST_HEAD(&s->memcg_params.children);
- s->memcg_params.dying = false;
+ s->memcg_params.memcg_cache = NULL;
}
-static int init_memcg_params(struct kmem_cache *s,
- struct kmem_cache *root_cache)
+static void init_memcg_params(struct kmem_cache *s,
+ struct kmem_cache *root_cache)
{
- struct memcg_cache_array *arr;
-
- if (root_cache) {
- int ret = percpu_ref_init(&s->memcg_params.refcnt,
- kmemcg_cache_shutdown,
- 0, GFP_KERNEL);
- if (ret)
- return ret;
-
+ if (root_cache)
s->memcg_params.root_cache = root_cache;
- INIT_LIST_HEAD(&s->memcg_params.children_node);
- INIT_LIST_HEAD(&s->memcg_params.kmem_caches_node);
- return 0;
- }
-
- slab_init_memcg_params(s);
-
- if (!memcg_nr_cache_ids)
- return 0;
-
- arr = kvzalloc(sizeof(struct memcg_cache_array) +
- memcg_nr_cache_ids * sizeof(void *),
- GFP_KERNEL);
- if (!arr)
- return -ENOMEM;
-
- RCU_INIT_POINTER(s->memcg_params.memcg_caches, arr);
- return 0;
-}
-
-static void destroy_memcg_params(struct kmem_cache *s)
-{
- if (is_root_cache(s)) {
- kvfree(rcu_access_pointer(s->memcg_params.memcg_caches));
- } else {
- mem_cgroup_put(s->memcg_params.memcg);
- WRITE_ONCE(s->memcg_params.memcg, NULL);
- percpu_ref_exit(&s->memcg_params.refcnt);
- }
-}
-
-static void free_memcg_params(struct rcu_head *rcu)
-{
- struct memcg_cache_array *old;
-
- old = container_of(rcu, struct memcg_cache_array, rcu);
- kvfree(old);
-}
-
-static int update_memcg_params(struct kmem_cache *s, int new_array_size)
-{
- struct memcg_cache_array *old, *new;
-
- new = kvzalloc(sizeof(struct memcg_cache_array) +
- new_array_size * sizeof(void *), GFP_KERNEL);
- if (!new)
- return -ENOMEM;
-
- old = rcu_dereference_protected(s->memcg_params.memcg_caches,
- lockdep_is_held(&slab_mutex));
- if (old)
- memcpy(new->entries, old->entries,
- memcg_nr_cache_ids * sizeof(void *));
-
- rcu_assign_pointer(s->memcg_params.memcg_caches, new);
- if (old)
- call_rcu(&old->rcu, free_memcg_params);
- return 0;
-}
-
-int memcg_update_all_caches(int num_memcgs)
-{
- struct kmem_cache *s;
- int ret = 0;
-
- mutex_lock(&slab_mutex);
- list_for_each_entry(s, &slab_root_caches, root_caches_node) {
- ret = update_memcg_params(s, num_memcgs);
- /*
- * Instead of freeing the memory, we'll just leave the caches
- * up to this point in an updated state.
- */
- if (ret)
- break;
- }
- mutex_unlock(&slab_mutex);
- return ret;
+ else
+ slab_init_memcg_params(s);
}
-void memcg_link_cache(struct kmem_cache *s, struct mem_cgroup *memcg)
+void memcg_link_cache(struct kmem_cache *s)
{
- if (is_root_cache(s)) {
+ if (is_root_cache(s))
list_add(&s->root_caches_node, &slab_root_caches);
- } else {
- css_get(&memcg->css);
- s->memcg_params.memcg = memcg;
- list_add(&s->memcg_params.children_node,
- &s->memcg_params.root_cache->memcg_params.children);
- list_add(&s->memcg_params.kmem_caches_node,
- &s->memcg_params.memcg->kmem_caches);
- }
}
static void memcg_unlink_cache(struct kmem_cache *s)
{
- if (is_root_cache(s)) {
+ if (is_root_cache(s))
list_del(&s->root_caches_node);
- } else {
- list_del(&s->memcg_params.children_node);
- list_del(&s->memcg_params.kmem_caches_node);
- }
}
#else
-static inline int init_memcg_params(struct kmem_cache *s,
- struct kmem_cache *root_cache)
-{
- return 0;
-}
-
-static inline void destroy_memcg_params(struct kmem_cache *s)
+static inline void init_memcg_params(struct kmem_cache *s,
+ struct kmem_cache *root_cache)
{
}
@@ -380,7 +275,7 @@ static struct kmem_cache *create_cache(const char *name,
unsigned int object_size, unsigned int align,
slab_flags_t flags, unsigned int useroffset,
unsigned int usersize, void (*ctor)(void *),
- struct mem_cgroup *memcg, struct kmem_cache *root_cache)
+ struct kmem_cache *root_cache)
{
struct kmem_cache *s;
int err;
@@ -400,24 +295,20 @@ static struct kmem_cache *create_cache(const char *name,
s->useroffset = useroffset;
s->usersize = usersize;
- err = init_memcg_params(s, root_cache);
- if (err)
- goto out_free_cache;
-
+ init_memcg_params(s, root_cache);
err = __kmem_cache_create(s, flags);
if (err)
goto out_free_cache;
s->refcount = 1;
list_add(&s->list, &slab_caches);
- memcg_link_cache(s, memcg);
+ memcg_link_cache(s);
out:
if (err)
return ERR_PTR(err);
return s;
out_free_cache:
- destroy_memcg_params(s);
kmem_cache_free(kmem_cache, s);
goto out;
}
@@ -504,7 +395,7 @@ kmem_cache_create_usercopy(const char *name,
s = create_cache(cache_name, size,
calculate_alignment(flags, align, size),
- flags, useroffset, usersize, ctor, NULL, NULL);
+ flags, useroffset, usersize, ctor, NULL);
if (IS_ERR(s)) {
err = PTR_ERR(s);
kfree_const(cache_name);
@@ -629,51 +520,27 @@ static int shutdown_cache(struct kmem_cache *s)
#ifdef CONFIG_MEMCG_KMEM
/*
- * memcg_create_kmem_cache - Create a cache for a memory cgroup.
- * @memcg: The memory cgroup the new cache is for.
+ * memcg_create_kmem_cache - Create a cache for non-root memory cgroups.
* @root_cache: The parent of the new cache.
*
* This function attempts to create a kmem cache that will serve allocation
- * requests going from @memcg to @root_cache. The new cache inherits properties
- * from its parent.
+ * requests going all non-root memory cgroups to @root_cache. The new cache
+ * inherits properties from its parent.
*/
-void memcg_create_kmem_cache(struct mem_cgroup *memcg,
- struct kmem_cache *root_cache)
+void memcg_create_kmem_cache(struct kmem_cache *root_cache)
{
- static char memcg_name_buf[NAME_MAX + 1]; /* protected by slab_mutex */
- struct cgroup_subsys_state *css = &memcg->css;
- struct memcg_cache_array *arr;
struct kmem_cache *s = NULL;
char *cache_name;
- int idx;
get_online_cpus();
get_online_mems();
mutex_lock(&slab_mutex);
- /*
- * The memory cgroup could have been offlined while the cache
- * creation work was pending.
- */
- if (memcg->kmem_state != KMEM_ONLINE)
+ if (root_cache->memcg_params.memcg_cache)
goto out_unlock;
- idx = memcg_cache_id(memcg);
- arr = rcu_dereference_protected(root_cache->memcg_params.memcg_caches,
- lockdep_is_held(&slab_mutex));
-
- /*
- * Since per-memcg caches are created asynchronously on first
- * allocation (see memcg_kmem_get_cache()), several threads can try to
- * create the same cache, but only one of them may succeed.
- */
- if (arr->entries[idx])
- goto out_unlock;
-
- cgroup_name(css->cgroup, memcg_name_buf, sizeof(memcg_name_buf));
- cache_name = kasprintf(GFP_KERNEL, "%s(%llu:%s)", root_cache->name,
- css->serial_nr, memcg_name_buf);
+ cache_name = kasprintf(GFP_KERNEL, "%s-memcg", root_cache->name);
if (!cache_name)
goto out_unlock;
@@ -681,7 +548,7 @@ void memcg_create_kmem_cache(struct mem_cgroup *memcg,
root_cache->align,
root_cache->flags & CACHE_CREATE_MASK,
root_cache->useroffset, root_cache->usersize,
- root_cache->ctor, memcg, root_cache);
+ root_cache->ctor, root_cache);
/*
* If we could not create a memcg cache, do not complain, because
* that's not critical at all as we can always proceed with the root
@@ -698,7 +565,7 @@ void memcg_create_kmem_cache(struct mem_cgroup *memcg,
* initialized.
*/
smp_wmb();
- arr->entries[idx] = s;
+ root_cache->memcg_params.memcg_cache = s;
out_unlock:
mutex_unlock(&slab_mutex);
@@ -707,197 +574,18 @@ void memcg_create_kmem_cache(struct mem_cgroup *memcg,
put_online_cpus();
}
-static void kmemcg_workfn(struct work_struct *work)
-{
- struct kmem_cache *s = container_of(work, struct kmem_cache,
- memcg_params.work);
-
- get_online_cpus();
- get_online_mems();
-
- mutex_lock(&slab_mutex);
- s->memcg_params.work_fn(s);
- mutex_unlock(&slab_mutex);
-
- put_online_mems();
- put_online_cpus();
-}
-
-static void kmemcg_rcufn(struct rcu_head *head)
-{
- struct kmem_cache *s = container_of(head, struct kmem_cache,
- memcg_params.rcu_head);
-
- /*
- * We need to grab blocking locks. Bounce to ->work. The
- * work item shares the space with the RCU head and can't be
- * initialized earlier.
- */
- INIT_WORK(&s->memcg_params.work, kmemcg_workfn);
- queue_work(memcg_kmem_cache_wq, &s->memcg_params.work);
-}
-
-static void kmemcg_cache_shutdown_fn(struct kmem_cache *s)
-{
- WARN_ON(shutdown_cache(s));
-}
-
-static void kmemcg_cache_shutdown(struct percpu_ref *percpu_ref)
-{
- struct kmem_cache *s = container_of(percpu_ref, struct kmem_cache,
- memcg_params.refcnt);
- unsigned long flags;
-
- spin_lock_irqsave(&memcg_kmem_wq_lock, flags);
- if (s->memcg_params.root_cache->memcg_params.dying)
- goto unlock;
-
- s->memcg_params.work_fn = kmemcg_cache_shutdown_fn;
- INIT_WORK(&s->memcg_params.work, kmemcg_workfn);
- queue_work(memcg_kmem_cache_wq, &s->memcg_params.work);
-
-unlock:
- spin_unlock_irqrestore(&memcg_kmem_wq_lock, flags);
-}
-
-static void kmemcg_cache_deactivate_after_rcu(struct kmem_cache *s)
-{
- __kmemcg_cache_deactivate_after_rcu(s);
- percpu_ref_kill(&s->memcg_params.refcnt);
-}
-
-static void kmemcg_cache_deactivate(struct kmem_cache *s)
-{
- if (WARN_ON_ONCE(is_root_cache(s)))
- return;
-
- __kmemcg_cache_deactivate(s);
- s->flags |= SLAB_DEACTIVATED;
-
- /*
- * memcg_kmem_wq_lock is used to synchronize memcg_params.dying
- * flag and make sure that no new kmem_cache deactivation tasks
- * are queued (see flush_memcg_workqueue() ).
- */
- spin_lock_irq(&memcg_kmem_wq_lock);
- if (s->memcg_params.root_cache->memcg_params.dying)
- goto unlock;
-
- s->memcg_params.work_fn = kmemcg_cache_deactivate_after_rcu;
- call_rcu(&s->memcg_params.rcu_head, kmemcg_rcufn);
-unlock:
- spin_unlock_irq(&memcg_kmem_wq_lock);
-}
-
-void memcg_deactivate_kmem_caches(struct mem_cgroup *memcg,
- struct mem_cgroup *parent)
-{
- int idx;
- struct memcg_cache_array *arr;
- struct kmem_cache *s, *c;
- unsigned int nr_reparented;
-
- idx = memcg_cache_id(memcg);
-
- get_online_cpus();
- get_online_mems();
-
- mutex_lock(&slab_mutex);
- list_for_each_entry(s, &slab_root_caches, root_caches_node) {
- arr = rcu_dereference_protected(s->memcg_params.memcg_caches,
- lockdep_is_held(&slab_mutex));
- c = arr->entries[idx];
- if (!c)
- continue;
-
- kmemcg_cache_deactivate(c);
- arr->entries[idx] = NULL;
- }
- nr_reparented = 0;
- list_for_each_entry(s, &memcg->kmem_caches,
- memcg_params.kmem_caches_node) {
- WRITE_ONCE(s->memcg_params.memcg, parent);
- css_put(&memcg->css);
- nr_reparented++;
- }
- if (nr_reparented) {
- list_splice_init(&memcg->kmem_caches,
- &parent->kmem_caches);
- css_get_many(&parent->css, nr_reparented);
- }
- mutex_unlock(&slab_mutex);
-
- put_online_mems();
- put_online_cpus();
-}
-
static int shutdown_memcg_caches(struct kmem_cache *s)
{
- struct memcg_cache_array *arr;
- struct kmem_cache *c, *c2;
- LIST_HEAD(busy);
- int i;
-
BUG_ON(!is_root_cache(s));
- /*
- * First, shutdown active caches, i.e. caches that belong to online
- * memory cgroups.
- */
- arr = rcu_dereference_protected(s->memcg_params.memcg_caches,
- lockdep_is_held(&slab_mutex));
- for_each_memcg_cache_index(i) {
- c = arr->entries[i];
- if (!c)
- continue;
- if (shutdown_cache(c))
- /*
- * The cache still has objects. Move it to a temporary
- * list so as not to try to destroy it for a second
- * time while iterating over inactive caches below.
- */
- list_move(&c->memcg_params.children_node, &busy);
- else
- /*
- * The cache is empty and will be destroyed soon. Clear
- * the pointer to it in the memcg_caches array so that
- * it will never be accessed even if the root cache
- * stays alive.
- */
- arr->entries[i] = NULL;
- }
-
- /*
- * Second, shutdown all caches left from memory cgroups that are now
- * offline.
- */
- list_for_each_entry_safe(c, c2, &s->memcg_params.children,
- memcg_params.children_node)
- shutdown_cache(c);
-
- list_splice(&busy, &s->memcg_params.children);
+ if (s->memcg_params.memcg_cache)
+ WARN_ON(shutdown_cache(s->memcg_params.memcg_cache));
- /*
- * A cache being destroyed must be empty. In particular, this means
- * that all per memcg caches attached to it must be empty too.
- */
- if (!list_empty(&s->memcg_params.children))
- return -EBUSY;
return 0;
}
static void flush_memcg_workqueue(struct kmem_cache *s)
{
- spin_lock_irq(&memcg_kmem_wq_lock);
- s->memcg_params.dying = true;
- spin_unlock_irq(&memcg_kmem_wq_lock);
-
- /*
- * SLAB and SLUB deactivate the kmem_caches through call_rcu. Make
- * sure all registered rcu callbacks have been invoked.
- */
- rcu_barrier();
-
/*
* SLAB and SLUB create memcg kmem_caches through workqueue and SLUB
* deactivates the memcg kmem_caches through workqueue. Make sure all
@@ -905,18 +593,6 @@ static void flush_memcg_workqueue(struct kmem_cache *s)
*/
if (likely(memcg_kmem_cache_wq))
flush_workqueue(memcg_kmem_cache_wq);
-
- /*
- * If we're racing with children kmem_cache deactivation, it might
- * take another rcu grace period to complete their destruction.
- * At this moment the corresponding percpu_ref_kill() call should be
- * done, but it might take another rcu grace period to complete
- * switching to the atomic mode.
- * Please, note that we check without grabbing the slab_mutex. It's safe
- * because at this moment the children list can't grow.
- */
- if (!list_empty(&s->memcg_params.children))
- rcu_barrier();
}
#else
static inline int shutdown_memcg_caches(struct kmem_cache *s)
@@ -932,7 +608,6 @@ static inline void flush_memcg_workqueue(struct kmem_cache *s)
void slab_kmem_cache_release(struct kmem_cache *s)
{
__kmem_cache_release(s);
- destroy_memcg_params(s);
kfree_const(s->name);
kmem_cache_free(kmem_cache, s);
}
@@ -996,7 +671,7 @@ int kmem_cache_shrink(struct kmem_cache *cachep)
EXPORT_SYMBOL(kmem_cache_shrink);
/**
- * kmem_cache_shrink_all - shrink a cache and all memcg caches for root cache
+ * kmem_cache_shrink_all - shrink root and memcg caches
* @s: The cache pointer
*/
void kmem_cache_shrink_all(struct kmem_cache *s)
@@ -1013,21 +688,11 @@ void kmem_cache_shrink_all(struct kmem_cache *s)
kasan_cache_shrink(s);
__kmem_cache_shrink(s);
- /*
- * We have to take the slab_mutex to protect from the memcg list
- * modification.
- */
- mutex_lock(&slab_mutex);
- for_each_memcg_cache(c, s) {
- /*
- * Don't need to shrink deactivated memcg caches.
- */
- if (s->flags & SLAB_DEACTIVATED)
- continue;
+ c = memcg_cache(s);
+ if (c) {
kasan_cache_shrink(c);
__kmem_cache_shrink(c);
}
- mutex_unlock(&slab_mutex);
put_online_mems();
put_online_cpus();
}
@@ -1082,7 +747,7 @@ struct kmem_cache *__init create_kmalloc_cache(const char *name,
create_boot_cache(s, name, size, flags, useroffset, usersize);
list_add(&s->list, &slab_caches);
- memcg_link_cache(s, NULL);
+ memcg_link_cache(s);
s->refcount = 1;
return s;
}
@@ -1445,7 +1110,8 @@ memcg_accumulate_slabinfo(struct kmem_cache *s, struct slabinfo *info)
if (!is_root_cache(s))
return;
- for_each_memcg_cache(c, s) {
+ c = memcg_cache(s);
+ if (c) {
memset(&sinfo, 0, sizeof(sinfo));
get_slabinfo(c, &sinfo);
@@ -1576,7 +1242,7 @@ module_init(slab_proc_init);
#if defined(CONFIG_DEBUG_FS) && defined(CONFIG_MEMCG_KMEM)
/*
- * Display information about kmem caches that have child memcg caches.
+ * Display information about kmem caches that have memcg cache.
*/
static int memcg_slabinfo_show(struct seq_file *m, void *unused)
{
@@ -1588,9 +1254,9 @@ static int memcg_slabinfo_show(struct seq_file *m, void *unused)
seq_puts(m, " <active_slabs> <num_slabs>\n");
list_for_each_entry(s, &slab_root_caches, root_caches_node) {
/*
- * Skip kmem caches that don't have any memcg children.
+ * Skip kmem caches that don't have the memcg cache.
*/
- if (list_empty(&s->memcg_params.children))
+ if (!s->memcg_params.memcg_cache)
continue;
memset(&sinfo, 0, sizeof(sinfo));
@@ -1599,23 +1265,13 @@ static int memcg_slabinfo_show(struct seq_file *m, void *unused)
cache_name(s), sinfo.active_objs, sinfo.num_objs,
sinfo.active_slabs, sinfo.num_slabs);
- for_each_memcg_cache(c, s) {
- struct cgroup_subsys_state *css;
- char *status = "";
-
- css = &c->memcg_params.memcg->css;
- if (!(css->flags & CSS_ONLINE))
- status = ":dead";
- else if (c->flags & SLAB_DEACTIVATED)
- status = ":deact";
-
- memset(&sinfo, 0, sizeof(sinfo));
- get_slabinfo(c, &sinfo);
- seq_printf(m, "%-17s %4d%-6s %6lu %6lu %6lu %6lu\n",
- cache_name(c), css->id, status,
- sinfo.active_objs, sinfo.num_objs,
- sinfo.active_slabs, sinfo.num_slabs);
- }
+ c = s->memcg_params.memcg_cache;
+ memset(&sinfo, 0, sizeof(sinfo));
+ get_slabinfo(c, &sinfo);
+ seq_printf(m, "%-17s %4d %6lu %6lu %6lu %6lu\n",
+ cache_name(c), root_mem_cgroup->css.id,
+ sinfo.active_objs, sinfo.num_objs,
+ sinfo.active_slabs, sinfo.num_slabs);
}
mutex_unlock(&slab_mutex);
return 0;
diff --git a/mm/slub.c b/mm/slub.c
index 67ae40fcfcda..3e4cb081af5d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4117,36 +4117,6 @@ int __kmem_cache_shrink(struct kmem_cache *s)
return ret;
}
-#ifdef CONFIG_MEMCG
-void __kmemcg_cache_deactivate_after_rcu(struct kmem_cache *s)
-{
- /*
- * Called with all the locks held after a sched RCU grace period.
- * Even if @s becomes empty after shrinking, we can't know that @s
- * doesn't have allocations already in-flight and thus can't
- * destroy @s until the associated memcg is released.
- *
- * However, let's remove the sysfs files for empty caches here.
- * Each cache has a lot of interface files which aren't
- * particularly useful for empty draining caches; otherwise, we can
- * easily end up with millions of unnecessary sysfs files on
- * systems which have a lot of memory and transient cgroups.
- */
- if (!__kmem_cache_shrink(s))
- sysfs_slab_remove(s);
-}
-
-void __kmemcg_cache_deactivate(struct kmem_cache *s)
-{
- /*
- * Disable empty slabs caching. Used to avoid pinning offline
- * memory cgroups by kmem pages that can be freed.
- */
- slub_set_cpu_partial(s, 0);
- s->min_partial = 0;
-}
-#endif /* CONFIG_MEMCG */
-
static int slab_mem_going_offline_callback(void *arg)
{
struct kmem_cache *s;
@@ -4303,7 +4273,7 @@ static struct kmem_cache * __init bootstrap(struct kmem_cache *static_cache)
}
slab_init_memcg_params(s);
list_add(&s->list, &slab_caches);
- memcg_link_cache(s, NULL);
+ memcg_link_cache(s);
return s;
}
@@ -4371,7 +4341,8 @@ __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
s->object_size = max(s->object_size, size);
s->inuse = max(s->inuse, ALIGN(size, sizeof(void *)));
- for_each_memcg_cache(c, s) {
+ c = memcg_cache(s);
+ if (c) {
c->object_size = s->object_size;
c->inuse = max(c->inuse, ALIGN(size, sizeof(void *)));
}
@@ -5626,7 +5597,8 @@ static ssize_t slab_attr_store(struct kobject *kobj,
* directly either failed or succeeded, in which case we loop
* through the descendants with best-effort propagation.
*/
- for_each_memcg_cache(c, s)
+ c = memcg_cache(s);
+ if (c)
attribute->store(c, buf, len);
mutex_unlock(&slab_mutex);
}
--
2.25.3
To implement per-object slab memory accounting, we need to
convert slab vmstat counters to bytes. Out of the four levels of
counters (global, per-node, per-memcg and per-lruvec), only the last
two require byte-sized counters: global and per-node counters keep
counting the number of slab pages, while per-memcg and per-lruvec
counters track the amount of memory taken by charged slab objects.
Converting all vmstat counters, or even just all slab counters, to
bytes would introduce additional overhead. So instead let's store
global and per-node counters in pages, and memcg and lruvec counters
in bytes.
To keep the API clean, all access helpers (on both the read and write
sides) deal with bytes. To avoid back-and-forth conversions, a new
flavor of helpers is introduced, which always returns values in pages:
node_page_state_pages() and global_node_page_state_pages().
The new helpers just read the raw values; the old helpers are simple
wrappers, which perform a conversion if the vmstat item is byte-sized.
Because nobody actually needs byte values at the moment, they contain
WARN_ON_ONCE() checks to flag inappropriate use.
Thanks to Johannes Weiner for the idea of having the byte-sized API
on top of the page-sized internal storage.
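A minimal userspace model of this split (not kernel code; PAGE_SHIFT is
hardcoded and the helper names are invented for the example, but the
conversions mirror __mod_node_page_state() and global_node_page_state()
in the diff below):

  #include <assert.h>
  #include <stdio.h>

  #define PAGE_SHIFT      12              /* assumed for the example */
  #define PAGE_SIZE       (1L << PAGE_SHIFT)

  static long vm_stat;                    /* internal storage always stays in pages */
  static const int item_in_bytes = 1;     /* models vmstat_item_in_bytes(); false for
                                             every item at this point in the series */

  /* write side: byte-sized items take deltas in bytes, stored as pages */
  static void mod_state(long delta)
  {
          if (item_in_bytes) {
                  assert((delta & (PAGE_SIZE - 1)) == 0); /* whole pages only */
                  delta >>= PAGE_SHIFT;
          }
          vm_stat += delta;
  }

  /* new flavor: returns the raw, page-sized value */
  static long state_pages(void)
  {
          return vm_stat;
  }

  /*
   * old helper: converts back to bytes for byte-sized items
   * (the kernel version additionally warns, as nobody needs the bytes yet)
   */
  static long state(void)
  {
          long x = state_pages();

          return item_in_bytes ? x << PAGE_SHIFT : x;
  }

  int main(void)
  {
          mod_state(3 * PAGE_SIZE);
          printf("%ld pages, %ld bytes\n", state_pages(), state());
          return 0;
  }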
Signed-off-by: Roman Gushchin <[email protected]>
---
drivers/base/node.c | 2 +-
include/linux/mmzone.h | 5 +++++
include/linux/vmstat.h | 16 +++++++++++++++-
mm/memcontrol.c | 14 ++++++++++----
mm/vmstat.c | 33 +++++++++++++++++++++++++++++----
5 files changed, 60 insertions(+), 10 deletions(-)
diff --git a/drivers/base/node.c b/drivers/base/node.c
index 10d7e818e118..9d6afb7d2ccd 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -507,7 +507,7 @@ static ssize_t node_read_vmstat(struct device *dev,
for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
n += sprintf(buf+n, "%s %lu\n", node_stat_name(i),
- node_page_state(pgdat, i));
+ node_page_state_pages(pgdat, i));
return n;
}
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index c1fbda9ddd1f..22fe65edf425 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -204,6 +204,11 @@ enum node_stat_item {
NR_VM_NODE_STAT_ITEMS
};
+static __always_inline bool vmstat_item_in_bytes(enum node_stat_item item)
+{
+ return false;
+}
+
/*
* We do arithmetic on the LRU lists in various places in the code,
* so it is important to keep the active lists LRU_ACTIVE higher in
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index 292485f3d24d..117763827cd0 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -190,7 +190,8 @@ static inline unsigned long global_zone_page_state(enum zone_stat_item item)
return x;
}
-static inline unsigned long global_node_page_state(enum node_stat_item item)
+static inline
+unsigned long global_node_page_state_pages(enum node_stat_item item)
{
long x = atomic_long_read(&vm_node_stat[item]);
#ifdef CONFIG_SMP
@@ -200,6 +201,16 @@ static inline unsigned long global_node_page_state(enum node_stat_item item)
return x;
}
+static inline unsigned long global_node_page_state(enum node_stat_item item)
+{
+ unsigned long x = global_node_page_state_pages(item);
+
+ if (WARN_ON_ONCE(vmstat_item_in_bytes(item)))
+ return x << PAGE_SHIFT;
+
+ return x;
+}
+
static inline unsigned long zone_page_state(struct zone *zone,
enum zone_stat_item item)
{
@@ -240,9 +251,12 @@ extern unsigned long sum_zone_node_page_state(int node,
extern unsigned long sum_zone_numa_state(int node, enum numa_stat_item item);
extern unsigned long node_page_state(struct pglist_data *pgdat,
enum node_stat_item item);
+extern unsigned long node_page_state_pages(struct pglist_data *pgdat,
+ enum node_stat_item item);
#else
#define sum_zone_node_page_state(node, item) global_zone_page_state(item)
#define node_page_state(node, item) global_node_page_state(item)
+#define node_page_state_pages(node, item) global_node_page_state_pages(item)
#endif /* CONFIG_NUMA */
#ifdef CONFIG_SMP
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index f6ff20095105..5f700fa8b78c 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -681,13 +681,16 @@ mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_node *mctz)
*/
void __mod_memcg_state(struct mem_cgroup *memcg, int idx, int val)
{
- long x;
+ long x, threshold = MEMCG_CHARGE_BATCH;
if (mem_cgroup_disabled())
return;
+ if (vmstat_item_in_bytes(idx))
+ threshold <<= PAGE_SHIFT;
+
x = val + __this_cpu_read(memcg->vmstats_percpu->stat[idx]);
- if (unlikely(abs(x) > MEMCG_CHARGE_BATCH)) {
+ if (unlikely(abs(x) > threshold)) {
struct mem_cgroup *mi;
/*
@@ -719,7 +722,7 @@ void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
pg_data_t *pgdat = lruvec_pgdat(lruvec);
struct mem_cgroup_per_node *pn;
struct mem_cgroup *memcg;
- long x;
+ long x, threshold = MEMCG_CHARGE_BATCH;
pn = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
memcg = pn->memcg;
@@ -730,8 +733,11 @@ void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
/* Update lruvec */
__this_cpu_add(pn->lruvec_stat_local->count[idx], val);
+ if (vmstat_item_in_bytes(idx))
+ threshold <<= PAGE_SHIFT;
+
x = val + __this_cpu_read(pn->lruvec_stat_cpu->count[idx]);
- if (unlikely(abs(x) > MEMCG_CHARGE_BATCH)) {
+ if (unlikely(abs(x) > threshold)) {
struct mem_cgroup_per_node *pi;
for (pi = pn; pi; pi = parent_nodeinfo(pi, pgdat->node_id))
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 6fd1407f4632..7ac13f6d189a 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -341,6 +341,11 @@ void __mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
long x;
long t;
+ if (vmstat_item_in_bytes(item)) {
+ WARN_ON(delta & (PAGE_SIZE - 1));
+ delta >>= PAGE_SHIFT;
+ }
+
x = delta + __this_cpu_read(*p);
t = __this_cpu_read(pcp->stat_threshold);
@@ -398,6 +403,8 @@ void __inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
s8 __percpu *p = pcp->vm_node_stat_diff + item;
s8 v, t;
+ WARN_ON_ONCE(vmstat_item_in_bytes(item));
+
v = __this_cpu_inc_return(*p);
t = __this_cpu_read(pcp->stat_threshold);
if (unlikely(v > t)) {
@@ -442,6 +449,8 @@ void __dec_node_state(struct pglist_data *pgdat, enum node_stat_item item)
s8 __percpu *p = pcp->vm_node_stat_diff + item;
s8 v, t;
+ WARN_ON_ONCE(vmstat_item_in_bytes(item));
+
v = __this_cpu_dec_return(*p);
t = __this_cpu_read(pcp->stat_threshold);
if (unlikely(v < - t)) {
@@ -541,6 +550,11 @@ static inline void mod_node_state(struct pglist_data *pgdat,
s8 __percpu *p = pcp->vm_node_stat_diff + item;
long o, n, t, z;
+ if (vmstat_item_in_bytes(item)) {
+ WARN_ON_ONCE(delta & (PAGE_SIZE - 1));
+ delta >>= PAGE_SHIFT;
+ }
+
do {
z = 0; /* overflow to node counters */
@@ -989,8 +1003,8 @@ unsigned long sum_zone_numa_state(int node,
/*
* Determine the per node value of a stat item.
*/
-unsigned long node_page_state(struct pglist_data *pgdat,
- enum node_stat_item item)
+unsigned long node_page_state_pages(struct pglist_data *pgdat,
+ enum node_stat_item item)
{
long x = atomic_long_read(&pgdat->vm_stat[item]);
#ifdef CONFIG_SMP
@@ -999,6 +1013,17 @@ unsigned long node_page_state(struct pglist_data *pgdat,
#endif
return x;
}
+
+unsigned long node_page_state(struct pglist_data *pgdat,
+ enum node_stat_item item)
+{
+ unsigned long x = node_page_state_pages(pgdat, item);
+
+ if (WARN_ON_ONCE(vmstat_item_in_bytes(item)))
+ return x << PAGE_SHIFT;
+
+ return x;
+}
#endif
#ifdef CONFIG_COMPACTION
@@ -1571,7 +1596,7 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
seq_printf(m, "\n per-node stats");
for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++) {
seq_printf(m, "\n %-12s %lu", node_stat_name(i),
- node_page_state(pgdat, i));
+ node_page_state_pages(pgdat, i));
}
}
seq_printf(m,
@@ -1692,7 +1717,7 @@ static void *vmstat_start(struct seq_file *m, loff_t *pos)
#endif
for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
- v[i] = global_node_page_state(i);
+ v[i] = global_node_page_state_pages(i);
v += NR_VM_NODE_STAT_ITEMS;
global_dirty_limits(v + NR_DIRTY_BG_THRESHOLD,
--
2.25.3
Deprecate memory.kmem.slabinfo.
An empty file will be presented if the corresponding config options are
enabled.
The interface is implementation-dependent, isn't present in cgroup v2,
and is generally useful only for core mm debugging purposes. In other
words, it doesn't provide any value for the vast majority of users.
A drgn-based replacement can be found in tools/cgroup/memcg_slabinfo.py.
It supports both cgroup v1 and v2, mimics the memory.kmem.slabinfo output
and also makes it possible to get any additional information without
needing to recompile the kernel.
If a drgn-based solution is too slow for a task, a bpf-based tracing
tool can be used instead; it can easily keep track of all slab
allocations belonging to a memory cgroup.
Signed-off-by: Roman Gushchin <[email protected]>
---
mm/memcontrol.c | 3 ---
mm/slab_common.c | 31 ++++---------------------------
2 files changed, 4 insertions(+), 30 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index deb6ceae7577..f957b029a62f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5089,9 +5089,6 @@ static struct cftype mem_cgroup_legacy_files[] = {
(defined(CONFIG_SLAB) || defined(CONFIG_SLUB_DEBUG))
{
.name = "kmem.slabinfo",
- .seq_start = memcg_slab_start,
- .seq_next = memcg_slab_next,
- .seq_stop = memcg_slab_stop,
.seq_show = memcg_slab_show,
},
#endif
diff --git a/mm/slab_common.c b/mm/slab_common.c
index b578ae29c743..3c89c2adc930 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1523,35 +1523,12 @@ void dump_unreclaimable_slab(void)
}
#if defined(CONFIG_MEMCG_KMEM)
-void *memcg_slab_start(struct seq_file *m, loff_t *pos)
-{
- struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
-
- mutex_lock(&slab_mutex);
- return seq_list_start(&memcg->kmem_caches, *pos);
-}
-
-void *memcg_slab_next(struct seq_file *m, void *p, loff_t *pos)
-{
- struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
-
- return seq_list_next(p, &memcg->kmem_caches, pos);
-}
-
-void memcg_slab_stop(struct seq_file *m, void *p)
-{
- mutex_unlock(&slab_mutex);
-}
-
int memcg_slab_show(struct seq_file *m, void *p)
{
- struct kmem_cache *s = list_entry(p, struct kmem_cache,
- memcg_params.kmem_caches_node);
- struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
-
- if (p == memcg->kmem_caches.next)
- print_slabinfo_header(m);
- cache_show(s, m);
+ /*
+ * Deprecated.
+ * Please, take a look at tools/cgroup/slabinfo.py .
+ */
return 0;
}
#endif
--
2.25.3
Currently there are two lists of kmem_caches:
1) slab_caches, which contains all kmem_caches,
2) slab_root_caches, which contains only root kmem_caches.
There is also some preprocessor magic to have a single list
if CONFIG_MEMCG_KMEM isn't enabled.
It was required earlier because the number of non-root kmem_caches
was proportional to the number of memory cgroups and could grow
really large. Now that it cannot exceed the number of root
kmem_caches, there is really no reason to maintain two lists.
We never iterate over the slab_root_caches list on any hot path,
so it's perfectly fine to iterate over slab_caches and filter out
non-root kmem_caches (see the sketch below).
This allows removing a lot of config-dependent code and two pointers
from the kmem_cache structure.
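The resulting iteration pattern, roughly sketched (not a literal quote
from the patch, but based on the is_root_cache() helper it keeps), is:

  struct kmem_cache *s;

  list_for_each_entry(s, &slab_caches, list) {
          if (!is_root_cache(s))
                  continue;       /* skip the per-root memcg cache */

          /* ... process the root kmem_cache ... */
  }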
Signed-off-by: Roman Gushchin <[email protected]>
---
mm/slab.c | 1 -
mm/slab.h | 17 -----------------
mm/slab_common.c | 37 ++++++++-----------------------------
mm/slub.c | 1 -
4 files changed, 8 insertions(+), 48 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index 17f781a5b62c..5e933f5e24db 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1239,7 +1239,6 @@ void __init kmem_cache_init(void)
nr_node_ids * sizeof(struct kmem_cache_node *),
SLAB_HWCACHE_ALIGN, 0, 0);
list_add(&kmem_cache->list, &slab_caches);
- memcg_link_cache(kmem_cache);
slab_state = PARTIAL;
/*
diff --git a/mm/slab.h b/mm/slab.h
index cbee6cb0a331..2958ca8d3159 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -44,14 +44,12 @@ struct kmem_cache {
*
* @memcg_cache: pointer to memcg kmem cache, used by all non-root memory
* cgroups.
- * @root_caches_node: list node for slab_root_caches list.
* @work: work struct used to create the non-root cache.
*/
struct memcg_cache_params {
struct kmem_cache *root_cache;
struct kmem_cache *memcg_cache;
- struct list_head __root_caches_node;
struct work_struct work;
};
#endif /* CONFIG_SLOB */
@@ -235,11 +233,6 @@ static inline int cache_vmstat_idx(struct kmem_cache *s)
}
#ifdef CONFIG_MEMCG_KMEM
-
-/* List of all root caches. */
-extern struct list_head slab_root_caches;
-#define root_caches_node memcg_params.__root_caches_node
-
static inline bool is_root_cache(struct kmem_cache *s)
{
return !s->memcg_params.root_cache;
@@ -414,14 +407,8 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s, struct page *page,
}
extern void slab_init_memcg_params(struct kmem_cache *);
-extern void memcg_link_cache(struct kmem_cache *s);
#else /* CONFIG_MEMCG_KMEM */
-
-/* If !memcg, all caches are root. */
-#define slab_root_caches slab_caches
-#define root_caches_node list
-
static inline bool is_root_cache(struct kmem_cache *s)
{
return true;
@@ -490,10 +477,6 @@ static inline void slab_init_memcg_params(struct kmem_cache *s)
{
}
-static inline void memcg_link_cache(struct kmem_cache *s)
-{
-}
-
#endif /* CONFIG_MEMCG_KMEM */
static inline struct kmem_cache *virt_to_cache(const void *obj)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index f8874a159637..c045afb9724e 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -129,9 +129,6 @@ int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t nr,
}
#ifdef CONFIG_MEMCG_KMEM
-
-LIST_HEAD(slab_root_caches);
-
static void memcg_kmem_cache_create_func(struct work_struct *work)
{
struct kmem_cache *cachep = container_of(work, struct kmem_cache,
@@ -154,27 +151,11 @@ static void init_memcg_params(struct kmem_cache *s,
else
slab_init_memcg_params(s);
}
-
-void memcg_link_cache(struct kmem_cache *s)
-{
- if (is_root_cache(s))
- list_add(&s->root_caches_node, &slab_root_caches);
-}
-
-static void memcg_unlink_cache(struct kmem_cache *s)
-{
- if (is_root_cache(s))
- list_del(&s->root_caches_node);
-}
#else
static inline void init_memcg_params(struct kmem_cache *s,
struct kmem_cache *root_cache)
{
}
-
-static inline void memcg_unlink_cache(struct kmem_cache *s)
-{
-}
#endif /* CONFIG_MEMCG_KMEM */
/*
@@ -251,7 +232,7 @@ struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
if (flags & SLAB_NEVER_MERGE)
return NULL;
- list_for_each_entry_reverse(s, &slab_root_caches, root_caches_node) {
+ list_for_each_entry_reverse(s, &slab_caches, list) {
if (slab_unmergeable(s))
continue;
@@ -310,7 +291,6 @@ static struct kmem_cache *create_cache(const char *name,
s->refcount = 1;
list_add(&s->list, &slab_caches);
- memcg_link_cache(s);
out:
if (err)
return ERR_PTR(err);
@@ -505,7 +485,6 @@ static int shutdown_cache(struct kmem_cache *s)
if (__kmem_cache_shutdown(s) != 0)
return -EBUSY;
- memcg_unlink_cache(s);
list_del(&s->list);
if (s->flags & SLAB_TYPESAFE_BY_RCU) {
@@ -749,7 +728,6 @@ struct kmem_cache *__init create_kmalloc_cache(const char *name,
create_boot_cache(s, name, size, flags, useroffset, usersize);
list_add(&s->list, &slab_caches);
- memcg_link_cache(s);
s->refcount = 1;
return s;
}
@@ -1090,12 +1068,12 @@ static void print_slabinfo_header(struct seq_file *m)
void *slab_start(struct seq_file *m, loff_t *pos)
{
mutex_lock(&slab_mutex);
- return seq_list_start(&slab_root_caches, *pos);
+ return seq_list_start(&slab_caches, *pos);
}
void *slab_next(struct seq_file *m, void *p, loff_t *pos)
{
- return seq_list_next(p, &slab_root_caches, pos);
+ return seq_list_next(p, &slab_caches, pos);
}
void slab_stop(struct seq_file *m, void *p)
@@ -1148,11 +1126,12 @@ static void cache_show(struct kmem_cache *s, struct seq_file *m)
static int slab_show(struct seq_file *m, void *p)
{
- struct kmem_cache *s = list_entry(p, struct kmem_cache, root_caches_node);
+ struct kmem_cache *s = list_entry(p, struct kmem_cache, list);
- if (p == slab_root_caches.next)
+ if (p == slab_caches.next)
print_slabinfo_header(m);
- cache_show(s, m);
+ if (is_root_cache(s))
+ cache_show(s, m);
return 0;
}
@@ -1254,7 +1233,7 @@ static int memcg_slabinfo_show(struct seq_file *m, void *unused)
mutex_lock(&slab_mutex);
seq_puts(m, "# <name> <css_id[:dead|deact]> <active_objs> <num_objs>");
seq_puts(m, " <active_slabs> <num_slabs>\n");
- list_for_each_entry(s, &slab_root_caches, root_caches_node) {
+ list_for_each_entry(s, &slab_caches, list) {
/*
* Skip kmem caches that don't have the memcg cache.
*/
diff --git a/mm/slub.c b/mm/slub.c
index 3e4cb081af5d..799082723e77 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4273,7 +4273,6 @@ static struct kmem_cache * __init bootstrap(struct kmem_cache *static_cache)
}
slab_init_memcg_params(s);
list_add(&s->list, &slab_caches);
- memcg_link_cache(s);
return s;
}
--
2.25.3
Store the obj_cgroup pointer in the corresponding place of
page->obj_cgroups for each allocated non-root slab object.
Make sure that each allocated object holds a reference to obj_cgroup.
The objcg pointer is obtained by dereferencing memcg->objcg in
memcg_kmem_get_cache() and is passed from the pre_alloc hook to the
post_alloc hook. Then, in the case of successful allocation(s), it is
stored in the page->obj_cgroups vector.
The objcg-obtaining part looks a bit bulky now, but it will be simplified
by the next commits in the series.
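Schematically, the allocation path now looks as follows (a simplified
sketch of the hooks added in the diff below; locals are abbreviated and
error handling is omitted):

	struct obj_cgroup *objcg = NULL;

	/* pre_alloc: pick the memcg cache and take a reference to its objcg */
	s = slab_pre_alloc_hook(s, &objcg, size, flags);

	/* ... actual allocation of size objects into p[] ... */

	/* post_alloc: remember the owning objcg for every allocated object */
	for (i = 0; i < size; i++) {
		if (p[i]) {
			page = virt_to_head_page(p[i]);
			off = obj_to_index(s, page, p[i]);
			obj_cgroup_get(objcg);
			page_obj_cgroups(page)[off] = objcg;
		}
	}
	obj_cgroup_put(objcg);	/* drop the reference from the pre_alloc hook */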
Signed-off-by: Roman Gushchin <[email protected]>
---
include/linux/memcontrol.h | 3 +-
mm/memcontrol.c | 14 +++++++--
mm/slab.c | 18 +++++++-----
mm/slab.h | 60 ++++++++++++++++++++++++++++++++++----
mm/slub.c | 14 +++++----
5 files changed, 88 insertions(+), 21 deletions(-)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index bf1be842fd27..44b7d1244620 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1426,7 +1426,8 @@ static inline void memcg_set_shrinker_bit(struct mem_cgroup *memcg,
}
#endif
-struct kmem_cache *memcg_kmem_get_cache(struct kmem_cache *cachep);
+struct kmem_cache *memcg_kmem_get_cache(struct kmem_cache *cachep,
+ struct obj_cgroup **objcgp);
void memcg_kmem_put_cache(struct kmem_cache *cachep);
#ifdef CONFIG_MEMCG_KMEM
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 63826e460b3f..deb6ceae7577 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2964,7 +2964,8 @@ static inline bool memcg_kmem_bypass(void)
* done with it, memcg_kmem_put_cache() must be called to release the
* reference.
*/
-struct kmem_cache *memcg_kmem_get_cache(struct kmem_cache *cachep)
+struct kmem_cache *memcg_kmem_get_cache(struct kmem_cache *cachep,
+ struct obj_cgroup **objcgp)
{
struct mem_cgroup *memcg;
struct kmem_cache *memcg_cachep;
@@ -3020,8 +3021,17 @@ struct kmem_cache *memcg_kmem_get_cache(struct kmem_cache *cachep)
*/
if (unlikely(!memcg_cachep))
memcg_schedule_kmem_cache_create(memcg, cachep);
- else if (percpu_ref_tryget(&memcg_cachep->memcg_params.refcnt))
+ else if (percpu_ref_tryget(&memcg_cachep->memcg_params.refcnt)) {
+ struct obj_cgroup *objcg = rcu_dereference(memcg->objcg);
+
+ if (!objcg || !obj_cgroup_tryget(objcg)) {
+ percpu_ref_put(&memcg_cachep->memcg_params.refcnt);
+ goto out_unlock;
+ }
+
+ *objcgp = objcg;
cachep = memcg_cachep;
+ }
out_unlock:
rcu_read_unlock();
return cachep;
diff --git a/mm/slab.c b/mm/slab.c
index f2d67984595b..ad38fbae4042 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3223,9 +3223,10 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
unsigned long save_flags;
void *ptr;
int slab_node = numa_mem_id();
+ struct obj_cgroup *objcg = NULL;
flags &= gfp_allowed_mask;
- cachep = slab_pre_alloc_hook(cachep, flags);
+ cachep = slab_pre_alloc_hook(cachep, &objcg, 1, flags);
if (unlikely(!cachep))
return NULL;
@@ -3261,7 +3262,7 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
if (unlikely(slab_want_init_on_alloc(flags, cachep)) && ptr)
memset(ptr, 0, cachep->object_size);
- slab_post_alloc_hook(cachep, flags, 1, &ptr);
+ slab_post_alloc_hook(cachep, objcg, flags, 1, &ptr);
return ptr;
}
@@ -3302,9 +3303,10 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
{
unsigned long save_flags;
void *objp;
+ struct obj_cgroup *objcg = NULL;
flags &= gfp_allowed_mask;
- cachep = slab_pre_alloc_hook(cachep, flags);
+ cachep = slab_pre_alloc_hook(cachep, &objcg, 1, flags);
if (unlikely(!cachep))
return NULL;
@@ -3318,7 +3320,7 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
if (unlikely(slab_want_init_on_alloc(flags, cachep)) && objp)
memset(objp, 0, cachep->object_size);
- slab_post_alloc_hook(cachep, flags, 1, &objp);
+ slab_post_alloc_hook(cachep, objcg, flags, 1, &objp);
return objp;
}
@@ -3440,6 +3442,7 @@ void ___cache_free(struct kmem_cache *cachep, void *objp,
memset(objp, 0, cachep->object_size);
kmemleak_free_recursive(objp, cachep->flags);
objp = cache_free_debugcheck(cachep, objp, caller);
+ memcg_slab_free_hook(cachep, virt_to_head_page(objp), objp);
/*
* Skip calling cache_free_alien() when the platform is not numa.
@@ -3505,8 +3508,9 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
void **p)
{
size_t i;
+ struct obj_cgroup *objcg = NULL;
- s = slab_pre_alloc_hook(s, flags);
+ s = slab_pre_alloc_hook(s, &objcg, size, flags);
if (!s)
return 0;
@@ -3529,13 +3533,13 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
for (i = 0; i < size; i++)
memset(p[i], 0, s->object_size);
- slab_post_alloc_hook(s, flags, size, p);
+ slab_post_alloc_hook(s, objcg, flags, size, p);
/* FIXME: Trace call missing. Christoph would like a bulk variant */
return size;
error:
local_irq_enable();
cache_alloc_debugcheck_after_bulk(s, flags, i, p, _RET_IP_);
- slab_post_alloc_hook(s, flags, i, p);
+ slab_post_alloc_hook(s, objcg, flags, i, p);
__kmem_cache_free_bulk(s, i, p);
return 0;
}
diff --git a/mm/slab.h b/mm/slab.h
index 44def57f050e..525e09e05743 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -437,6 +437,41 @@ static inline void memcg_free_page_obj_cgroups(struct page *page)
page->obj_cgroups = NULL;
}
+static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
+ struct obj_cgroup *objcg,
+ size_t size, void **p)
+{
+ struct page *page;
+ unsigned long off;
+ size_t i;
+
+ for (i = 0; i < size; i++) {
+ if (likely(p[i])) {
+ page = virt_to_head_page(p[i]);
+ off = obj_to_index(s, page, p[i]);
+ obj_cgroup_get(objcg);
+ page_obj_cgroups(page)[off] = objcg;
+ }
+ }
+ obj_cgroup_put(objcg);
+ memcg_kmem_put_cache(s);
+}
+
+static inline void memcg_slab_free_hook(struct kmem_cache *s, struct page *page,
+ void *p)
+{
+ struct obj_cgroup *objcg;
+ unsigned int off;
+
+ if (!memcg_kmem_enabled() || is_root_cache(s))
+ return;
+
+ off = obj_to_index(s, page, p);
+ objcg = page_obj_cgroups(page)[off];
+ page_obj_cgroups(page)[off] = NULL;
+ obj_cgroup_put(objcg);
+}
+
extern void slab_init_memcg_params(struct kmem_cache *);
extern void memcg_link_cache(struct kmem_cache *s, struct mem_cgroup *memcg);
@@ -496,6 +531,17 @@ static inline void memcg_free_page_obj_cgroups(struct page *page)
{
}
+static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
+ struct obj_cgroup *objcg,
+ size_t size, void **p)
+{
+}
+
+static inline void memcg_slab_free_hook(struct kmem_cache *s, struct page *page,
+ void *p)
+{
+}
+
static inline void slab_init_memcg_params(struct kmem_cache *s)
{
}
@@ -604,7 +650,8 @@ static inline size_t slab_ksize(const struct kmem_cache *s)
}
static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
- gfp_t flags)
+ struct obj_cgroup **objcgp,
+ size_t size, gfp_t flags)
{
flags &= gfp_allowed_mask;
@@ -618,13 +665,14 @@ static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
if (memcg_kmem_enabled() &&
((flags & __GFP_ACCOUNT) || (s->flags & SLAB_ACCOUNT)))
- return memcg_kmem_get_cache(s);
+ return memcg_kmem_get_cache(s, objcgp);
return s;
}
-static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
- size_t size, void **p)
+static inline void slab_post_alloc_hook(struct kmem_cache *s,
+ struct obj_cgroup *objcg,
+ gfp_t flags, size_t size, void **p)
{
size_t i;
@@ -636,8 +684,8 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
s->flags, flags);
}
- if (memcg_kmem_enabled())
- memcg_kmem_put_cache(s);
+ if (!is_root_cache(s))
+ memcg_slab_post_alloc_hook(s, objcg, size, p);
}
#ifndef CONFIG_SLOB
diff --git a/mm/slub.c b/mm/slub.c
index 68c2c45dfac1..67ae40fcfcda 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2734,8 +2734,9 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
struct kmem_cache_cpu *c;
struct page *page;
unsigned long tid;
+ struct obj_cgroup *objcg = NULL;
- s = slab_pre_alloc_hook(s, gfpflags);
+ s = slab_pre_alloc_hook(s, &objcg, 1, gfpflags);
if (!s)
return NULL;
redo:
@@ -2811,7 +2812,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
if (unlikely(slab_want_init_on_alloc(gfpflags, s)) && object)
memset(object, 0, s->object_size);
- slab_post_alloc_hook(s, gfpflags, 1, &object);
+ slab_post_alloc_hook(s, objcg, gfpflags, 1, &object);
return object;
}
@@ -3016,6 +3017,8 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
void *tail_obj = tail ? : head;
struct kmem_cache_cpu *c;
unsigned long tid;
+
+ memcg_slab_free_hook(s, page, head);
redo:
/*
* Determine the currently cpus per cpu slab.
@@ -3195,9 +3198,10 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
{
struct kmem_cache_cpu *c;
int i;
+ struct obj_cgroup *objcg = NULL;
/* memcg and kmem_cache debug support */
- s = slab_pre_alloc_hook(s, flags);
+ s = slab_pre_alloc_hook(s, &objcg, size, flags);
if (unlikely(!s))
return false;
/*
@@ -3251,11 +3255,11 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
}
/* memcg and kmem_cache debug support */
- slab_post_alloc_hook(s, flags, size, p);
+ slab_post_alloc_hook(s, objcg, flags, size, p);
return i;
error:
local_irq_enable();
- slab_post_alloc_hook(s, flags, i, p);
+ slab_post_alloc_hook(s, objcg, flags, i, p);
__kmem_cache_free_bulk(s, i, p);
return 0;
}
--
2.25.3
Switch to per-object accounting of non-root slab objects.
Charging is performed using the obj_cgroup API in the pre_alloc hook.
The obj_cgroup is charged with the size of the object plus the size of
the per-object metadata, which is currently the size of an obj_cgroup
pointer.
If the amount of memory has been charged successfully, the actual
allocation code is executed. Otherwise, -ENOMEM is returned.
In the post_alloc hook, if the actual allocation succeeded, the
corresponding vmstats are bumped and the obj_cgroup pointer is saved.
Otherwise, the charge is canceled.
On the free path the obj_cgroup pointer is obtained and used to uncharge
the size of the object being released.
Memcg and lruvec counters now represent only the memory used by active
slab objects and do not include the free space. The free space is shared
and doesn't belong to any specific cgroup.
Global per-node slab vmstats are still modified from the
(un)charge_slab_page() functions. The idea is to keep all slab pages
accounted as slab pages at the system level.
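The charging logic can be summarized by the following sketch (a
simplified view of the hooks introduced in the diff below, with the
error-handling details of the real code omitted):

	/* each accounted object also pays for its obj_cgroup pointer */
	static inline size_t obj_full_size(struct kmem_cache *s)
	{
		return s->size + sizeof(struct obj_cgroup *);
	}

	/* pre_alloc: charge up front; a failed charge fails the allocation */
	if (obj_cgroup_charge(objcg, flags, objects * obj_full_size(s)))
		return NULL;

	/* post_alloc (success): bump the byte-sized memcg/lruvec counters */
	mod_objcg_state(objcg, page_pgdat(page), cache_vmstat_idx(s),
			obj_full_size(s));

	/* free path: uncharge and decrease the counters symmetrically */
	obj_cgroup_uncharge(objcg, obj_full_size(s));
	mod_objcg_state(objcg, page_pgdat(page), cache_vmstat_idx(s),
			-obj_full_size(s));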
Signed-off-by: Roman Gushchin <[email protected]>
---
mm/slab.h | 173 ++++++++++++++++++++++++------------------------------
1 file changed, 77 insertions(+), 96 deletions(-)
diff --git a/mm/slab.h b/mm/slab.h
index 525e09e05743..0ecf14bec6a2 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -352,72 +352,6 @@ static inline struct mem_cgroup *memcg_from_slab_page(struct page *page)
return NULL;
}
-/*
- * Charge the slab page belonging to the non-root kmem_cache.
- * Can be called for non-root kmem_caches only.
- */
-static __always_inline int memcg_charge_slab(struct page *page,
- gfp_t gfp, int order,
- struct kmem_cache *s)
-{
- unsigned int nr_pages = 1 << order;
- struct mem_cgroup *memcg;
- struct lruvec *lruvec;
- int ret;
-
- rcu_read_lock();
- memcg = READ_ONCE(s->memcg_params.memcg);
- while (memcg && !css_tryget_online(&memcg->css))
- memcg = parent_mem_cgroup(memcg);
- rcu_read_unlock();
-
- if (unlikely(!memcg || mem_cgroup_is_root(memcg))) {
- mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
- nr_pages << PAGE_SHIFT);
- percpu_ref_get_many(&s->memcg_params.refcnt, nr_pages);
- return 0;
- }
-
- ret = memcg_kmem_charge(memcg, gfp, nr_pages);
- if (ret)
- goto out;
-
- lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page));
- mod_lruvec_state(lruvec, cache_vmstat_idx(s), nr_pages << PAGE_SHIFT);
-
- percpu_ref_get_many(&s->memcg_params.refcnt, nr_pages);
-out:
- css_put(&memcg->css);
- return ret;
-}
-
-/*
- * Uncharge a slab page belonging to a non-root kmem_cache.
- * Can be called for non-root kmem_caches only.
- */
-static __always_inline void memcg_uncharge_slab(struct page *page, int order,
- struct kmem_cache *s)
-{
- unsigned int nr_pages = 1 << order;
- struct mem_cgroup *memcg;
- struct lruvec *lruvec;
-
- rcu_read_lock();
- memcg = READ_ONCE(s->memcg_params.memcg);
- if (likely(!mem_cgroup_is_root(memcg))) {
- lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page));
- mod_lruvec_state(lruvec, cache_vmstat_idx(s),
- -(nr_pages << PAGE_SHIFT));
- memcg_kmem_uncharge(memcg, nr_pages);
- } else {
- mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
- -(nr_pages << PAGE_SHIFT));
- }
- rcu_read_unlock();
-
- percpu_ref_put_many(&s->memcg_params.refcnt, nr_pages);
-}
-
static inline int memcg_alloc_page_obj_cgroups(struct page *page, gfp_t gfp,
unsigned int objects)
{
@@ -437,6 +371,47 @@ static inline void memcg_free_page_obj_cgroups(struct page *page)
page->obj_cgroups = NULL;
}
+static inline size_t obj_full_size(struct kmem_cache *s)
+{
+ /*
+ * For each accounted object there is an extra space which is used
+ * to store obj_cgroup membership. Charge it too.
+ */
+ return s->size + sizeof(struct obj_cgroup *);
+}
+
+static inline struct kmem_cache *memcg_slab_pre_alloc_hook(struct kmem_cache *s,
+ struct obj_cgroup **objcgp,
+ size_t objects, gfp_t flags)
+{
+ struct kmem_cache *cachep;
+
+ cachep = memcg_kmem_get_cache(s, objcgp);
+ if (is_root_cache(cachep))
+ return s;
+
+ if (obj_cgroup_charge(*objcgp, flags, objects * obj_full_size(s))) {
+ memcg_kmem_put_cache(cachep);
+ cachep = NULL;
+ }
+
+ return cachep;
+}
+
+static inline void mod_objcg_state(struct obj_cgroup *objcg,
+ struct pglist_data *pgdat,
+ int idx, int nr)
+{
+ struct mem_cgroup *memcg;
+ struct lruvec *lruvec;
+
+ rcu_read_lock();
+ memcg = obj_cgroup_memcg(objcg);
+ lruvec = mem_cgroup_lruvec(memcg, pgdat);
+ mod_memcg_lruvec_state(lruvec, idx, nr);
+ rcu_read_unlock();
+}
+
static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
struct obj_cgroup *objcg,
size_t size, void **p)
@@ -451,6 +426,10 @@ static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
off = obj_to_index(s, page, p[i]);
obj_cgroup_get(objcg);
page_obj_cgroups(page)[off] = objcg;
+ mod_objcg_state(objcg, page_pgdat(page),
+ cache_vmstat_idx(s), obj_full_size(s));
+ } else {
+ obj_cgroup_uncharge(objcg, obj_full_size(s));
}
}
obj_cgroup_put(objcg);
@@ -469,6 +448,11 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s, struct page *page,
off = obj_to_index(s, page, p);
objcg = page_obj_cgroups(page)[off];
page_obj_cgroups(page)[off] = NULL;
+
+ obj_cgroup_uncharge(objcg, obj_full_size(s));
+ mod_objcg_state(objcg, page_pgdat(page), cache_vmstat_idx(s),
+ -obj_full_size(s));
+
obj_cgroup_put(objcg);
}
@@ -510,17 +494,6 @@ static inline struct mem_cgroup *memcg_from_slab_page(struct page *page)
return NULL;
}
-static inline int memcg_charge_slab(struct page *page, gfp_t gfp, int order,
- struct kmem_cache *s)
-{
- return 0;
-}
-
-static inline void memcg_uncharge_slab(struct page *page, int order,
- struct kmem_cache *s)
-{
-}
-
static inline int memcg_alloc_page_obj_cgroups(struct page *page, gfp_t gfp,
unsigned int objects)
{
@@ -531,6 +504,13 @@ static inline void memcg_free_page_obj_cgroups(struct page *page)
{
}
+static inline struct kmem_cache *memcg_slab_pre_alloc_hook(struct kmem_cache *s,
+ struct obj_cgroup **objcgp,
+ size_t objects, gfp_t flags)
+{
+ return NULL;
+}
+
static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
struct obj_cgroup *objcg,
size_t size, void **p)
@@ -568,32 +548,33 @@ static __always_inline int charge_slab_page(struct page *page,
gfp_t gfp, int order,
struct kmem_cache *s)
{
- int ret;
-
- if (is_root_cache(s)) {
- mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
- PAGE_SIZE << order);
- return 0;
- }
+#ifdef CONFIG_MEMCG_KMEM
+ if (!is_root_cache(s)) {
+ int ret;
- ret = memcg_alloc_page_obj_cgroups(page, gfp, objs_per_slab(s));
- if (ret)
- return ret;
+ ret = memcg_alloc_page_obj_cgroups(page, gfp, objs_per_slab(s));
+ if (ret)
+ return ret;
- return memcg_charge_slab(page, gfp, order, s);
+ percpu_ref_get_many(&s->memcg_params.refcnt, 1 << order);
+ }
+#endif
+ mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
+ PAGE_SIZE << order);
+ return 0;
}
static __always_inline void uncharge_slab_page(struct page *page, int order,
struct kmem_cache *s)
{
- if (is_root_cache(s)) {
- mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
- -(PAGE_SIZE << order));
- return;
+#ifdef CONFIG_MEMCG_KMEM
+ if (!is_root_cache(s)) {
+ memcg_free_page_obj_cgroups(page);
+ percpu_ref_put_many(&s->memcg_params.refcnt, 1 << order);
}
-
- memcg_free_page_obj_cgroups(page);
- memcg_uncharge_slab(page, order, s);
+#endif
+ mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
+ -(PAGE_SIZE << order));
}
static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
@@ -665,7 +646,7 @@ static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
if (memcg_kmem_enabled() &&
((flags & __GFP_ACCOUNT) || (s->flags & SLAB_ACCOUNT)))
- return memcg_kmem_get_cache(s, objcgp);
+ return memcg_slab_pre_alloc_hook(s, objcgp, size, flags);
return s;
}
--
2.25.3
From: Johannes Weiner <[email protected]>
The reference counting of a memcg is currently coupled directly to how
many 4k pages are charged to it. This doesn't work well with Roman's
new slab controller, which maintains pools of objects and doesn't want
to keep an extra balance sheet for the pages backing those objects.
This unusual refcounting design (reference counts usually track
pointers to an object) is only for historical reasons: memcg used to
not take any css references and simply stalled offlining until all
charges had been reparented and the page counters had dropped to
zero. When we got rid of the reparenting requirement, the simple
mechanical translation was to take a reference for every charge.
More historical context can be found in commit e8ea14cc6ead ("mm:
memcontrol: take a css reference for each charged page"),
commit 64f219938941 ("mm: memcontrol: remove obsolete kmemcg pinning
tricks") and commit b2052564e66d ("mm: memcontrol: continue cache
reclaim from offlined groups").
The new slab controller exposes the limitations in this scheme, so
let's switch it to a more idiomatic reference counting model based on
actual kernel pointers to the memcg:
- The per-cpu stock holds a reference to the memcg it's caching
- User pages hold a reference for their page->mem_cgroup. Transparent
huge pages will no longer acquire tail references in advance; we'll
get them if needed during the split.
- Kernel pages hold a reference for their page->mem_cgroup
- mem_cgroup_try_charge(), if successful, will return one reference to
be consumed by page->mem_cgroup during commit, or put during cancel
- Pages allocated in the root cgroup will acquire and release css
references for simplicity. css_get() and css_put() optimize that.
- The current memcg_charge_slab() already hacked around the per-charge
references; this change gets rid of that as well.
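In code terms the patch replaces batched per-charge css references with
one reference per page->mem_cgroup pointer, roughly as follows (a
simplified sketch of the pattern visible in the diff below):

	/* old model: one css reference per charged page */
	css_get_many(&memcg->css, nr_pages);
	/* ... */
	css_put_many(&memcg->css, nr_pages);

	/* new model: one css reference per page->mem_cgroup pointer */
	css_get(&memcg->css);
	page->mem_cgroup = memcg;
	/* ... */
	page->mem_cgroup = NULL;
	css_put(&memcg->css);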
Roman: I've reformatted commit references in the commit log to make
checkpatch.pl happy.
Signed-off-by: Johannes Weiner <[email protected]>
Signed-off-by: Roman Gushchin <[email protected]>
Acked-by: Roman Gushchin <[email protected]>
---
mm/memcontrol.c | 45 ++++++++++++++++++++++++++-------------------
mm/slab.h | 2 --
2 files changed, 26 insertions(+), 21 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6cbc1f4829fc..83805b48817d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2111,13 +2111,17 @@ static void drain_stock(struct memcg_stock_pcp *stock)
{
struct mem_cgroup *old = stock->cached;
+ if (!old)
+ return;
+
if (stock->nr_pages) {
page_counter_uncharge(&old->memory, stock->nr_pages);
if (do_memsw_account())
page_counter_uncharge(&old->memsw, stock->nr_pages);
- css_put_many(&old->css, stock->nr_pages);
stock->nr_pages = 0;
}
+
+ css_put(&old->css);
stock->cached = NULL;
}
@@ -2153,6 +2157,7 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
stock = this_cpu_ptr(&memcg_stock);
if (stock->cached != memcg) { /* reset if necessary */
drain_stock(stock);
+ css_get(&memcg->css);
stock->cached = memcg;
}
stock->nr_pages += nr_pages;
@@ -2583,12 +2588,10 @@ static int try_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
page_counter_charge(&memcg->memory, nr_pages);
if (do_memsw_account())
page_counter_charge(&memcg->memsw, nr_pages);
- css_get_many(&memcg->css, nr_pages);
return 0;
done_restock:
- css_get_many(&memcg->css, batch);
if (batch > nr_pages)
refill_stock(memcg, batch - nr_pages);
@@ -2625,8 +2628,6 @@ static void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages)
page_counter_uncharge(&memcg->memory, nr_pages);
if (do_memsw_account())
page_counter_uncharge(&memcg->memsw, nr_pages);
-
- css_put_many(&memcg->css, nr_pages);
}
static void lock_page_lru(struct page *page, int *isolated)
@@ -2977,6 +2978,7 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
if (!ret) {
page->mem_cgroup = memcg;
__SetPageKmemcg(page);
+ return 0;
}
}
css_put(&memcg->css);
@@ -2999,12 +3001,11 @@ void __memcg_kmem_uncharge_page(struct page *page, int order)
VM_BUG_ON_PAGE(mem_cgroup_is_root(memcg), page);
__memcg_kmem_uncharge(memcg, nr_pages);
page->mem_cgroup = NULL;
+ css_put(&memcg->css);
/* slab pages do not have PageKmemcg flag set */
if (PageKmemcg(page))
__ClearPageKmemcg(page);
-
- css_put_many(&memcg->css, nr_pages);
}
#endif /* CONFIG_MEMCG_KMEM */
@@ -3016,15 +3017,18 @@ void __memcg_kmem_uncharge_page(struct page *page, int order)
*/
void mem_cgroup_split_huge_fixup(struct page *head)
{
+ struct mem_cgroup *memcg = head->mem_cgroup;
int i;
if (mem_cgroup_disabled())
return;
- for (i = 1; i < HPAGE_PMD_NR; i++)
- head[i].mem_cgroup = head->mem_cgroup;
+ for (i = 1; i < HPAGE_PMD_NR; i++) {
+ css_get(&memcg->css);
+ head[i].mem_cgroup = memcg;
+ }
- __mod_memcg_state(head->mem_cgroup, MEMCG_RSS_HUGE, -HPAGE_PMD_NR);
+ __mod_memcg_state(memcg, MEMCG_RSS_HUGE, -HPAGE_PMD_NR);
}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
@@ -5443,7 +5447,9 @@ static int mem_cgroup_move_account(struct page *page,
* uncharging, charging, migration, or LRU putback.
*/
- /* caller should have done css_get */
+ css_get(&to->css);
+ css_put(&from->css);
+
page->mem_cgroup = to;
spin_unlock_irqrestore(&from->move_lock, flags);
@@ -6537,8 +6543,10 @@ int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
memcg = get_mem_cgroup_from_mm(mm);
ret = try_charge(memcg, gfp_mask, nr_pages);
-
- css_put(&memcg->css);
+ if (ret) {
+ css_put(&memcg->css);
+ memcg = NULL;
+ }
out:
*memcgp = memcg;
return ret;
@@ -6634,6 +6642,8 @@ void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg,
return;
cancel_charge(memcg, nr_pages);
+
+ css_put(&memcg->css);
}
struct uncharge_gather {
@@ -6675,9 +6685,6 @@ static void uncharge_batch(const struct uncharge_gather *ug)
__this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, nr_pages);
memcg_check_events(ug->memcg, ug->dummy_page);
local_irq_restore(flags);
-
- if (!mem_cgroup_is_root(ug->memcg))
- css_put_many(&ug->memcg->css, nr_pages);
}
static void uncharge_page(struct page *page, struct uncharge_gather *ug)
@@ -6725,6 +6732,7 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
ug->dummy_page = page;
page->mem_cgroup = NULL;
+ css_put(&ug->memcg->css);
}
static void uncharge_list(struct list_head *page_list)
@@ -6831,8 +6839,8 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
page_counter_charge(&memcg->memory, nr_pages);
if (do_memsw_account())
page_counter_charge(&memcg->memsw, nr_pages);
- css_get_many(&memcg->css, nr_pages);
+ css_get(&memcg->css);
commit_charge(newpage, memcg, false);
local_irq_save(flags);
@@ -7071,8 +7079,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
-nr_entries);
memcg_check_events(memcg, page);
- if (!mem_cgroup_is_root(memcg))
- css_put_many(&memcg->css, nr_entries);
+ css_put(&memcg->css);
}
/**
diff --git a/mm/slab.h b/mm/slab.h
index 633eedb6bad1..8a574d9361c1 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -373,9 +373,7 @@ static __always_inline int memcg_charge_slab(struct page *page,
lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page));
mod_lruvec_state(lruvec, cache_vmstat_idx(s), nr_pages << PAGE_SHIFT);
- /* transer try_charge() page references to kmem_cache */
percpu_ref_get_many(&s->memcg_params.refcnt, nr_pages);
- css_put_many(&memcg->css, nr_pages);
out:
css_put(&memcg->css);
return ret;
--
2.25.3
In order to prepare for per-object slab memory accounting, convert
NR_SLAB_RECLAIMABLE and NR_SLAB_UNRECLAIMABLE vmstat items to bytes.
To make it obvious, rename them to NR_SLAB_RECLAIMABLE_B and
NR_SLAB_UNRECLAIMABLE_B (similar to NR_KERNEL_STACK_KB).
Internally global and per-node counters are stored in pages,
however memcg and lruvec counters are stored in bytes.
This scheme may look weird, but only for now. As soon as slab
pages are shared between multiple cgroups, global and
node counters will reflect the total number of slab pages.
However, memcg and lruvec counters will be used for per-memcg
slab memory tracking, which will account individual kernel
objects. Keeping global and node counters in pages helps
to avoid additional overhead.
The size of slab memory shouldn't exceed 4 GB on 32-bit machines,
so it will fit into the atomic_long_t we use for vmstats.
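In practice the convention looks like this (both call sites are taken
from the diff below: deltas for the *_B items are passed in bytes,
while the *_pages() readers return pages):

	/* write side: the delta is a multiple of PAGE_SIZE, passed in bytes */
	mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B,
			    PAGE_SIZE << order);

	/* read side: page-based readers return the number of slab pages */
	sreclaimable = node_page_state_pages(pgdat, NR_SLAB_RECLAIMABLE_B);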
Signed-off-by: Roman Gushchin <[email protected]>
---
drivers/base/node.c | 4 ++--
fs/proc/meminfo.c | 4 ++--
include/linux/mmzone.h | 16 +++++++++++++---
kernel/power/snapshot.c | 2 +-
mm/memcontrol.c | 11 ++++-------
mm/oom_kill.c | 2 +-
mm/page_alloc.c | 8 ++++----
mm/slab.h | 15 ++++++++-------
mm/slab_common.c | 4 ++--
mm/slob.c | 12 ++++++------
mm/slub.c | 8 ++++----
mm/vmscan.c | 3 ++-
mm/workingset.c | 6 ++++--
13 files changed, 53 insertions(+), 42 deletions(-)
diff --git a/drivers/base/node.c b/drivers/base/node.c
index 9d6afb7d2ccd..b3d13fa715ad 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -368,8 +368,8 @@ static ssize_t node_read_meminfo(struct device *dev,
unsigned long sreclaimable, sunreclaimable;
si_meminfo_node(&i, nid);
- sreclaimable = node_page_state(pgdat, NR_SLAB_RECLAIMABLE);
- sunreclaimable = node_page_state(pgdat, NR_SLAB_UNRECLAIMABLE);
+ sreclaimable = node_page_state_pages(pgdat, NR_SLAB_RECLAIMABLE_B);
+ sunreclaimable = node_page_state_pages(pgdat, NR_SLAB_UNRECLAIMABLE_B);
n = sprintf(buf,
"Node %d MemTotal: %8lu kB\n"
"Node %d MemFree: %8lu kB\n"
diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
index 8c1f1bb1a5ce..0811e4100084 100644
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -53,8 +53,8 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
pages[lru] = global_node_page_state(NR_LRU_BASE + lru);
available = si_mem_available();
- sreclaimable = global_node_page_state(NR_SLAB_RECLAIMABLE);
- sunreclaim = global_node_page_state(NR_SLAB_UNRECLAIMABLE);
+ sreclaimable = global_node_page_state_pages(NR_SLAB_RECLAIMABLE_B);
+ sunreclaim = global_node_page_state_pages(NR_SLAB_UNRECLAIMABLE_B);
show_val_kb(m, "MemTotal: ", i.totalram);
show_val_kb(m, "MemFree: ", i.freeram);
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 22fe65edf425..1c68c482df6f 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -171,8 +171,8 @@ enum node_stat_item {
NR_INACTIVE_FILE, /* " " " " " */
NR_ACTIVE_FILE, /* " " " " " */
NR_UNEVICTABLE, /* " " " " " */
- NR_SLAB_RECLAIMABLE,
- NR_SLAB_UNRECLAIMABLE,
+ NR_SLAB_RECLAIMABLE_B,
+ NR_SLAB_UNRECLAIMABLE_B,
NR_ISOLATED_ANON, /* Temporary isolated pages from anon lru */
NR_ISOLATED_FILE, /* Temporary isolated pages from file lru */
WORKINGSET_NODES,
@@ -206,7 +206,17 @@ enum node_stat_item {
static __always_inline bool vmstat_item_in_bytes(enum node_stat_item item)
{
- return false;
+ /*
+ * Global and per-node slab counters track slab pages.
+ * It's expected that changes are multiples of PAGE_SIZE.
+ * Internally values are stored in pages.
+ *
+ * Per-memcg and per-lruvec counters track memory, consumed
+ * by individual slab objects. These counters are actually
+ * byte-precise.
+ */
+ return (item == NR_SLAB_RECLAIMABLE_B ||
+ item == NR_SLAB_UNRECLAIMABLE_B);
}
/*
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index 659800157b17..22da1728b9cb 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -1664,7 +1664,7 @@ static unsigned long minimum_image_size(unsigned long saveable)
{
unsigned long size;
- size = global_node_page_state(NR_SLAB_RECLAIMABLE)
+ size = global_node_page_state_pages(NR_SLAB_RECLAIMABLE_B)
+ global_node_page_state(NR_ACTIVE_ANON)
+ global_node_page_state(NR_INACTIVE_ANON)
+ global_node_page_state(NR_ACTIVE_FILE)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 5f700fa8b78c..6cbc1f4829fc 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1409,9 +1409,8 @@ static char *memory_stat_format(struct mem_cgroup *memcg)
(u64)memcg_page_state(memcg, MEMCG_KERNEL_STACK_KB) *
1024);
seq_buf_printf(&s, "slab %llu\n",
- (u64)(memcg_page_state(memcg, NR_SLAB_RECLAIMABLE) +
- memcg_page_state(memcg, NR_SLAB_UNRECLAIMABLE)) *
- PAGE_SIZE);
+ (u64)(memcg_page_state(memcg, NR_SLAB_RECLAIMABLE_B) +
+ memcg_page_state(memcg, NR_SLAB_UNRECLAIMABLE_B)));
seq_buf_printf(&s, "sock %llu\n",
(u64)memcg_page_state(memcg, MEMCG_SOCK) *
PAGE_SIZE);
@@ -1445,11 +1444,9 @@ static char *memory_stat_format(struct mem_cgroup *memcg)
PAGE_SIZE);
seq_buf_printf(&s, "slab_reclaimable %llu\n",
- (u64)memcg_page_state(memcg, NR_SLAB_RECLAIMABLE) *
- PAGE_SIZE);
+ (u64)memcg_page_state(memcg, NR_SLAB_RECLAIMABLE_B));
seq_buf_printf(&s, "slab_unreclaimable %llu\n",
- (u64)memcg_page_state(memcg, NR_SLAB_UNRECLAIMABLE) *
- PAGE_SIZE);
+ (u64)memcg_page_state(memcg, NR_SLAB_UNRECLAIMABLE_B));
/* Accumulated memory events */
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 463b3d74a64a..eb0ccb8666b0 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -184,7 +184,7 @@ static bool is_dump_unreclaim_slabs(void)
global_node_page_state(NR_ISOLATED_FILE) +
global_node_page_state(NR_UNEVICTABLE);
- return (global_node_page_state(NR_SLAB_UNRECLAIMABLE) > nr_lru);
+ return (global_node_page_state_pages(NR_SLAB_UNRECLAIMABLE_B) > nr_lru);
}
/**
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b48336e20bdc..a4daae53b273 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5175,8 +5175,8 @@ long si_mem_available(void)
* items that are in use, and cannot be freed. Cap this estimate at the
* low watermark.
*/
- reclaimable = global_node_page_state(NR_SLAB_RECLAIMABLE) +
- global_node_page_state(NR_KERNEL_MISC_RECLAIMABLE);
+ reclaimable = global_node_page_state_pages(NR_SLAB_RECLAIMABLE_B) +
+ global_node_page_state(NR_KERNEL_MISC_RECLAIMABLE);
available += reclaimable - min(reclaimable / 2, wmark_low);
if (available < 0)
@@ -5320,8 +5320,8 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
global_node_page_state(NR_FILE_DIRTY),
global_node_page_state(NR_WRITEBACK),
global_node_page_state(NR_UNSTABLE_NFS),
- global_node_page_state(NR_SLAB_RECLAIMABLE),
- global_node_page_state(NR_SLAB_UNRECLAIMABLE),
+ global_node_page_state_pages(NR_SLAB_RECLAIMABLE_B),
+ global_node_page_state_pages(NR_SLAB_UNRECLAIMABLE_B),
global_node_page_state(NR_FILE_MAPPED),
global_node_page_state(NR_SHMEM),
global_zone_page_state(NR_PAGETABLE),
diff --git a/mm/slab.h b/mm/slab.h
index 815e4e9a94cd..633eedb6bad1 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -272,7 +272,7 @@ int __kmem_cache_alloc_bulk(struct kmem_cache *, gfp_t, size_t, void **);
static inline int cache_vmstat_idx(struct kmem_cache *s)
{
return (s->flags & SLAB_RECLAIM_ACCOUNT) ?
- NR_SLAB_RECLAIMABLE : NR_SLAB_UNRECLAIMABLE;
+ NR_SLAB_RECLAIMABLE_B : NR_SLAB_UNRECLAIMABLE_B;
}
#ifdef CONFIG_MEMCG_KMEM
@@ -361,7 +361,7 @@ static __always_inline int memcg_charge_slab(struct page *page,
if (unlikely(!memcg || mem_cgroup_is_root(memcg))) {
mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
- nr_pages);
+ nr_pages << PAGE_SHIFT);
percpu_ref_get_many(&s->memcg_params.refcnt, nr_pages);
return 0;
}
@@ -371,7 +371,7 @@ static __always_inline int memcg_charge_slab(struct page *page,
goto out;
lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page));
- mod_lruvec_state(lruvec, cache_vmstat_idx(s), nr_pages);
+ mod_lruvec_state(lruvec, cache_vmstat_idx(s), nr_pages << PAGE_SHIFT);
/* transer try_charge() page references to kmem_cache */
percpu_ref_get_many(&s->memcg_params.refcnt, nr_pages);
@@ -396,11 +396,12 @@ static __always_inline void memcg_uncharge_slab(struct page *page, int order,
memcg = READ_ONCE(s->memcg_params.memcg);
if (likely(!mem_cgroup_is_root(memcg))) {
lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page));
- mod_lruvec_state(lruvec, cache_vmstat_idx(s), -nr_pages);
+ mod_lruvec_state(lruvec, cache_vmstat_idx(s),
+ -(nr_pages << PAGE_SHIFT));
memcg_kmem_uncharge(memcg, nr_pages);
} else {
mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
- -nr_pages);
+ -(nr_pages << PAGE_SHIFT));
}
rcu_read_unlock();
@@ -484,7 +485,7 @@ static __always_inline int charge_slab_page(struct page *page,
{
if (is_root_cache(s)) {
mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
- 1 << order);
+ PAGE_SIZE << order);
return 0;
}
@@ -496,7 +497,7 @@ static __always_inline void uncharge_slab_page(struct page *page, int order,
{
if (is_root_cache(s)) {
mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
- -(1 << order));
+ -(PAGE_SIZE << order));
return;
}
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 9e72ba224175..b578ae29c743 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1325,8 +1325,8 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
page = alloc_pages(flags, order);
if (likely(page)) {
ret = page_address(page);
- mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE,
- 1 << order);
+ mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B,
+ PAGE_SIZE << order);
}
ret = kasan_kmalloc_large(ret, size, flags);
/* As ret might get tagged, call kmemleak hook after KASAN. */
diff --git a/mm/slob.c b/mm/slob.c
index ac2aecfbc7a8..7cc9805c8091 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -202,8 +202,8 @@ static void *slob_new_pages(gfp_t gfp, int order, int node)
if (!page)
return NULL;
- mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE,
- 1 << order);
+ mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B,
+ PAGE_SIZE << order);
return page_address(page);
}
@@ -214,8 +214,8 @@ static void slob_free_pages(void *b, int order)
if (current->reclaim_state)
current->reclaim_state->reclaimed_slab += 1 << order;
- mod_node_page_state(page_pgdat(sp), NR_SLAB_UNRECLAIMABLE,
- -(1 << order));
+ mod_node_page_state(page_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
+ -(PAGE_SIZE << order));
__free_pages(sp, order);
}
@@ -552,8 +552,8 @@ void kfree(const void *block)
slob_free(m, *m + align);
} else {
unsigned int order = compound_order(sp);
- mod_node_page_state(page_pgdat(sp), NR_SLAB_UNRECLAIMABLE,
- -(1 << order));
+ mod_node_page_state(page_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
+ -(PAGE_SIZE << order));
__free_pages(sp, order);
}
diff --git a/mm/slub.c b/mm/slub.c
index 914b7261e6b6..03071ae5ff07 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3898,8 +3898,8 @@ static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
page = alloc_pages_node(node, flags, order);
if (page) {
ptr = page_address(page);
- mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE,
- 1 << order);
+ mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B,
+ PAGE_SIZE << order);
}
return kmalloc_large_node_hook(ptr, size, flags);
@@ -4030,8 +4030,8 @@ void kfree(const void *x)
BUG_ON(!PageCompound(page));
kfree_hook(object);
- mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE,
- -(1 << order));
+ mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B,
+ -(PAGE_SIZE << order));
__free_pages(page, order);
return;
}
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4c3a760c0522..88aa6656aaca 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4226,7 +4226,8 @@ int node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned int order)
* unmapped file backed pages.
*/
if (node_pagecache_reclaimable(pgdat) <= pgdat->min_unmapped_pages &&
- node_page_state(pgdat, NR_SLAB_RECLAIMABLE) <= pgdat->min_slab_pages)
+ node_page_state_pages(pgdat, NR_SLAB_RECLAIMABLE_B) <=
+ pgdat->min_slab_pages)
return NODE_RECLAIM_FULL;
/*
diff --git a/mm/workingset.c b/mm/workingset.c
index 474186b76ced..9358c1ee5bb6 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -467,8 +467,10 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
for (pages = 0, i = 0; i < NR_LRU_LISTS; i++)
pages += lruvec_page_state_local(lruvec,
NR_LRU_BASE + i);
- pages += lruvec_page_state_local(lruvec, NR_SLAB_RECLAIMABLE);
- pages += lruvec_page_state_local(lruvec, NR_SLAB_UNRECLAIMABLE);
+ pages += lruvec_page_state_local(
+ lruvec, NR_SLAB_RECLAIMABLE_B) >> PAGE_SHIFT;
+ pages += lruvec_page_state_local(
+ lruvec, NR_SLAB_UNRECLAIMABLE_B) >> PAGE_SHIFT;
} else
#endif
pages = node_present_pages(sc->nid);
--
2.25.3
Allocate and release memory to store obj_cgroup pointers for each
non-root slab page. Reuse page->mem_cgroup pointer to store a pointer
to the allocated space.
To distinguish between obj_cgroups and memcg pointers in cases
when it's not obvious which one is used (as in page_cgroup_ino()),
let's always set the lowest bit in the obj_cgroup case.
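The tagging itself is a simple low-bit trick (a sketch of the helpers
added in the diff below):

	/* store: tag the vector pointer so it can't be mistaken for a memcg */
	page->obj_cgroups = (struct obj_cgroup **)((unsigned long)vec | 0x1UL);

	/* load: clear the tag bit to get the real vector address back */
	vec = (struct obj_cgroup **)((unsigned long)page->obj_cgroups & ~0x1UL);

	/* page_cgroup_ino(): a set low bit means "not a valid memcg pointer" */
	if ((unsigned long)page->mem_cgroup & 0x1UL)
		memcg = NULL;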
Signed-off-by: Roman Gushchin <[email protected]>
---
include/linux/mm_types.h | 5 ++++-
include/linux/slab_def.h | 5 +++++
include/linux/slub_def.h | 2 ++
mm/memcontrol.c | 17 +++++++++++---
mm/slab.c | 3 ++-
mm/slab.h | 48 ++++++++++++++++++++++++++++++++++++++++
mm/slub.c | 5 +++++
7 files changed, 80 insertions(+), 5 deletions(-)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 4aba6c0c2ba8..0ad7e700f26d 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -198,7 +198,10 @@ struct page {
atomic_t _refcount;
#ifdef CONFIG_MEMCG
- struct mem_cgroup *mem_cgroup;
+ union {
+ struct mem_cgroup *mem_cgroup;
+ struct obj_cgroup **obj_cgroups;
+ };
#endif
/*
diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index abc7de77b988..967a9a525eab 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -114,4 +114,9 @@ static inline unsigned int obj_to_index(const struct kmem_cache *cache,
return reciprocal_divide(offset, cache->reciprocal_buffer_size);
}
+static inline int objs_per_slab(const struct kmem_cache *cache)
+{
+ return cache->num;
+}
+
#endif /* _LINUX_SLAB_DEF_H */
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 200ea292f250..cbda7d55796a 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -191,4 +191,6 @@ static inline unsigned int obj_to_index(const struct kmem_cache *cache,
cache->reciprocal_size);
}
+extern int objs_per_slab(struct kmem_cache *cache);
+
#endif /* _LINUX_SLUB_DEF_H */
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 7f87a0eeafec..63826e460b3f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -549,10 +549,21 @@ ino_t page_cgroup_ino(struct page *page)
unsigned long ino = 0;
rcu_read_lock();
- if (PageSlab(page) && !PageTail(page))
+ if (PageSlab(page) && !PageTail(page)) {
memcg = memcg_from_slab_page(page);
- else
- memcg = READ_ONCE(page->mem_cgroup);
+ } else {
+ memcg = page->mem_cgroup;
+
+ /*
+ * The lowest bit set means that memcg isn't a valid
+ * memcg pointer, but a obj_cgroups pointer.
+ * In this case the page is shared and doesn't belong
+ * to any specific memory cgroup.
+ */
+ if ((unsigned long) memcg & 0x1UL)
+ memcg = NULL;
+ }
+
while (memcg && !(memcg->css.flags & CSS_ONLINE))
memcg = parent_mem_cgroup(memcg);
if (memcg)
diff --git a/mm/slab.c b/mm/slab.c
index 9350062ffc1a..f2d67984595b 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1370,7 +1370,8 @@ static struct page *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
return NULL;
}
- if (charge_slab_page(page, flags, cachep->gfporder, cachep)) {
+ if (charge_slab_page(page, flags, cachep->gfporder, cachep,
+ cachep->num)) {
__free_pages(page, cachep->gfporder);
return NULL;
}
diff --git a/mm/slab.h b/mm/slab.h
index 8a574d9361c1..44def57f050e 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -319,6 +319,18 @@ static inline struct kmem_cache *memcg_root_cache(struct kmem_cache *s)
return s->memcg_params.root_cache;
}
+static inline struct obj_cgroup **page_obj_cgroups(struct page *page)
+{
+ /*
+ * page->mem_cgroup and page->obj_cgroups are sharing the same
+ * space. To distinguish between them in case we don't know for sure
+ * that the page is a slab page (e.g. page_cgroup_ino()), let's
+ * always set the lowest bit of obj_cgroups.
+ */
+ return (struct obj_cgroup **)
+ ((unsigned long)page->obj_cgroups & ~0x1UL);
+}
+
/*
* Expects a pointer to a slab page. Please note, that PageSlab() check
* isn't sufficient, as it returns true also for tail compound slab pages,
@@ -406,6 +418,25 @@ static __always_inline void memcg_uncharge_slab(struct page *page, int order,
percpu_ref_put_many(&s->memcg_params.refcnt, nr_pages);
}
+static inline int memcg_alloc_page_obj_cgroups(struct page *page, gfp_t gfp,
+ unsigned int objects)
+{
+ void *vec;
+
+ vec = kcalloc(objects, sizeof(struct obj_cgroup *), gfp);
+ if (!vec)
+ return -ENOMEM;
+
+ page->obj_cgroups = (struct obj_cgroup **) ((unsigned long)vec | 0x1UL);
+ return 0;
+}
+
+static inline void memcg_free_page_obj_cgroups(struct page *page)
+{
+ kfree(page_obj_cgroups(page));
+ page->obj_cgroups = NULL;
+}
+
extern void slab_init_memcg_params(struct kmem_cache *);
extern void memcg_link_cache(struct kmem_cache *s, struct mem_cgroup *memcg);
@@ -455,6 +486,16 @@ static inline void memcg_uncharge_slab(struct page *page, int order,
{
}
+static inline int memcg_alloc_page_obj_cgroups(struct page *page, gfp_t gfp,
+ unsigned int objects)
+{
+ return 0;
+}
+
+static inline void memcg_free_page_obj_cgroups(struct page *page)
+{
+}
+
static inline void slab_init_memcg_params(struct kmem_cache *s)
{
}
@@ -481,12 +522,18 @@ static __always_inline int charge_slab_page(struct page *page,
gfp_t gfp, int order,
struct kmem_cache *s)
{
+ int ret;
+
if (is_root_cache(s)) {
mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
PAGE_SIZE << order);
return 0;
}
+ ret = memcg_alloc_page_obj_cgroups(page, gfp, objs_per_slab(s));
+ if (ret)
+ return ret;
+
return memcg_charge_slab(page, gfp, order, s);
}
@@ -499,6 +546,7 @@ static __always_inline void uncharge_slab_page(struct page *page, int order,
return;
}
+ memcg_free_page_obj_cgroups(page);
memcg_uncharge_slab(page, order, s);
}
diff --git a/mm/slub.c b/mm/slub.c
index 8d16babe1829..68c2c45dfac1 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5992,4 +5992,9 @@ ssize_t slabinfo_write(struct file *file, const char __user *buffer,
{
return -EIO;
}
+
+int objs_per_slab(struct kmem_cache *cache)
+{
+ return oo_objects(cache->oo);
+}
#endif /* CONFIG_SLUB_DEBUG */
--
2.25.3
On Wed, Apr 22, 2020 at 01:46:56PM -0700, Roman Gushchin wrote:
> Allocate and release memory to store obj_cgroup pointers for each
> non-root slab page. Reuse page->mem_cgroup pointer to store a pointer
> to the allocated space.
>
> To distinguish between obj_cgroups and memcg pointers in case
> when it's not obvious which one is used (as in page_cgroup_ino()),
> let's always set the lowest bit in the obj_cgroup case.
>
> Signed-off-by: Roman Gushchin <[email protected]>
> ---
> include/linux/mm_types.h | 5 ++++-
> include/linux/slab_def.h | 5 +++++
> include/linux/slub_def.h | 2 ++
> mm/memcontrol.c | 17 +++++++++++---
> mm/slab.c | 3 ++-
> mm/slab.h | 48 ++++++++++++++++++++++++++++++++++++++++
> mm/slub.c | 5 +++++
> 7 files changed, 80 insertions(+), 5 deletions(-)
>
...
> diff --git a/mm/slub.c b/mm/slub.c
> index 8d16babe1829..68c2c45dfac1 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -5992,4 +5992,9 @@ ssize_t slabinfo_write(struct file *file, const char __user *buffer,
> {
> return -EIO;
> }
> +
> +int objs_per_slab(struct kmem_cache *cache)
> +{
> + return oo_objects(cache->oo);
> +}
> #endif /* CONFIG_SLUB_DEBUG */
> --
> 2.25.3
>
Ooops, the build bot found that objs_per_slab() was accidentally guarded by
CONFIG_SLUB_DEBUG. An updated version below.
--
From 6b358e0157815535c3a73b4ce7b28f9c4c7804b3 Mon Sep 17 00:00:00 2001
From: Roman Gushchin <[email protected]>
Date: Wed, 10 Jul 2019 15:44:38 -0700
Subject: [PATCH v3.1 07/19] mm: memcg/slab: allocate obj_cgroups for non-root
slab pages
Allocate and release memory to store obj_cgroup pointers for each
non-root slab page. Reuse page->mem_cgroup pointer to store a pointer
to the allocated space.
To distinguish between obj_cgroups and memcg pointers in cases
when it's not obvious which one is used (as in page_cgroup_ino()),
let's always set the lowest bit in the obj_cgroup case.
Signed-off-by: Roman Gushchin <[email protected]>
---
include/linux/mm_types.h | 5 ++++-
include/linux/slab_def.h | 5 +++++
include/linux/slub_def.h | 2 ++
mm/memcontrol.c | 17 +++++++++++---
mm/slab.c | 3 ++-
mm/slab.h | 48 ++++++++++++++++++++++++++++++++++++++++
mm/slub.c | 5 +++++
7 files changed, 80 insertions(+), 5 deletions(-)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 4aba6c0c2ba8..0ad7e700f26d 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -198,7 +198,10 @@ struct page {
atomic_t _refcount;
#ifdef CONFIG_MEMCG
- struct mem_cgroup *mem_cgroup;
+ union {
+ struct mem_cgroup *mem_cgroup;
+ struct obj_cgroup **obj_cgroups;
+ };
#endif
/*
diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index abc7de77b988..967a9a525eab 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -114,4 +114,9 @@ static inline unsigned int obj_to_index(const struct kmem_cache *cache,
return reciprocal_divide(offset, cache->reciprocal_buffer_size);
}
+static inline int objs_per_slab(const struct kmem_cache *cache)
+{
+ return cache->num;
+}
+
#endif /* _LINUX_SLAB_DEF_H */
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 200ea292f250..cbda7d55796a 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -191,4 +191,6 @@ static inline unsigned int obj_to_index(const struct kmem_cache *cache,
cache->reciprocal_size);
}
+extern int objs_per_slab(struct kmem_cache *cache);
+
#endif /* _LINUX_SLUB_DEF_H */
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 7f87a0eeafec..63826e460b3f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -549,10 +549,21 @@ ino_t page_cgroup_ino(struct page *page)
unsigned long ino = 0;
rcu_read_lock();
- if (PageSlab(page) && !PageTail(page))
+ if (PageSlab(page) && !PageTail(page)) {
memcg = memcg_from_slab_page(page);
- else
- memcg = READ_ONCE(page->mem_cgroup);
+ } else {
+ memcg = page->mem_cgroup;
+
+ /*
+ * The lowest bit set means that memcg isn't a valid
+ * memcg pointer, but a obj_cgroups pointer.
+ * In this case the page is shared and doesn't belong
+ * to any specific memory cgroup.
+ */
+ if ((unsigned long) memcg & 0x1UL)
+ memcg = NULL;
+ }
+
while (memcg && !(memcg->css.flags & CSS_ONLINE))
memcg = parent_mem_cgroup(memcg);
if (memcg)
diff --git a/mm/slab.c b/mm/slab.c
index 9350062ffc1a..f2d67984595b 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1370,7 +1370,8 @@ static struct page *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
return NULL;
}
- if (charge_slab_page(page, flags, cachep->gfporder, cachep)) {
+ if (charge_slab_page(page, flags, cachep->gfporder, cachep,
+ cachep->num)) {
__free_pages(page, cachep->gfporder);
return NULL;
}
diff --git a/mm/slab.h b/mm/slab.h
index 8a574d9361c1..44def57f050e 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -319,6 +319,18 @@ static inline struct kmem_cache *memcg_root_cache(struct kmem_cache *s)
return s->memcg_params.root_cache;
}
+static inline struct obj_cgroup **page_obj_cgroups(struct page *page)
+{
+ /*
+ * page->mem_cgroup and page->obj_cgroups are sharing the same
+ * space. To distinguish between them in case we don't know for sure
+ * that the page is a slab page (e.g. page_cgroup_ino()), let's
+ * always set the lowest bit of obj_cgroups.
+ */
+ return (struct obj_cgroup **)
+ ((unsigned long)page->obj_cgroups & ~0x1UL);
+}
+
/*
* Expects a pointer to a slab page. Please note, that PageSlab() check
* isn't sufficient, as it returns true also for tail compound slab pages,
@@ -406,6 +418,25 @@ static __always_inline void memcg_uncharge_slab(struct page *page, int order,
percpu_ref_put_many(&s->memcg_params.refcnt, nr_pages);
}
+static inline int memcg_alloc_page_obj_cgroups(struct page *page, gfp_t gfp,
+ unsigned int objects)
+{
+ void *vec;
+
+ vec = kcalloc(objects, sizeof(struct obj_cgroup *), gfp);
+ if (!vec)
+ return -ENOMEM;
+
+ page->obj_cgroups = (struct obj_cgroup **) ((unsigned long)vec | 0x1UL);
+ return 0;
+}
+
+static inline void memcg_free_page_obj_cgroups(struct page *page)
+{
+ kfree(page_obj_cgroups(page));
+ page->obj_cgroups = NULL;
+}
+
extern void slab_init_memcg_params(struct kmem_cache *);
extern void memcg_link_cache(struct kmem_cache *s, struct mem_cgroup *memcg);
@@ -455,6 +486,16 @@ static inline void memcg_uncharge_slab(struct page *page, int order,
{
}
+static inline int memcg_alloc_page_obj_cgroups(struct page *page, gfp_t gfp,
+ unsigned int objects)
+{
+ return 0;
+}
+
+static inline void memcg_free_page_obj_cgroups(struct page *page)
+{
+}
+
static inline void slab_init_memcg_params(struct kmem_cache *s)
{
}
@@ -481,12 +522,18 @@ static __always_inline int charge_slab_page(struct page *page,
gfp_t gfp, int order,
struct kmem_cache *s)
{
+ int ret;
+
if (is_root_cache(s)) {
mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
PAGE_SIZE << order);
return 0;
}
+ ret = memcg_alloc_page_obj_cgroups(page, gfp, objs_per_slab(s));
+ if (ret)
+ return ret;
+
return memcg_charge_slab(page, gfp, order, s);
}
@@ -499,6 +546,7 @@ static __always_inline void uncharge_slab_page(struct page *page, int order,
return;
}
+ memcg_free_page_obj_cgroups(page);
memcg_uncharge_slab(page, order, s);
}
diff --git a/mm/slub.c b/mm/slub.c
index 8d16babe1829..a5fb0bb5c77a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -344,6 +344,11 @@ static inline unsigned int oo_objects(struct kmem_cache_order_objects x)
return x.x & OO_MASK;
}
+int objs_per_slab(struct kmem_cache *cache)
+{
+ return oo_objects(cache->oo);
+}
+
/*
* Per slab locking using the pagelock
*/
--
2.25.3
On Wed, Apr 22, 2020 at 01:46:51PM -0700, Roman Gushchin wrote:
> To implement per-object slab memory accounting, we need to
> convert slab vmstat counters to bytes. Actually, out of
> 4 levels of counters: global, per-node, per-memcg and per-lruvec
> only two last levels will require byte-sized counters.
> It's because global and per-node counters will be counting the
> number of slab pages, and per-memcg and per-lruvec will be
> counting the amount of memory taken by charged slab objects.
>
> Converting all vmstat counters to bytes or even all slab
> counters to bytes would introduce an additional overhead.
> So instead let's store global and per-node counters
> in pages, and memcg and lruvec counters in bytes.
>
> To make the API clean all access helpers (both on the read
> and write sides) are dealing with bytes.
>
> To avoid back-and-forth conversions a new flavor of helpers
> is introduced, which always returns values in pages:
> node_page_state_pages() and global_node_page_state_pages().
>
> Actually new helpers are just reading raw values. Old helpers are
> simple wrappers, which perform a conversion if the vmstat items are
> in bytes. Because at the moment no one actually need bytes,
> there are WARN_ON_ONCE() macroses inside to warn about inappropriate
> use cases.
>
> Thanks to Johannes Weiner for the idea of having the byte-sized API
> on top of the page-sized internal storage.
>
> Signed-off-by: Roman Gushchin <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
On Wed, Apr 22, 2020 at 01:46:52PM -0700, Roman Gushchin wrote:
> In order to prepare for per-object slab memory accounting, convert
> NR_SLAB_RECLAIMABLE and NR_SLAB_UNRECLAIMABLE vmstat items to bytes.
>
> To make it obvious, rename them to NR_SLAB_RECLAIMABLE_B and
> NR_SLAB_UNRECLAIMABLE_B (similar to NR_KERNEL_STACK_KB).
>
> Internally global and per-node counters are stored in pages,
> however memcg and lruvec counters are stored in bytes.
> This scheme may look weird, but only for now. As soon as slab
> pages will be shared between multiple cgroups, global and
> node counters will reflect the total number of slab pages.
> However memcg and lruvec counters will be used for per-memcg
> slab memory tracking, which will take separate kernel objects
> in the account. Keeping global and node counters in pages helps
> to avoid additional overhead.
>
> The size of slab memory shouldn't exceed 4Gb on 32-bit machines,
> so it will fit into atomic_long_t we use for vmstats.
>
> Signed-off-by: Roman Gushchin <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Thanks for splitting this out; it makes both this and the previous
patch easier to read.
On Wed, Apr 22, 2020 at 01:46:59PM -0700, Roman Gushchin wrote:
> Deprecate memory.kmem.slabinfo.
>
> An empty file will be presented if corresponding config options are
> enabled.
>
> The interface is implementation dependent, isn't present in cgroup v2,
> and is generally useful only for core mm debugging purposes. In other
> words, it doesn't provide any value for the absolute majority of users.
>
> A drgn-based replacement can be found in tools/cgroup/slabinfo.py .
> It does support cgroup v1 and v2, mimics memory.kmem.slabinfo output
> and also allows to get any additional information without a need
> to recompile the kernel.
>
> If a drgn-based solution is too slow for a task, a bpf-based tracing
> tool can be used, which can easily keep track of all slab allocations
> belonging to a memory cgroup.
>
> Signed-off-by: Roman Gushchin <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
On 4/22/20 10:46 PM, Roman Gushchin wrote:
> To implement per-object slab memory accounting, we need to
> convert slab vmstat counters to bytes. Actually, out of
> 4 levels of counters: global, per-node, per-memcg and per-lruvec
> only two last levels will require byte-sized counters.
> It's because global and per-node counters will be counting the
> number of slab pages, and per-memcg and per-lruvec will be
> counting the amount of memory taken by charged slab objects.
>
> Converting all vmstat counters to bytes or even all slab
> counters to bytes would introduce an additional overhead.
> So instead let's store global and per-node counters
> in pages, and memcg and lruvec counters in bytes.
>
> To make the API clean all access helpers (both on the read
> and write sides) are dealing with bytes.
>
> To avoid back-and-forth conversions a new flavor of helpers
> is introduced, which always returns values in pages:
> node_page_state_pages() and global_node_page_state_pages().
>
> Actually new helpers are just reading raw values. Old helpers are
> simple wrappers, which perform a conversion if the vmstat items are
> in bytes. Because at the moment no one actually need bytes,
> there are WARN_ON_ONCE() macroses inside to warn about inappropriate
> use cases.
>
> Thanks to Johannes Weiner for the idea of having the byte-sized API
> on top of the page-sized internal storage.
>
> Signed-off-by: Roman Gushchin <[email protected]>
Reviewed-By: Vlastimil Babka <[email protected]>
But it's somewhat complicated, so it would be great to document in the comments
of e.g. include/linux/vmstat.h that what the API returns as unsigned long can
be either bytes or pages, depending on vmstat_item_in_bytes().
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -204,6 +204,11 @@ enum node_stat_item {
> NR_VM_NODE_STAT_ITEMS
> };
>
> +static __always_inline bool vmstat_item_in_bytes(enum node_stat_item item)
This should also have a comment explaining whether it's talking about the API or
the storage, as it's not immediately obvious.
> +{
> + return false;
> +}
> +
> /*
> * We do arithmetic on the LRU lists in various places in the code,
> * so it is important to keep the active lists LRU_ACTIVE higher in
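To illustrate the suggested documentation, here is a rough sketch of how the read-side pair could fit together, assuming page-granular internal storage as the commit message describes; the helper bodies are illustrative only and not the code from the series:

/* raw, page-granular value straight from the global counter array */
static inline unsigned long
global_node_page_state_pages(enum node_stat_item item)
{
	return atomic_long_read(&vm_node_stat[item]);
}

/*
 * Byte-oriented API on top: for the *_B items callers get bytes,
 * for everything else the value is still a number of pages.
 */
static inline unsigned long
global_node_page_state(enum node_stat_item item)
{
	unsigned long x = global_node_page_state_pages(item);

	if (vmstat_item_in_bytes(item))
		x <<= PAGE_SHIFT;

	return x;
}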
On 5/20/20 1:31 PM, Vlastimil Babka wrote:
> On 4/22/20 10:46 PM, Roman Gushchin wrote:
>> To implement per-object slab memory accounting, we need to
>> convert slab vmstat counters to bytes. Actually, out of
>> 4 levels of counters: global, per-node, per-memcg and per-lruvec
>> only two last levels will require byte-sized counters.
>> It's because global and per-node counters will be counting the
>> number of slab pages, and per-memcg and per-lruvec will be
>> counting the amount of memory taken by charged slab objects.
>>
>> Converting all vmstat counters to bytes or even all slab
>> counters to bytes would introduce an additional overhead.
>> So instead let's store global and per-node counters
>> in pages, and memcg and lruvec counters in bytes.
>>
>> To make the API clean all access helpers (both on the read
>> and write sides) are dealing with bytes.
>>
>> To avoid back-and-forth conversions a new flavor of helpers
>> is introduced, which always returns values in pages:
>> node_page_state_pages() and global_node_page_state_pages().
>>
>> Actually new helpers are just reading raw values. Old helpers are
>> simple wrappers, which perform a conversion if the vmstat items are
>> in bytes. Because at the moment no one actually need bytes,
>> there are WARN_ON_ONCE() macroses inside to warn about inappropriate
>> use cases.
>>
>> Thanks to Johannes Weiner for the idea of having the byte-sized API
>> on top of the page-sized internal storage.
>>
>> Signed-off-by: Roman Gushchin <[email protected]>
>
> Reviewed-By: Vlastimil Babka <[email protected]>
>
> But it's somewhat complicated, so it would be great to document it in comments
> of e.g. include/linux/vmstat.h that what the API returns as unsigned long, can
> be either bytes or pages depending on vmstat_item_in_bytes().
I also forgot to add that if those WARN_ON_ONCEs are going to stay, they should
rather become VM_WARN_ON_ONCEs.
On 4/22/20 10:46 PM, Roman Gushchin wrote:
> In order to prepare for per-object slab memory accounting, convert
> NR_SLAB_RECLAIMABLE and NR_SLAB_UNRECLAIMABLE vmstat items to bytes.
>
> To make it obvious, rename them to NR_SLAB_RECLAIMABLE_B and
> NR_SLAB_UNRECLAIMABLE_B (similar to NR_KERNEL_STACK_KB).
>
> Internally global and per-node counters are stored in pages,
> however memcg and lruvec counters are stored in bytes.
> This scheme may look weird, but only for now. As soon as slab
> pages will be shared between multiple cgroups, global and
> node counters will reflect the total number of slab pages.
> However memcg and lruvec counters will be used for per-memcg
> slab memory tracking, which will take separate kernel objects
> in the account. Keeping global and node counters in pages helps
> to avoid additional overhead.
>
> The size of slab memory shouldn't exceed 4Gb on 32-bit machines,
> so it will fit into atomic_long_t we use for vmstats.
>
> Signed-off-by: Roman Gushchin <[email protected]>
> ---
> drivers/base/node.c | 4 ++--
> fs/proc/meminfo.c | 4 ++--
> include/linux/mmzone.h | 16 +++++++++++++---
> kernel/power/snapshot.c | 2 +-
> mm/memcontrol.c | 11 ++++-------
> mm/oom_kill.c | 2 +-
> mm/page_alloc.c | 8 ++++----
> mm/slab.h | 15 ++++++++-------
> mm/slab_common.c | 4 ++--
> mm/slob.c | 12 ++++++------
> mm/slub.c | 8 ++++----
> mm/vmscan.c | 3 ++-
> mm/workingset.c | 6 ++++--
> 13 files changed, 53 insertions(+), 42 deletions(-)
> @@ -206,7 +206,17 @@ enum node_stat_item {
>
> static __always_inline bool vmstat_item_in_bytes(enum node_stat_item item)
> {
> - return false;
> + /*
> + * Global and per-node slab counters track slab pages.
> + * It's expected that changes are multiples of PAGE_SIZE.
> + * Internally values are stored in pages.
> + *
> + * Per-memcg and per-lruvec counters track memory, consumed
> + * by individual slab objects. These counters are actually
> + * byte-precise.
> + */
> + return (item == NR_SLAB_RECLAIMABLE_B ||
> + item == NR_SLAB_UNRECLAIMABLE_B);
> }
Ok, so this is no longer a no-op, but __always_inline here and inline in
global_node_page_state() should hopefully mean that for all users of
global_node_page_state(<constant>) the compiler will eliminate the branch for
non-slab counters. But there are also functions such as si_mem_available() that
use a non-constant item. Maybe the compiler is smart enough anyway, but perhaps
it's better to use global_node_page_state_pages() in such callers?
However, __mod_node_page_state() and mod_node_page_state() will now always branch. I
wonder if the "API clean" goal is worth it...
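For reference, the write-side branch in question is roughly of this shape (a sketch based on the commit message, not the exact patch code): callers pass bytes for the *_B items and the helper folds the delta back into pages before touching the page-granular node counter.

void __mod_node_page_state(struct pglist_data *pgdat,
			   enum node_stat_item item, long delta)
{
	if (vmstat_item_in_bytes(item)) {
		/* node-level storage stays page-granular */
		VM_WARN_ON_ONCE(delta & (PAGE_SIZE - 1));
		delta >>= PAGE_SHIFT;
	}

	/* ... the existing per-cpu / per-node accounting of 'delta' pages ... */
}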
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1409,9 +1409,8 @@ static char *memory_stat_format(struct mem_cgroup *memcg)
> (u64)memcg_page_state(memcg, MEMCG_KERNEL_STACK_KB) *
> 1024);
> seq_buf_printf(&s, "slab %llu\n",
> - (u64)(memcg_page_state(memcg, NR_SLAB_RECLAIMABLE) +
> - memcg_page_state(memcg, NR_SLAB_UNRECLAIMABLE)) *
> - PAGE_SIZE);
> + (u64)(memcg_page_state(memcg, NR_SLAB_RECLAIMABLE_B) +
> + memcg_page_state(memcg, NR_SLAB_UNRECLAIMABLE_B)));
> seq_buf_printf(&s, "sock %llu\n",
> (u64)memcg_page_state(memcg, MEMCG_SOCK) *
> PAGE_SIZE);
> @@ -1445,11 +1444,9 @@ static char *memory_stat_format(struct mem_cgroup *memcg)
> PAGE_SIZE);
>
> seq_buf_printf(&s, "slab_reclaimable %llu\n",
> - (u64)memcg_page_state(memcg, NR_SLAB_RECLAIMABLE) *
> - PAGE_SIZE);
> + (u64)memcg_page_state(memcg, NR_SLAB_RECLAIMABLE_B));
> seq_buf_printf(&s, "slab_unreclaimable %llu\n",
> - (u64)memcg_page_state(memcg, NR_SLAB_UNRECLAIMABLE) *
> - PAGE_SIZE);
> + (u64)memcg_page_state(memcg, NR_SLAB_UNRECLAIMABLE_B));
So here we are now printing in bytes instead of pages, right? That's fine for
the OOM report, but in sysfs aren't we breaking existing users?
On Wed, May 20, 2020 at 02:25:22PM +0200, Vlastimil Babka wrote:
> On 4/22/20 10:46 PM, Roman Gushchin wrote:
> > In order to prepare for per-object slab memory accounting, convert
> > NR_SLAB_RECLAIMABLE and NR_SLAB_UNRECLAIMABLE vmstat items to bytes.
> >
> > To make it obvious, rename them to NR_SLAB_RECLAIMABLE_B and
> > NR_SLAB_UNRECLAIMABLE_B (similar to NR_KERNEL_STACK_KB).
> >
> > Internally global and per-node counters are stored in pages,
> > however memcg and lruvec counters are stored in bytes.
> > This scheme may look weird, but only for now. As soon as slab
> > pages will be shared between multiple cgroups, global and
> > node counters will reflect the total number of slab pages.
> > However memcg and lruvec counters will be used for per-memcg
> > slab memory tracking, which will take separate kernel objects
> > in the account. Keeping global and node counters in pages helps
> > to avoid additional overhead.
> >
> > The size of slab memory shouldn't exceed 4Gb on 32-bit machines,
> > so it will fit into atomic_long_t we use for vmstats.
> >
> > Signed-off-by: Roman Gushchin <[email protected]>
> > ---
> > drivers/base/node.c | 4 ++--
> > fs/proc/meminfo.c | 4 ++--
> > include/linux/mmzone.h | 16 +++++++++++++---
> > kernel/power/snapshot.c | 2 +-
> > mm/memcontrol.c | 11 ++++-------
> > mm/oom_kill.c | 2 +-
> > mm/page_alloc.c | 8 ++++----
> > mm/slab.h | 15 ++++++++-------
> > mm/slab_common.c | 4 ++--
> > mm/slob.c | 12 ++++++------
> > mm/slub.c | 8 ++++----
> > mm/vmscan.c | 3 ++-
> > mm/workingset.c | 6 ++++--
> > 13 files changed, 53 insertions(+), 42 deletions(-)
>
>
> > @@ -206,7 +206,17 @@ enum node_stat_item {
> >
> > static __always_inline bool vmstat_item_in_bytes(enum node_stat_item item)
> > {
> > - return false;
> > + /*
> > + * Global and per-node slab counters track slab pages.
> > + * It's expected that changes are multiples of PAGE_SIZE.
> > + * Internally values are stored in pages.
> > + *
> > + * Per-memcg and per-lruvec counters track memory, consumed
> > + * by individual slab objects. These counters are actually
> > + * byte-precise.
> > + */
> > + return (item == NR_SLAB_RECLAIMABLE_B ||
> > + item == NR_SLAB_UNRECLAIMABLE_B);
Hello, Vlastimil!
Thank you for looking into the patchset, I appreciate it.
In the next version I'll add some comments based on your suggestions in the
previous emails.
> > }
>
> Ok, so this is no longer a no-op, but __always_inline here and inline in
> global_node_page_state() should hopefully mean that for all users of
> global_node_page_state(<constant>) the compiler will eliminate the branch for
> non-slab counters. But there are also functions such as si_mem_available() that
> use non-constant item. Maybe compiler is smart enough anyway, but perhaps it's
> better to use global_node_page_state_pages() in such callers?
I'll take a look, thanks for the idea.
>
> However __mod_node_page_state() and mode_node_state() will now branch always. I
> wonder if the "API clean" goal is worth it...
You mean just adding a special write-side helper which will perform the conversion,
and putting VM_WARN_ON_ONCE() into the generic write-side helpers?
>
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -1409,9 +1409,8 @@ static char *memory_stat_format(struct mem_cgroup *memcg)
> > (u64)memcg_page_state(memcg, MEMCG_KERNEL_STACK_KB) *
> > 1024);
> > seq_buf_printf(&s, "slab %llu\n",
> > - (u64)(memcg_page_state(memcg, NR_SLAB_RECLAIMABLE) +
> > - memcg_page_state(memcg, NR_SLAB_UNRECLAIMABLE)) *
> > - PAGE_SIZE);
> > + (u64)(memcg_page_state(memcg, NR_SLAB_RECLAIMABLE_B) +
> > + memcg_page_state(memcg, NR_SLAB_UNRECLAIMABLE_B)));
> > seq_buf_printf(&s, "sock %llu\n",
> > (u64)memcg_page_state(memcg, MEMCG_SOCK) *
> > PAGE_SIZE);
> > @@ -1445,11 +1444,9 @@ static char *memory_stat_format(struct mem_cgroup *memcg)
> > PAGE_SIZE);
> >
> > seq_buf_printf(&s, "slab_reclaimable %llu\n",
> > - (u64)memcg_page_state(memcg, NR_SLAB_RECLAIMABLE) *
> > - PAGE_SIZE);
> > + (u64)memcg_page_state(memcg, NR_SLAB_RECLAIMABLE_B));
> > seq_buf_printf(&s, "slab_unreclaimable %llu\n",
> > - (u64)memcg_page_state(memcg, NR_SLAB_UNRECLAIMABLE) *
> > - PAGE_SIZE);
> > + (u64)memcg_page_state(memcg, NR_SLAB_UNRECLAIMABLE_B));
>
> So here we are now printing in bytes instead of pages, right? That's fine for
> OOM report, but in sysfs aren't we breaking existing users?
>
Hm, but it was in bytes previously; look at that x * PAGE_SIZE.
Or do you mean that now the value may not be rounded to PAGE_SIZE?
If so, I don't think it breaks any expectations.
Thanks!
On 5/20/20 9:26 PM, Roman Gushchin wrote:
> On Wed, May 20, 2020 at 02:25:22PM +0200, Vlastimil Babka wrote:
>>
>> However __mod_node_page_state() and mode_node_state() will now branch always. I
>> wonder if the "API clean" goal is worth it...
>
> You mean just adding a special write-side helper which will perform a conversion
> and put VM_WARN_ON_ONCE() into generic write-side helpers?
What I mean is that maybe the node/global helpers should assume page granularity,
and the lruvec/memcg helpers should do the check whether they need to convert from
bytes to pages when calling the node/global helpers. Then there would be no extra
branches in the node/global helpers. But maybe it's not worth saving those branches, dunno.
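A minimal sketch of that alternative, with the conversion pushed up into a memcg/lruvec-side helper; the helper name and the functions it calls are assumptions for illustration, not code from the series:

/*
 * Hypothetical: node/global helpers stay strictly page-granular, so
 * only the memcg/lruvec path pays for the bytes-to-pages conversion.
 */
static inline void mod_lruvec_slab_state_bytes(struct lruvec *lruvec,
					       enum node_stat_item idx,
					       int delta)
{
	/* memcg/lruvec counters are byte-precise */
	__mod_memcg_lruvec_state(lruvec, idx, delta);

	/* the node counter keeps counting pages, so convert here instead
	 * of branching inside __mod_node_page_state() itself */
	__mod_node_page_state(lruvec_pgdat(lruvec), idx,
			      delta / (int)PAGE_SIZE);
}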
>>
>> > --- a/mm/memcontrol.c
>> > +++ b/mm/memcontrol.c
>> > @@ -1409,9 +1409,8 @@ static char *memory_stat_format(struct mem_cgroup *memcg)
>> > (u64)memcg_page_state(memcg, MEMCG_KERNEL_STACK_KB) *
>> > 1024);
>> > seq_buf_printf(&s, "slab %llu\n",
>> > - (u64)(memcg_page_state(memcg, NR_SLAB_RECLAIMABLE) +
>> > - memcg_page_state(memcg, NR_SLAB_UNRECLAIMABLE)) *
>> > - PAGE_SIZE);
>> > + (u64)(memcg_page_state(memcg, NR_SLAB_RECLAIMABLE_B) +
>> > + memcg_page_state(memcg, NR_SLAB_UNRECLAIMABLE_B)));
>> > seq_buf_printf(&s, "sock %llu\n",
>> > (u64)memcg_page_state(memcg, MEMCG_SOCK) *
>> > PAGE_SIZE);
>> > @@ -1445,11 +1444,9 @@ static char *memory_stat_format(struct mem_cgroup *memcg)
>> > PAGE_SIZE);
>> >
>> > seq_buf_printf(&s, "slab_reclaimable %llu\n",
>> > - (u64)memcg_page_state(memcg, NR_SLAB_RECLAIMABLE) *
>> > - PAGE_SIZE);
>> > + (u64)memcg_page_state(memcg, NR_SLAB_RECLAIMABLE_B));
>> > seq_buf_printf(&s, "slab_unreclaimable %llu\n",
>> > - (u64)memcg_page_state(memcg, NR_SLAB_UNRECLAIMABLE) *
>> > - PAGE_SIZE);
>> > + (u64)memcg_page_state(memcg, NR_SLAB_UNRECLAIMABLE_B));
>>
>> So here we are now printing in bytes instead of pages, right? That's fine for
>> OOM report, but in sysfs aren't we breaking existing users?
>>
>
> Hm, but it was in bytes previously, look at that x * PAGE_SIZE.
Yeah, that's what I managed to overlook, sorry.
> Or do you mean that now it can be not rounded to PAGE_SIZE?
> If so, I don't think it breaks any expectations.
>
> Thanks!
>
On Thu, May 21, 2020 at 11:57:12AM +0200, Vlastimil Babka wrote:
> On 5/20/20 9:26 PM, Roman Gushchin wrote:
> > On Wed, May 20, 2020 at 02:25:22PM +0200, Vlastimil Babka wrote:
> >>
> >> However __mod_node_page_state() and mode_node_state() will now branch always. I
> >> wonder if the "API clean" goal is worth it...
> >
> > You mean just adding a special write-side helper which will perform a conversion
> > and put VM_WARN_ON_ONCE() into generic write-side helpers?
>
> What I mean is that maybe node/global helpers should assume page granularity,
> and lruvec/memcg helpers do the check is they should convert from bytes to pages
> when calling node/global helpers. Then there would be no extra branches in
> node/global helpers. But maybe it's not worth saving those branches, dunno.
The problem is with helpers like mod_lruvec_state(), which modify both the
global/node-level and the memcg-level counters. Also, memcg and global counters
share the same item indexes, so it would be confusing to have NR_SLAB_RECLAIMABLE
in bytes on one level and in pages on the other.
So, I don't know, maybe there is a better way of organizing these counters in a less
complicated manner, but I have no ideas at the moment. If you do, I'd appreciate it.
Thanks!
On 4/22/20 10:46 PM, Roman Gushchin wrote:
> Allocate and release memory to store obj_cgroup pointers for each
> non-root slab page. Reuse page->mem_cgroup pointer to store a pointer
> to the allocated space.
>
> To distinguish between obj_cgroups and memcg pointers in case
> when it's not obvious which one is used (as in page_cgroup_ino()),
> let's always set the lowest bit in the obj_cgroup case.
>
> Signed-off-by: Roman Gushchin <[email protected]>
Reviewed-by: Vlastimil Babka <[email protected]>
But I have a suggestion:
...
> --- a/include/linux/slub_def.h
> +++ b/include/linux/slub_def.h
> @@ -191,4 +191,6 @@ static inline unsigned int obj_to_index(const struct kmem_cache *cache,
> cache->reciprocal_size);
> }
>
> +extern int objs_per_slab(struct kmem_cache *cache);
> +
> #endif /* _LINUX_SLUB_DEF_H */
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 7f87a0eeafec..63826e460b3f 100644
...
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -5992,4 +5992,9 @@ ssize_t slabinfo_write(struct file *file, const char __user *buffer,
> {
> return -EIO;
> }
> +
> +int objs_per_slab(struct kmem_cache *cache)
> +{
> + return oo_objects(cache->oo);
> +}
> #endif /* CONFIG_SLUB_DEBUG */
>
It's somewhat unfortunate to make a function call just for this. Although perhaps
the compiler can be smart enough, as charge_slab_page() (which calls objs_per_slab())
is inline and called from alloc_slab_page(), which is also in mm/slub.c.
But it might also be a bit wasteful in case SLUB doesn't manage to allocate its
desired order and falls back to a smaller one. The actual number of objects is then
in page->objects.
So ideally this should use something like objs_per_slab_page(cache, page), where
SLAB supplies cache->num and SLUB supplies page->objects, with both implementations
inline and ignoring the other parameter?
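A sketch of what the two inline flavors could look like, one per allocator (the exact placement and signatures are assumptions):

/* SLAB (e.g. in slab_def.h): the object count is fixed per cache */
static inline int objs_per_slab_page(const struct kmem_cache *cache,
				     const struct page *page)
{
	return cache->num;
}

/* SLUB (e.g. in slub_def.h): a smaller order than desired may have been
 * allocated, so trust the per-page object count */
static inline int objs_per_slab_page(const struct kmem_cache *cache,
				     const struct page *page)
{
	return page->objects;
}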
On Fri, May 22, 2020 at 08:27:15PM +0200, Vlastimil Babka wrote:
> On 4/22/20 10:46 PM, Roman Gushchin wrote:
> > Allocate and release memory to store obj_cgroup pointers for each
> > non-root slab page. Reuse page->mem_cgroup pointer to store a pointer
> > to the allocated space.
> >
> > To distinguish between obj_cgroups and memcg pointers in case
> > when it's not obvious which one is used (as in page_cgroup_ino()),
> > let's always set the lowest bit in the obj_cgroup case.
> >
> > Signed-off-by: Roman Gushchin <[email protected]>
>
> Reviewed-by: Vlastimil Babka <[email protected]>
Thank you!
>
> But I have a suggestion:
>
> ...
>
> > --- a/include/linux/slub_def.h
> > +++ b/include/linux/slub_def.h
> > @@ -191,4 +191,6 @@ static inline unsigned int obj_to_index(const struct kmem_cache *cache,
> > cache->reciprocal_size);
> > }
> >
> > +extern int objs_per_slab(struct kmem_cache *cache);
> > +
> > #endif /* _LINUX_SLUB_DEF_H */
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 7f87a0eeafec..63826e460b3f 100644
>
> ...
>
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -5992,4 +5992,9 @@ ssize_t slabinfo_write(struct file *file, const char __user *buffer,
> > {
> > return -EIO;
> > }
> > +
> > +int objs_per_slab(struct kmem_cache *cache)
> > +{
> > + return oo_objects(cache->oo);
> > +}
> > #endif /* CONFIG_SLUB_DEBUG */
> >
>
> It's somewhat unfortunate to function call just for this. Although perhaps
> compiler can be smart enough as charge_slab_page() (that callse objs_per_slab())
> is inline and called from alloc_slab_page() which is also in mm/slub.c.
>
> But it might be also a bit wasteful in case SLUB doesn't manage to allocate its
> desired order, but smaller. The actual number of objects is then in page->objects.
>
> So ideally this should use something like objs_per_slab_page(cache, page) where
> SLAB supplies cache->num and SLUB page->objects, both implementations inline,
> and ignoring the other parameter?
Yeah, good point, makes total sense to me. I'll implement it in the next version
of the patchset.
Thank you!
On 4/22/20 10:46 PM, Roman Gushchin wrote:
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -1370,7 +1370,8 @@ static struct page *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
> return NULL;
> }
>
> - if (charge_slab_page(page, flags, cachep->gfporder, cachep)) {
> + if (charge_slab_page(page, flags, cachep->gfporder, cachep,
> + cachep->num)) {
> __free_pages(page, cachep->gfporder);
> return NULL;
> }
Hmm, I noticed this only when looking at a later patch: this hunk adds a parameter
that the function doesn't take, so it doesn't compile.
On 4/22/20 10:46 PM, Roman Gushchin wrote:
> Store the obj_cgroup pointer in the corresponding place of
> page->obj_cgroups for each allocated non-root slab object.
> Make sure that each allocated object holds a reference to obj_cgroup.
>
> Objcg pointer is obtained from the memcg->objcg dereferencing
> in memcg_kmem_get_cache() and passed from pre_alloc_hook to
> post_alloc_hook. Then in case of successful allocation(s) it's
> getting stored in the page->obj_cgroups vector.
>
> The objcg obtaining part look a bit bulky now, but it will be simplified
> by next commits in the series.
>
> Signed-off-by: Roman Gushchin <[email protected]>
Reviewed-by: Vlastimil Babka <[email protected]>
Nit below:
> diff --git a/mm/slab.h b/mm/slab.h
> index 44def57f050e..525e09e05743 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
...
> @@ -636,8 +684,8 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
> s->flags, flags);
> }
>
> - if (memcg_kmem_enabled())
> - memcg_kmem_put_cache(s);
> + if (!is_root_cache(s))
> + memcg_slab_post_alloc_hook(s, objcg, size, p);
> }
>
> #ifndef CONFIG_SLOB
Also keep the memcg_kmem_enabled() static key check, like elsewhere?
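Roughly, the hook call above would then be gated like this (illustrative only):

	/* skip the hook entirely while kmem accounting is disabled */
	if (memcg_kmem_enabled() && !is_root_cache(s))
		memcg_slab_post_alloc_hook(s, objcg, size, p);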
On 4/22/20 10:46 PM, Roman Gushchin wrote:
> Switch to per-object accounting of non-root slab objects.
>
> Charging is performed using obj_cgroup API in the pre_alloc hook.
> Obj_cgroup is charged with the size of the object and the size
> of metadata: as now it's the size of an obj_cgroup pointer.
> If the amount of memory has been charged successfully, the actual
> allocation code is executed. Otherwise, -ENOMEM is returned.
>
> In the post_alloc hook if the actual allocation succeeded,
> corresponding vmstats are bumped and the obj_cgroup pointer is saved.
> Otherwise, the charge is canceled.
>
> On the free path obj_cgroup pointer is obtained and used to uncharge
> the size of the releasing object.
>
> Memcg and lruvec counters are now representing only memory used
> by active slab objects and do not include the free space. The free
> space is shared and doesn't belong to any specific cgroup.
>
> Global per-node slab vmstats are still modified from (un)charge_slab_page()
> functions. The idea is to keep all slab pages accounted as slab pages
> on system level.
>
> Signed-off-by: Roman Gushchin <[email protected]>
Reviewed-by: Vlastimil Babka <[email protected]>
Suggestion below:
> @@ -568,32 +548,33 @@ static __always_inline int charge_slab_page(struct page *page,
> gfp_t gfp, int order,
> struct kmem_cache *s)
> {
> - int ret;
> -
> - if (is_root_cache(s)) {
> - mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
> - PAGE_SIZE << order);
> - return 0;
> - }
> +#ifdef CONFIG_MEMCG_KMEM
> + if (!is_root_cache(s)) {
This could also benefit from a memcg_kmem_enabled() static key test AFAICS. Maybe
even have a wrapper for both tests together?
> + int ret;
>
> - ret = memcg_alloc_page_obj_cgroups(page, gfp, objs_per_slab(s));
> - if (ret)
> - return ret;
> + ret = memcg_alloc_page_obj_cgroups(page, gfp, objs_per_slab(s));
You created an empty memcg_alloc_page_obj_cgroups() variant for !CONFIG_MEMCG_KMEM,
but now the only caller is under CONFIG_MEMCG_KMEM.
> + if (ret)
> + return ret;
>
> - return memcg_charge_slab(page, gfp, order, s);
> + percpu_ref_get_many(&s->memcg_params.refcnt, 1 << order);
Perhaps moving this refcount into memcg_alloc_page_obj_cgroups() (maybe the name
should be different then) would allow you not to add #ifdef CONFIG_MEMCG_KMEM to
this function.
Maybe this is all moot after patch 12/19, I'll find out :)
> + }
> +#endif
> + mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
> + PAGE_SIZE << order);
> + return 0;
> }
>
> static __always_inline void uncharge_slab_page(struct page *page, int order,
> struct kmem_cache *s)
> {
> - if (is_root_cache(s)) {
> - mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
> - -(PAGE_SIZE << order));
> - return;
> +#ifdef CONFIG_MEMCG_KMEM
> + if (!is_root_cache(s)) {
Everything from above also applies here.
> + memcg_free_page_obj_cgroups(page);
> + percpu_ref_put_many(&s->memcg_params.refcnt, 1 << order);
> }
> -
> - memcg_free_page_obj_cgroups(page);
> - memcg_uncharge_slab(page, order, s);
> +#endif
> + mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
> + -(PAGE_SIZE << order));
> }
>
> static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
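One possible shape for the "wrapper for both tests" suggested earlier in this review, together with how charge_slab_page() could use it; the wrapper name is an assumption, and the percpu refcount manipulation from the quoted hunk is left out for brevity:

/* hypothetical helper combining the static key and the root-cache test */
static inline bool memcg_slab_accounted(struct kmem_cache *s)
{
	return memcg_kmem_enabled() && !is_root_cache(s);
}

static __always_inline int charge_slab_page(struct page *page, gfp_t gfp,
					    int order, struct kmem_cache *s)
{
	if (memcg_slab_accounted(s)) {
		int ret = memcg_alloc_page_obj_cgroups(page, gfp,
						       objs_per_slab(s));
		if (ret)
			return ret;
	}

	mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
			    PAGE_SIZE << order);
	return 0;
}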
On 4/22/20 10:47 PM, Roman Gushchin wrote:
> This is fairly big but mostly red patch, which makes all accounted
> slab allocations use a single set of kmem_caches instead of
> creating a separate set for each memory cgroup.
>
> Because the number of non-root kmem_caches is now capped by the number
> of root kmem_caches, there is no need to shrink or destroy them
> prematurely. They can be perfectly destroyed together with their
> root counterparts. This allows to dramatically simplify the
> management of non-root kmem_caches and delete a ton of code.
>
> This patch performs the following changes:
> 1) introduces memcg_params.memcg_cache pointer to represent the
> kmem_cache which will be used for all non-root allocations
> 2) reuses the existing memcg kmem_cache creation mechanism
> to create memcg kmem_cache on the first allocation attempt
> 3) memcg kmem_caches are named <kmemcache_name>-memcg,
> e.g. dentry-memcg
> 4) simplifies memcg_kmem_get_cache() to just return memcg kmem_cache
> or schedule it's creation and return the root cache
> 5) removes almost all non-root kmem_cache management code
> (separate refcounter, reparenting, shrinking, etc)
> 6) makes slab debugfs to display root_mem_cgroup css id and never
> show :dead and :deact flags in the memcg_slabinfo attribute.
>
> Following patches in the series will simplify the kmem_cache creation.
>
> Signed-off-by: Roman Gushchin <[email protected]>
> ---
> include/linux/memcontrol.h | 5 +-
> include/linux/slab.h | 5 +-
> mm/memcontrol.c | 163 +++-----------
> mm/slab.c | 16 +-
> mm/slab.h | 145 ++++---------
> mm/slab_common.c | 426 ++++---------------------------------
> mm/slub.c | 38 +---
> 7 files changed, 128 insertions(+), 670 deletions(-)
Nice stats.
Reviewed-by: Vlastimil Babka <[email protected]>
> @@ -548,17 +502,14 @@ static __always_inline int charge_slab_page(struct page *page,
> gfp_t gfp, int order,
> struct kmem_cache *s)
> {
> -#ifdef CONFIG_MEMCG_KMEM
Ah, indeed. Still, would there be less churn if the ref manipulation was done in
memcg_alloc/free_page_obj()?
> if (!is_root_cache(s)) {
> int ret;
>
> ret = memcg_alloc_page_obj_cgroups(page, gfp, objs_per_slab(s));
> if (ret)
> return ret;
> -
> - percpu_ref_get_many(&s->memcg_params.refcnt, 1 << order);
> }
> -#endif
> +
> mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
> PAGE_SIZE << order);
> return 0;
On 4/22/20 10:47 PM, Roman Gushchin wrote:
> Currently there are two lists of kmem_caches:
> 1) slab_caches, which contains all kmem_caches,
> 2) slab_root_caches, which contains only root kmem_caches.
>
> And there is some preprocessor magic to have a single list
> if CONFIG_MEMCG_KMEM isn't enabled.
>
> It was required earlier because the number of non-root kmem_caches
> was proportional to the number of memory cgroups and could reach
> really big values. Now, when it cannot exceed the number of root
> kmem_caches, there is really no reason to maintain two lists.
>
> We never iterate over the slab_root_caches list on any hot paths,
> so it's perfectly fine to iterate over slab_caches and filter out
> non-root kmem_caches.
>
> It allows to remove a lot of config-dependent code and two pointers
> from the kmem_cache structure.
>
> Signed-off-by: Roman Gushchin <[email protected]>
Reviewed-by: Vlastimil Babka <[email protected]>
> @@ -1148,11 +1126,12 @@ static void cache_show(struct kmem_cache *s, struct seq_file *m)
>
> static int slab_show(struct seq_file *m, void *p)
> {
> - struct kmem_cache *s = list_entry(p, struct kmem_cache, root_caches_node);
> + struct kmem_cache *s = list_entry(p, struct kmem_cache, list);
>
> - if (p == slab_root_caches.next)
> + if (p == slab_caches.next)
> print_slabinfo_header(m);
> - cache_show(s, m);
> + if (is_root_cache(s))
> + cache_show(s, m);
If there wasn't patch 17/19 we could just remove this condition and have
/proc/slabinfo contain the -memcg variants?
On Fri, May 22, 2020 at 08:27:15PM +0200, Vlastimil Babka wrote:
> On 4/22/20 10:46 PM, Roman Gushchin wrote:
> > Allocate and release memory to store obj_cgroup pointers for each
> > non-root slab page. Reuse page->mem_cgroup pointer to store a pointer
> > to the allocated space.
> >
> > To distinguish between obj_cgroups and memcg pointers in case
> > when it's not obvious which one is used (as in page_cgroup_ino()),
> > let's always set the lowest bit in the obj_cgroup case.
> >
> > Signed-off-by: Roman Gushchin <[email protected]>
>
> Reviewed-by: Vlastimil Babka <[email protected]>
>
> But I have a suggestion:
>
> ...
>
> > --- a/include/linux/slub_def.h
> > +++ b/include/linux/slub_def.h
> > @@ -191,4 +191,6 @@ static inline unsigned int obj_to_index(const struct kmem_cache *cache,
> > cache->reciprocal_size);
> > }
> >
> > +extern int objs_per_slab(struct kmem_cache *cache);
> > +
> > #endif /* _LINUX_SLUB_DEF_H */
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 7f87a0eeafec..63826e460b3f 100644
>
> ...
>
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -5992,4 +5992,9 @@ ssize_t slabinfo_write(struct file *file, const char __user *buffer,
> > {
> > return -EIO;
> > }
> > +
> > +int objs_per_slab(struct kmem_cache *cache)
> > +{
> > + return oo_objects(cache->oo);
> > +}
> > #endif /* CONFIG_SLUB_DEBUG */
> >
>
> It's somewhat unfortunate to function call just for this. Although perhaps
> compiler can be smart enough as charge_slab_page() (that callse objs_per_slab())
> is inline and called from alloc_slab_page() which is also in mm/slub.c.
>
> But it might be also a bit wasteful in case SLUB doesn't manage to allocate its
> desired order, but smaller. The actual number of objects is then in page->objects.
>
> So ideally this should use something like objs_per_slab_page(cache, page) where
> SLAB supplies cache->num and SLUB page->objects, both implementations inline,
> and ignoring the other parameter?
Good idea! I'll do this in the next version. Thanks!
On Mon, May 25, 2020 at 05:07:22PM +0200, Vlastimil Babka wrote:
> On 4/22/20 10:46 PM, Roman Gushchin wrote:
> > Store the obj_cgroup pointer in the corresponding place of
> > page->obj_cgroups for each allocated non-root slab object.
> > Make sure that each allocated object holds a reference to obj_cgroup.
> >
> > Objcg pointer is obtained from the memcg->objcg dereferencing
> > in memcg_kmem_get_cache() and passed from pre_alloc_hook to
> > post_alloc_hook. Then in case of successful allocation(s) it's
> > getting stored in the page->obj_cgroups vector.
> >
> > The objcg obtaining part look a bit bulky now, but it will be simplified
> > by next commits in the series.
> >
> > Signed-off-by: Roman Gushchin <[email protected]>
>
> Reviewed-by: Vlastimil Babka <[email protected]>
>
> Nit below:
>
> > diff --git a/mm/slab.h b/mm/slab.h
> > index 44def57f050e..525e09e05743 100644
> > --- a/mm/slab.h
> > +++ b/mm/slab.h
> ...
> > @@ -636,8 +684,8 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
> > s->flags, flags);
> > }
> >
> > - if (memcg_kmem_enabled())
> > - memcg_kmem_put_cache(s);
> > + if (!is_root_cache(s))
> > + memcg_slab_post_alloc_hook(s, objcg, size, p);
> > }
> >
> > #ifndef CONFIG_SLOB
>
> Keep also the memcg_kmem_enabled() static key check, like elsewhere?
>
Ok, I will add it; it can speed things up a little bit. My only concern is that
the code is not ready for memcg_kmem_enabled() turning false again after being true.
But that's not a real concern, right?
Actually, we can simplify the memcg_kmem_enabled() mechanics and enable it
only once, as soon as the first memcg is fully initialized. I don't think there
is any value in tracking the actual number of active memcgs.
Thanks!
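A sketch of that simplification, assuming the existing memcg_kmem_enabled_key static key is flipped exactly once and never turned off again (illustrative, not the posted code):

/*
 * Hypothetical one-shot enable: called when the first non-root memory
 * cgroup is fully initialized; the key then stays on until reboot.
 */
static void memcg_enable_kmem_accounting(void)
{
	if (!static_key_enabled(&memcg_kmem_enabled_key))
		static_branch_enable(&memcg_kmem_enabled_key);
}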
On Mon, May 25, 2020 at 06:10:55PM +0200, Vlastimil Babka wrote:
> On 4/22/20 10:46 PM, Roman Gushchin wrote:
> > Switch to per-object accounting of non-root slab objects.
> >
> > Charging is performed using obj_cgroup API in the pre_alloc hook.
> > Obj_cgroup is charged with the size of the object and the size
> > of metadata: as now it's the size of an obj_cgroup pointer.
> > If the amount of memory has been charged successfully, the actual
> > allocation code is executed. Otherwise, -ENOMEM is returned.
> >
> > In the post_alloc hook if the actual allocation succeeded,
> > corresponding vmstats are bumped and the obj_cgroup pointer is saved.
> > Otherwise, the charge is canceled.
> >
> > On the free path obj_cgroup pointer is obtained and used to uncharge
> > the size of the releasing object.
> >
> > Memcg and lruvec counters are now representing only memory used
> > by active slab objects and do not include the free space. The free
> > space is shared and doesn't belong to any specific cgroup.
> >
> > Global per-node slab vmstats are still modified from (un)charge_slab_page()
> > functions. The idea is to keep all slab pages accounted as slab pages
> > on system level.
> >
> > Signed-off-by: Roman Gushchin <[email protected]>
>
> Reviewed-by: Vlastimil Babka <[email protected]>
>
> Suggestion below:
>
> > @@ -568,32 +548,33 @@ static __always_inline int charge_slab_page(struct page *page,
> > gfp_t gfp, int order,
> > struct kmem_cache *s)
> > {
> > - int ret;
> > -
> > - if (is_root_cache(s)) {
> > - mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
> > - PAGE_SIZE << order);
> > - return 0;
> > - }
> > +#ifdef CONFIG_MEMCG_KMEM
> > + if (!is_root_cache(s)) {
>
> This could also benefit from memcg_kmem_enabled() static key test AFAICS. Maybe
> even have a wrapper for both tests together?
Added.
>
> > + int ret;
> >
> > - ret = memcg_alloc_page_obj_cgroups(page, gfp, objs_per_slab(s));
> > - if (ret)
> > - return ret;
> > + ret = memcg_alloc_page_obj_cgroups(page, gfp, objs_per_slab(s));
>
> You created memcg_alloc_page_obj_cgroups() empty variant for !CONFIG_MEMCG_KMEM
> but now the only caller is under CONFIG_MEMCG_KMEM.
Good catch, thanks!
>
> > + if (ret)
> > + return ret;
> >
> > - return memcg_charge_slab(page, gfp, order, s);
> > + percpu_ref_get_many(&s->memcg_params.refcnt, 1 << order);
>
> Perhaps moving this refcount into memcg_alloc_page_obj_cgroups() (maybe the name
> should be different then) will allow you to not add #ifdef CONFIG_MEMCG_KMEM in
> this function.
The reference counter bumping is not related to obj_cgroups; we just bump the
counter for each slab page belonging to the kmem_cache.
And it will go away later in the patchset, together with the rest of the slab
cache refcounting.
>
> Maybe this is all moot after patch 12/19, will find out :)
>
> > + }
> > +#endif
> > + mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
> > + PAGE_SIZE << order);
> > + return 0;
> > }
> >
> > static __always_inline void uncharge_slab_page(struct page *page, int order,
> > struct kmem_cache *s)
> > {
> > - if (is_root_cache(s)) {
> > - mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
> > - -(PAGE_SIZE << order));
> > - return;
> > +#ifdef CONFIG_MEMCG_KMEM
> > + if (!is_root_cache(s)) {
>
> Everything from above also applies here.
Done.
Thanks!
>
> > + memcg_free_page_obj_cgroups(page);
> > + percpu_ref_put_many(&s->memcg_params.refcnt, 1 << order);
> > }
> > -
> > - memcg_free_page_obj_cgroups(page);
> > - memcg_uncharge_slab(page, order, s);
> > +#endif
> > + mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
> > + -(PAGE_SIZE << order));
> > }
> >
> > static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
>
>
On Tue, May 26, 2020 at 12:52:24PM +0200, Vlastimil Babka wrote:
> On 4/22/20 10:47 PM, Roman Gushchin wrote:
> > Currently there are two lists of kmem_caches:
> > 1) slab_caches, which contains all kmem_caches,
> > 2) slab_root_caches, which contains only root kmem_caches.
> >
> > And there is some preprocessor magic to have a single list
> > if CONFIG_MEMCG_KMEM isn't enabled.
> >
> > It was required earlier because the number of non-root kmem_caches
> > was proportional to the number of memory cgroups and could reach
> > really big values. Now, when it cannot exceed the number of root
> > kmem_caches, there is really no reason to maintain two lists.
> >
> > We never iterate over the slab_root_caches list on any hot paths,
> > so it's perfectly fine to iterate over slab_caches and filter out
> > non-root kmem_caches.
> >
> > It allows to remove a lot of config-dependent code and two pointers
> > from the kmem_cache structure.
> >
> > Signed-off-by: Roman Gushchin <[email protected]>
>
> Reviewed-by: Vlastimil Babka <[email protected]>
Thanks!
>
> > @@ -1148,11 +1126,12 @@ static void cache_show(struct kmem_cache *s, struct seq_file *m)
> >
> > static int slab_show(struct seq_file *m, void *p)
> > {
> > - struct kmem_cache *s = list_entry(p, struct kmem_cache, root_caches_node);
> > + struct kmem_cache *s = list_entry(p, struct kmem_cache, list);
> >
> > - if (p == slab_root_caches.next)
> > + if (p == slab_caches.next)
> > print_slabinfo_header(m);
> > - cache_show(s, m);
> > + if (is_root_cache(s))
> > + cache_show(s, m);
>
> If there wasn't patch 17/19 we could just remove this condition and have
> /proc/slabinfo contain the -memcg variants?
Sure, that's an option too. But because it's a user-facing interface, I'd keep it
as it is now, at least until everything settles down a bit.
Thanks!
On 5/26/20 7:53 PM, Roman Gushchin wrote:
> On Mon, May 25, 2020 at 05:07:22PM +0200, Vlastimil Babka wrote:
>> On 4/22/20 10:46 PM, Roman Gushchin wrote:
>> > diff --git a/mm/slab.h b/mm/slab.h
>> > index 44def57f050e..525e09e05743 100644
>> > --- a/mm/slab.h
>> > +++ b/mm/slab.h
>> ...
>> > @@ -636,8 +684,8 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
>> > s->flags, flags);
>> > }
>> >
>> > - if (memcg_kmem_enabled())
>> > - memcg_kmem_put_cache(s);
>> > + if (!is_root_cache(s))
>> > + memcg_slab_post_alloc_hook(s, objcg, size, p);
>> > }
>> >
>> > #ifndef CONFIG_SLOB
>>
>> Keep also the memcg_kmem_enabled() static key check, like elsewhere?
>>
>
> Ok, will add, it can speed things up a little bit. My only concern is that
> the code is not ready for memcg_kmem_enabled() turning negative after being positive.
> But it's not a concern, right?
>
> Actually, we can simplify memcg_kmem_enabled() mechanics and enable it
> only once as soon as the first memcg is fully initialized. I don't think there
> is any value in tracking the actual number of active memcgs.
Yeah, it should be acceptable that once the key is enabled after boot, there's
no way back until reboot.
> Thanks!
>