2021-01-06 17:43:24

by Vlastimil Babka

Subject: [RFC 0/3] mm, slab, slub: remove cpu and memory hotplug locks

Hi,

some related work caused me to look at how we use get/put_online_mems() and
get/put_online_cpus() during kmem cache creation/destruction/shrinking, and to
realize that it should actually be safe to remove all of that with rather small
effort (as e.g. Michal Hocko already suspected in some of the past
discussions). This has the benefit of avoiding rather heavy locks that have
caused locking order issues in the past. So this is the result: Patches 1 and 2
remove the memory hotplug and cpu hotplug locking, respectively. Patch 3 is due
to the realization that some races in fact exist despite the locks (even if not
removed), but the sanest solution is not to introduce more locks, but
rather accept some wasted memory in scenarios that should be rare anyway (full
memory hot remove), as we do the same in other contexts already. It's all RFC
for now, as I might have missed some reason why it's not safe.

Vlastimil Babka (3):
mm, slab, slub: stop taking memory hotplug lock
mm, slab, slub: stop taking cpu hotplug lock
mm, slub: stop freeing kmem_cache_node structures on node offline

mm/slab_common.c | 20 ++++--------------
mm/slub.c | 54 ++++++++++++++++++++++++++++++++----------------
2 files changed, 40 insertions(+), 34 deletions(-)

--
2.29.2


2021-01-06 17:44:28

by Vlastimil Babka

Subject: [RFC 1/3] mm, slab, slub: stop taking memory hotplug lock

Since commit 03afc0e25f7f ("slab: get_online_mems for
kmem_cache_{create,destroy,shrink}"), we take the memory hotplug lock for SLAB
and SLUB when creating, destroying or shrinking a cache. It is quite a heavy
lock and it is best avoided if possible, as we have had several issues with
lockdep complaining about lock ordering in the past, see e.g. e4f8e513c3d3
("mm/slub: fix a deadlock in show_slab_objects()").

The problem scenario in 03afc0e25f7f (solved by the memory hotplug lock) can be
summarized as follows: even though slab_mutex synchronizes new kmem cache
creation with SLUB's MEM_GOING_ONLINE callback
slab_mem_going_online_callback(), we may still miss the creation of
kmem_cache_node for the hotplugged node in the new kmem cache. The hotplug
callback doesn't see the new cache on the slab_caches list yet, and cache
creation in init_kmem_cache_nodes() only inits kmem_cache_node for nodes in
the N_NORMAL_MEMORY nodemask, which may not yet include the new node, as that
only happens later, after the MEM_GOING_ONLINE callback.
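To illustrate, a rough sketch of the interleaving (function names as in
mm/slub.c; bodies abbreviated, not the exact code):

    memory hotplug (node X going online)  kmem_cache_create()
    ------------------------------------  -------------------
    MEM_GOING_ONLINE notifier:
      slab_mem_going_online_callback()
        mutex_lock(&slab_mutex);
        /* allocates kmem_cache_node on
         * node X for every cache on
         * slab_caches -- the new cache
         * is not on the list yet */
        mutex_unlock(&slab_mutex);
                                          mutex_lock(&slab_mutex);
                                          init_kmem_cache_nodes(s);
                                            /* iterates N_NORMAL_MEMORY,
                                             * which doesn't include
                                             * node X yet */
                                          list_add(&s->list, &slab_caches);
                                          mutex_unlock(&slab_mutex);
    node X set in N_NORMAL_MEMORY
    /* the new cache ends up with
     * s->node[X] == NULL */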

Instead of using get/put_online_mems(), the problem can be solved by SLUB
maintaining its own nodemask of nodes for which it has allocated the per-node
kmem_cache_node structures. This nodemask would generally mirror the
N_NORMAL_MEMORY nodemask, but would be updated only under SLUB's control, in
its memory hotplug callbacks, under the slab_mutex. This patch adds such a
nodemask and its handling.
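Condensed from the diff below, the lifecycle of the new nodemask amounts to
three touch points, all under slab_mutex (a sketch using the standard
nodemask_t helpers from include/linux/nodemask.h):

    static nodemask_t slab_nodes;	/* protected by slab_mutex */

    /* MEM_GOING_ONLINE: allocate kmem_cache_node for existing caches,
     * only then publish the node */
    node_set(nid, slab_nodes);

    /* MEM_OFFLINE: hide the node before freeing its structures */
    node_clear(offline_node, slab_nodes);

    /* cache creation: use slab_nodes instead of N_NORMAL_MEMORY */
    for_each_node_mask(node, slab_nodes)
            /* allocate and init s->node[node] */;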

Commit 03afc0e25f7f mentions "issues like [the one above]", but there don't
appear to be further issues. All the paths (shared between SLAB and SLUB) that
take the memory hotplug locks also take the slab_mutex, except
kmem_cache_shrink(), where 03afc0e25f7f replaced slab_mutex with
get/put_online_mems(). So this patch restores slab_mutex in
kmem_cache_shrink(). slab_mutex should otherwise be sufficient, as all the
memory hotplug callbacks in SLUB take it as well.

SLAB also doesn't need the memory hotplug locking, which it gained only
through the shared paths in slab_common.c by 03afc0e25f7f. Its memory hotplug
callbacks are also protected by slab_mutex against races with these paths. The
problem of SLUB relying on N_NORMAL_MEMORY doesn't apply to SLAB, as its
setup_kmem_cache_nodes() relies on N_ONLINE, and the new node is already set
there during the MEM_GOING_ONLINE callback, so no special care is needed
for SLAB.
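For contrast, an abbreviated sketch of SLAB's per-node setup (simplified from
mm/slab.c, error unwinding omitted), which iterates N_ONLINE via
for_each_online_node() and thus already sees the node that is going online:

    static int setup_kmem_cache_nodes(struct kmem_cache *cachep, gfp_t gfp)
    {
            int node;

            for_each_online_node(node) {	/* N_ONLINE-based */
                    if (setup_kmem_cache_node(cachep, node, gfp, true))
                            return -ENOMEM;	/* real code unwinds here */
            }
            return 0;
    }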

As such, this patch removes all get/put_online_mems() usage by the slab
subsystem.

Signed-off-by: Vlastimil Babka <[email protected]>
---
mm/slab_common.c | 10 ++++------
mm/slub.c | 28 +++++++++++++++++++++++++---
2 files changed, 29 insertions(+), 9 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 075b23ce94ec..e728265c8b7d 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -311,7 +311,6 @@ kmem_cache_create_usercopy(const char *name,
int err;

get_online_cpus();
- get_online_mems();

mutex_lock(&slab_mutex);

@@ -361,7 +360,6 @@ kmem_cache_create_usercopy(const char *name,
out_unlock:
mutex_unlock(&slab_mutex);

- put_online_mems();
put_online_cpus();

if (err) {
@@ -490,7 +488,6 @@ void kmem_cache_destroy(struct kmem_cache *s)
return;

get_online_cpus();
- get_online_mems();

mutex_lock(&slab_mutex);

@@ -507,7 +504,6 @@ void kmem_cache_destroy(struct kmem_cache *s)
out_unlock:
mutex_unlock(&slab_mutex);

- put_online_mems();
put_online_cpus();
}
EXPORT_SYMBOL(kmem_cache_destroy);
@@ -526,10 +522,12 @@ int kmem_cache_shrink(struct kmem_cache *cachep)
int ret;

get_online_cpus();
- get_online_mems();
+ mutex_lock(&slab_mutex);
+
kasan_cache_shrink(cachep);
ret = __kmem_cache_shrink(cachep);
- put_online_mems();
+
+ mutex_unlock(&slab_mutex);
put_online_cpus();
return ret;
}
diff --git a/mm/slub.c b/mm/slub.c
index 1f4584954f4c..2e2edd5c9cfc 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -236,6 +236,14 @@ static inline void stat(const struct kmem_cache *s, enum stat_item si)
#endif
}

+/*
+ * Tracks for which NUMA nodes we have kmem_cache_nodes allocated.
+ * Corresponds to node_state[N_NORMAL_MEMORY], but can temporarily
+ * differ during memory hotplug/hotremove operations.
+ * Protected by slab_mutex.
+ */
+static nodemask_t slab_nodes;
+
/********************************************************************
* Core slab cache functions
*******************************************************************/
@@ -2678,7 +2686,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
* ignore the node constraint
*/
if (unlikely(node != NUMA_NO_NODE &&
- !node_state(node, N_NORMAL_MEMORY)))
+ !node_isset(node, slab_nodes)))
node = NUMA_NO_NODE;
goto new_slab;
}
@@ -2689,7 +2697,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
* same as above but node_match() being false already
* implies node != NUMA_NO_NODE
*/
- if (!node_state(node, N_NORMAL_MEMORY)) {
+ if (!node_isset(node, slab_nodes)) {
node = NUMA_NO_NODE;
goto redo;
} else {
@@ -3599,7 +3607,7 @@ static int init_kmem_cache_nodes(struct kmem_cache *s)
{
int node;

- for_each_node_state(node, N_NORMAL_MEMORY) {
+ for_each_node_mask(node, slab_nodes) {
struct kmem_cache_node *n;

if (slab_state == DOWN) {
@@ -4259,6 +4267,7 @@ static void slab_mem_offline_callback(void *arg)
return;

mutex_lock(&slab_mutex);
+ node_clear(offline_node, slab_nodes);
list_for_each_entry(s, &slab_caches, list) {
n = get_node(s, offline_node);
if (n) {
@@ -4312,6 +4321,11 @@ static int slab_mem_going_online_callback(void *arg)
init_kmem_cache_node(n);
s->node[nid] = n;
}
+ /*
+ * Any cache created after this point will also have kmem_cache_node
+ * initialized for the new node.
+ */
+ node_set(nid, slab_nodes);
out:
mutex_unlock(&slab_mutex);
return ret;
@@ -4392,6 +4406,7 @@ void __init kmem_cache_init(void)
{
static __initdata struct kmem_cache boot_kmem_cache,
boot_kmem_cache_node;
+ int node;

if (debug_guardpage_minorder())
slub_max_order = 0;
@@ -4399,6 +4414,13 @@ void __init kmem_cache_init(void)
kmem_cache_node = &boot_kmem_cache_node;
kmem_cache = &boot_kmem_cache;

+ /*
+ * Initialize the nodemask for which we will allocate per node
+ * structures. There is no need to take slab_mutex here yet.
+ */
+ for_each_node_state(node, N_NORMAL_MEMORY)
+ node_set(node, slab_nodes);
+
create_boot_cache(kmem_cache_node, "kmem_cache_node",
sizeof(struct kmem_cache_node), SLAB_HWCACHE_ALIGN, 0, 0);

--
2.29.2

2021-01-06

by Christoph Lameter

Subject: Re: [RFC 0/3] mm, slab, slub: remove cpu and memory hotplug locks

On Wed, 6 Jan 2021, Vlastimil Babka wrote:

> rather accept some wasted memory in scenarios that should be rare anyway (full
> memory hot remove), as we do the same in other contexts already. It's all RFC
> for now, as I might have missed some reason why it's not safe.

Looks good to me. My only concern is the kernel that has hotplug disabled.
Current code allows the online/offline checks to be optimized away.

Can this patch be enhanced to do the same?

2021-01-12 09:06:14

by Vlastimil Babka

Subject: Re: [RFC 0/3] mm, slab, slub: remove cpu and memory hotplug locks


On 1/6/21 8:09 PM, Christoph Lameter wrote:
> On Wed, 6 Jan 2021, Vlastimil Babka wrote:
>
>> rather accept some wasted memory in scenarios that should be rare anyway (full
>> memory hot remove), as we do the same in other contexts already. It's all RFC
>> for now, as I might have missed some reason why it's not safe.
>
> Looks good to me. My only concern is the kernel that has hotplug disabled.
> Current code allows the online/offline checks to be optimized away.
>
> Can this patch be enhanced to do the same?

Thanks, indeed I didn't think about that.
But on closer inspection, there doesn't seem to be a need for such an
enhancement:

- Patch 1 adds the new slab_nodes nodemask, which is basically a copy of
N_NORMAL_MEMORY. Without memory hotplug, the callbacks that would update it
don't occur (and may even be eliminated as dead code?). Other code that uses
the nodemask is unaffected with respect to performance; it just uses a
different nodemask for the same operations. The extra memory usage of adding
the nodemask is negligible and not worth complicating the code to distinguish
between the new nodemask and N_NORMAL_MEMORY depending on whether hotplug is
enabled or disabled.

- Patch 1 also restores the slab_mutex lock in kmem_cache_shrink(). Commit
03afc0e25f7f removed it because the memory hotplug lock was deemed a
sufficient replacement, but probably didn't consider the case where hotplug is
disabled and thus the hotplug lock is a no-op. Maybe it's safe not to take
slab_mutex in kmem_cache_shrink() in that case, but kmem_cache_shrink() is only
called from a sysfs handler, thus a very cold path anyway.
But I found out that lockdep complains about it, so I have to rethink this part
anyway... (When kmem_cache_shrink() is called from a write to the 'shrink'
file, we already hold the kernfs lock "kn->active#28" and try to take
slab_mutex, but there's an existing dependency in the reverse order:
kmem_cache_create() starts with slab_mutex and sysfs_slab_add() takes the
kernfs lock. I wonder how this wasn't a problem before 03afc0e25f7f.)
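Roughly, the two lock orders look like this (a sketch of the call chains, not
exact backtraces; "kn->active#28" is lockdep's name for the kernfs node lock):

    kmem_cache_create()                 write to /sys/kernel/slab/<cache>/shrink
    -------------------                 -----------------------------------------
    mutex_lock(&slab_mutex);            kernfs_fop_write()
    ...                                   /* kernfs node lock held */
    sysfs_slab_add()                      shrink_store()
      /* takes the kernfs node lock */      mutex_lock(&slab_mutex);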

- Patch 2 purely removes the calls taking the cpu hotplug lock.

- Patch 3 only affects the memory hotplug callbacks, so there's nothing to
enhance when hotplug is disabled.