From: "Tobin C. Harding" <tobin@kernel.org>
To: Andrew Morton
Cc: "Tobin C. Harding", Christopher Lameter, Pekka Enberg,
    Matthew Wilcox, Tycho Andersen, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [RFC 15/15] slub: Enable balancing slab objects across nodes
Date: Fri, 8 Mar 2019 15:14:26 +1100
Message-Id: <20190308041426.16654-16-tobin@kernel.org>
In-Reply-To: <20190308041426.16654-1-tobin@kernel.org>
References: <20190308041426.16654-1-tobin@kernel.org>

We have just implemented Slab Movable Objects (SMO).  On NUMA systems
slabs can become unbalanced, i.e. many objects on one node while other
nodes have few objects.  Using SMO we can balance the objects across
all the nodes.
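As an aside (illustration only, not part of this patch): the per-node
spread for a cache can be observed from userspace via the existing SLUB
sysfs counters; on NUMA kernels, files such as
/sys/kernel/slab/<cache>/objects append per-node counts in the form
N<node>=<count>.  A minimal sketch, using "dentry" purely as an example
cache name:

  /* Print the per-node object spread reported by SLUB's sysfs
   * 'objects' file, e.g. "4200 N0=3900 N1=300".
   */
  #include <stdio.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
          const char *cache = (argc > 1) ? argv[1] : "dentry";
          char path[256], line[512];
          FILE *f;

          snprintf(path, sizeof(path), "/sys/kernel/slab/%s/objects", cache);
          f = fopen(path, "r");
          if (!f) {
                  perror(path);
                  return EXIT_FAILURE;
          }
          if (fgets(line, sizeof(line), f))
                  printf("%s: %s", cache, line);
          fclose(f);
          return EXIT_SUCCESS;
  }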
The algorithm used is as follows:

 1. Move all objects to node 0 (this has the effect of defragmenting
    the cache).

 2. Calculate the desired number of slabs for each node (this is done
    using the approximation nr_slabs / nr_nodes).

 3. Loop over the nodes, moving the desired number of slabs from node 0
    to each node.

The feature is conditionally built in with CONFIG_SMO_NODE because we
need the full list of slabs (we enable SLUB_DEBUG to get this).  A
future version may separate the full list out from SLUB_DEBUG.

Expose this functionality to userspace via a new sysfs entry:

  /sys/kernel/slab/<cache>/balance

Writing '1' to this file triggers a balance; no other value is
accepted.

This feature relies on SMO being enabled for the cache, which is done
with a call to

  kmem_cache_setup_mobility(s, isolate, migrate)

after the isolate/migrate functions have been defined.

Signed-off-by: Tobin C. Harding <tobin@kernel.org>
---
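A minimal userspace sketch of driving the new interface (illustration
only, not part of the diff below).  It assumes a kernel built with
CONFIG_SMO_NODE, a cache for which kmem_cache_setup_mobility() has been
called, and uses "dentry" purely as an example cache name:

  /*
   * Trigger a balance by writing '1' to the new sysfs file.
   * Per balance_store() below, a write that does not begin
   * with '1' returns -EINVAL.
   */
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
          const char *path = "/sys/kernel/slab/dentry/balance";
          int fd = open(path, O_WRONLY);

          if (fd < 0) {
                  perror(path);
                  return 1;
          }
          if (write(fd, "1", 1) != 1)
                  perror("write");
          close(fd);
          return 0;
  }

The same can be done from a shell with
echo 1 > /sys/kernel/slab/dentry/balance.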
 mm/slub.c | 115 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 115 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index ac9b8f592e10..65cf305a70c3 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4584,6 +4584,104 @@ static unsigned long __move_all_objects_to(struct kmem_cache *s, int node)
 	return left;
 }
+
+/*
+ * __move_n_slabs() - Attempt to move 'num' slabs to target_node,
+ * Return: The number of slabs moved or error code.
+ */
+static long __move_n_slabs(struct kmem_cache *s, int node, int target_node,
+			   long num)
+{
+	struct kmem_cache_node *n = get_node(s, node);
+	LIST_HEAD(move_list);
+	struct page *page, *page2;
+	unsigned long flags;
+	void **scratch;
+	long done = 0;
+
+	if (node == target_node)
+		return -EINVAL;
+
+	scratch = alloc_scratch(s);
+	if (!scratch)
+		return -ENOMEM;
+
+	spin_lock_irqsave(&n->list_lock, flags);
+	list_for_each_entry_safe(page, page2, &n->full, lru) {
+		if (!slab_trylock(page))
+			/* Busy slab. Get out of the way */
+			continue;
+
+		list_move(&page->lru, &move_list);
+		page->frozen = 1;
+		slab_unlock(page);
+
+		if (++done >= num)
+			break;
+	}
+	spin_unlock_irqrestore(&n->list_lock, flags);
+
+	list_for_each_entry(page, &move_list, lru) {
+		if (page->inuse)
+			__move(page, scratch, target_node);
+	}
+	kfree(scratch);
+
+	/* Inspect results and dispose of pages */
+	spin_lock_irqsave(&n->list_lock, flags);
+	list_for_each_entry_safe(page, page2, &move_list, lru) {
+		list_del(&page->lru);
+		slab_lock(page);
+		page->frozen = 0;
+
+		if (page->inuse) {
+			/*
+			 * This is best effort only, if slab still has
+			 * objects just put it back on the partial list.
+			 */
+			n->nr_partial++;
+			list_add_tail(&page->lru, &n->partial);
+			slab_unlock(page);
+		} else {
+			slab_unlock(page);
+			discard_slab(s, page);
+		}
+	}
+	spin_unlock_irqrestore(&n->list_lock, flags);
+
+	return done;
+}
+
+/*
+ * __balance_nodes_partial() - Balance partial objects.
+ * @s: The cache we are working on.
+ *
+ * Attempt to balance the objects that are in partial slabs evenly
+ * across all nodes.
+ */
+static void __balance_nodes_partial(struct kmem_cache *s)
+{
+	struct kmem_cache_node *n = get_node(s, 0);
+	unsigned long desired_nr_slabs_per_node;
+	unsigned long nr_slabs;
+	int nr_nodes = 0;
+	int nid;
+
+	(void)__move_all_objects_to(s, 0);
+
+	for_each_node_state(nid, N_NORMAL_MEMORY)
+		nr_nodes++;
+
+	nr_slabs = atomic_long_read(&n->nr_slabs);
+	desired_nr_slabs_per_node = nr_slabs / nr_nodes;
+
+	for_each_node_state(nid, N_NORMAL_MEMORY) {
+		if (nid == 0)
+			continue;
+
+		__move_n_slabs(s, 0, nid, desired_nr_slabs_per_node);
+	}
+}
 #endif
 
 /**
@@ -5836,6 +5934,22 @@ static ssize_t move_store(struct kmem_cache *s, const char *buf, size_t length)
 	return length;
 }
 SLAB_ATTR(move);
+
+static ssize_t balance_show(struct kmem_cache *s, char *buf)
+{
+	return 0;
+}
+
+static ssize_t balance_store(struct kmem_cache *s,
+			     const char *buf, size_t length)
+{
+	if (buf[0] == '1')
+		__balance_nodes_partial(s);
+	else
+		return -EINVAL;
+	return length;
+}
+SLAB_ATTR(balance);
 #endif	/* CONFIG_SMO_NODE */
 
 #ifdef CONFIG_NUMA
@@ -5964,6 +6078,7 @@ static struct attribute *slab_attrs[] = {
 	&shrink_attr.attr,
 #ifdef CONFIG_SMO_NODE
 	&move_attr.attr,
+	&balance_attr.attr,
 #endif
 	&slabs_cpu_partial_attr.attr,
 #ifdef CONFIG_SLUB_DEBUG
-- 
2.21.0