From: "Tobin C. Harding" <tobin@kernel.org>
To: Andrew Morton, Matthew Wilcox
Harding" , Roman Gushchin , Alexander Viro , Christoph Hellwig , Pekka Enberg , David Rientjes , Joonsoo Kim , Christopher Lameter , Miklos Szeredi , Andreas Dilger , Waiman Long , Tycho Andersen , Theodore Ts'o , Andi Kleen , David Chinner , Nick Piggin , Rik van Riel , Hugh Dickins , Jonathan Corbet , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [RFC PATCH v5 11/16] tools/testing/slab: Add XArray movable objects tests Date: Mon, 20 May 2019 15:40:12 +1000 Message-Id: <20190520054017.32299-12-tobin@kernel.org> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190520054017.32299-1-tobin@kernel.org> References: <20190520054017.32299-1-tobin@kernel.org> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org We just implemented movable objects for the XArray. Let's test it intree. Add test module for the XArray's movable objects implementation. Functionality of the XArray Slab Movable Object implementation can usually be seen by simply by using `slabinfo` on a running machine since the radix tree is typically in use on a running machine and will have partial slabs. For repeated testing we can use the test module to run to simulate a workload on the XArray then use `slabinfo` to test object migration is functioning. If testing on freshly spun up VM (low radix tree workload) it may be necessary to load/unload the module a number of times to create partial slabs. Example test session -------------------- Relevant /proc/slabinfo column headers: name Prior to testing slabinfo report for radix_tree_node: # slabinfo radix_tree_node --report Slabcache: radix_tree_node Aliases: 0 Order : 2 Objects: 8352 ** Reclaim accounting active ** Defragmentation at 30% Sizes (bytes) Slabs Debug Memory ------------------------------------------------------------------------ Object : 576 Total : 497 Sanity Checks : On Total: 8142848 SlabObj: 912 Full : 473 Redzoning : On Used : 4810752 SlabSiz: 16384 Partial: 24 Poisoning : On Loss : 3332096 Loss : 336 CpuSlab: 0 Tracking : On Lalig: 2806272 Align : 8 Objects: 17 Tracing : Off Lpadd: 437360 Here you can see the kernel was built with Slab Movable Objects enabled for the XArray (XArray uses the radix tree below the surface). 
After inserting the test module (note that we have triggered allocation
of a number of radix tree nodes, increasing the object count but
decreasing the number of partial slabs):

# slabinfo radix_tree_node --report

Slabcache: radix_tree_node  Aliases:  0 Order :  2 Objects: 8442
** Reclaim accounting active
** Defragmentation at 30%

Sizes (bytes)     Slabs              Debug                Memory
------------------------------------------------------------------------
Object :     576  Total  :     499   Sanity Checks : On   Total: 8175616
SlabObj:     912  Full   :     484   Redzoning     : On   Used : 4862592
SlabSiz:   16384  Partial:      15   Poisoning     : On   Loss : 3313024
Loss   :     336  CpuSlab:       0   Tracking      : On   Lalig: 2836512
Align  :       8  Objects:      17   Tracing       : Off  Lpadd:  439120

Now we can shrink the radix_tree_node cache:

# slabinfo radix_tree_node --shrink
# slabinfo radix_tree_node --report

Slabcache: radix_tree_node  Aliases:  0 Order :  2 Objects: 8515
** Reclaim accounting active
** Defragmentation at 30%

Sizes (bytes)     Slabs              Debug                Memory
------------------------------------------------------------------------
Object :     576  Total  :     501   Sanity Checks : On   Total: 8208384
SlabObj:     912  Full   :     500   Redzoning     : On   Used : 4904640
SlabSiz:   16384  Partial:       1   Poisoning     : On   Loss : 3303744
Loss   :     336  CpuSlab:       0   Tracking      : On   Lalig: 2861040
Align  :       8  Objects:      17   Tracing       : Off  Lpadd:  440880

Note the single remaining partial slab.

Signed-off-by: Tobin C. Harding <tobin@kernel.org>
---
 tools/testing/slab/Makefile             |   2 +-
 tools/testing/slab/slub_defrag_xarray.c | 220 ++++++++++++++++++++++++
 2 files changed, 221 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/slab/slub_defrag_xarray.c

diff --git a/tools/testing/slab/Makefile b/tools/testing/slab/Makefile
index 440c2e3e356f..44c18d9a4d52 100644
--- a/tools/testing/slab/Makefile
+++ b/tools/testing/slab/Makefile
@@ -1,4 +1,4 @@
-obj-m += slub_defrag.o
+obj-m += slub_defrag.o slub_defrag_xarray.o
 
 KTREE=../../..
 
diff --git a/tools/testing/slab/slub_defrag_xarray.c b/tools/testing/slab/slub_defrag_xarray.c
new file mode 100644
index 000000000000..41143f73256c
--- /dev/null
+++ b/tools/testing/slab/slub_defrag_xarray.c
@@ -0,0 +1,220 @@
+// SPDX-License-Identifier: GPL-2.0+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/xarray.h>
+#include <linux/err.h>
+#include <linux/printk.h>
+#include <linux/types.h>
+#include <linux/mm.h>
+
+#define SMOX_CACHE_NAME "smox_test"
+static struct kmem_cache *cachep;
+
+/*
+ * Declare XArrays globally so we can clean them up on module unload.
+ */
+
+/* Used by test_smo_xarray() */
+DEFINE_XARRAY(things);
+
+/* Thing to store pointers to in the XArray */
+struct smox_thing {
+	long id;
+};
+
+/* It's up to the caller to ensure id is unique */
+static struct smox_thing *alloc_thing(int id)
+{
+	struct smox_thing *thing;
+
+	thing = kmem_cache_alloc(cachep, GFP_KERNEL);
+	if (!thing)
+		return ERR_PTR(-ENOMEM);
+
+	thing->id = id;
+	return thing;
+}
+
+/**
+ * smox_object_ctor() - SMO object constructor function.
+ * @ptr: Pointer to memory where the object should be constructed.
+ */
+void smox_object_ctor(void *ptr)
+{
+	struct smox_thing *thing = ptr;
+
+	thing->id = -1;
+}
+
+/**
+ * smox_cache_migrate() - kmem_cache migrate function.
+ * @cp: kmem_cache pointer.
+ * @objs: Array of pointers to objects to migrate.
+ * @size: Number of objects in @objs.
+ * @node: NUMA node where the object should be allocated.
+ * @private: Pointer returned by kmem_cache_isolate_func().
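+ *
+ * For each object in @objs we allocate a replacement from the same
+ * cache, copy the payload across, swap the XArray entry over to the
+ * replacement and free the original object.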
+ */
+void smox_cache_migrate(struct kmem_cache *cp, void **objs, int size,
+			int node, void *private)
+{
+	struct smox_thing **ptrs = (struct smox_thing **)objs;
+	struct smox_thing *old, *new;
+	struct smox_thing *thing;
+	unsigned long index;
+	void *entry;
+	int i;
+
+	for (i = 0; i < size; i++) {
+		old = ptrs[i];
+
+		new = kmem_cache_alloc(cachep, GFP_KERNEL);
+		if (!new) {
+			pr_debug("kmem_cache_alloc failed\n");
+			return;
+		}
+
+		new->id = old->id;
+
+		/* Update reference the brain dead way */
+		xa_for_each(&things, index, thing) {
+			if (thing == old) {
+				entry = xa_store(&things, index, new, GFP_KERNEL);
+				if (entry != old) {
+					pr_err("failed to exchange new/old\n");
+					return;
+				}
+			}
+		}
+		kmem_cache_free(cachep, old);
+	}
+}
+
+/*
+ * test_smo_xarray() - Run some tests using an XArray.
+ */
+static int test_smo_xarray(void)
+{
+	const int keep = 6;	/* Free 5 out of 6 items */
+	const int nr_items = 10000;
+	struct smox_thing *thing;
+	unsigned long index;
+	void *entry;
+	int expected;
+	int i;
+
+	/*
+	 * Populate XArray, this adds to the radix_tree_node cache as
+	 * well as the smox_test cache.
+	 */
+	for (i = 0; i < nr_items; i++) {
+		thing = alloc_thing(i);
+		if (IS_ERR(thing)) {
+			pr_err("smox: failed to allocate thing: %d\n", i);
+			return PTR_ERR(thing);
+		}
+
+		entry = xa_store(&things, i, thing, GFP_KERNEL);
+		if (xa_is_err(entry)) {
+			pr_err("smox: failed to store entry: %d\n", i);
+			return xa_err(entry);
+		}
+	}
+
+	/* Now free items, putting holes in both caches. */
+	for (i = 0; i < nr_items; i++) {
+		if (i % keep == 0)
+			continue;
+
+		thing = xa_erase(&things, i);
+		if (xa_is_err(thing))
+			pr_err("smox: error erasing entry: %d\n", i);
+		kmem_cache_free(cachep, thing);
+	}
+
+	expected = 0;
+	xa_for_each(&things, index, thing) {
+		if (thing->id != expected || index != expected) {
+			pr_err("smox: error; got %ld want %d at %lu\n",
+			       thing->id, expected, index);
+			return -1;
+		}
+		expected += keep;
+	}
+
+	/*
+	 * Leave caches sparsely allocated.  Shrink caches manually with:
+	 *
+	 *	slabinfo radix_tree_node --shrink
+	 *	slabinfo smox_test --shrink
+	 */
+
+	return 0;
+}
+
+static int __init smox_cache_init(void)
+{
+	cachep = kmem_cache_create(SMOX_CACHE_NAME,
+				   sizeof(struct smox_thing),
+				   0, 0, smox_object_ctor);
+	if (!cachep)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static void __exit smox_cache_cleanup(void)
+{
+	struct smox_thing *thing;
+	unsigned long i;
+
+	xa_for_each(&things, i, thing) {
+		kmem_cache_free(cachep, thing);
+	}
+	xa_destroy(&things);
+	kmem_cache_destroy(cachep);
+}
+
+static int __init smox_init(void)
+{
+	int ret;
+
+	ret = smox_cache_init();
+	if (ret) {
+		pr_err("smo_xarray: failed to create cache\n");
+		return ret;
+	}
+	pr_info("smo_xarray: created kmem_cache: %s\n", SMOX_CACHE_NAME);
+
+	kmem_cache_setup_mobility(cachep, NULL, smox_cache_migrate);
+	pr_info("smo_xarray: kmem_cache %s defrag enabled\n", SMOX_CACHE_NAME);
+
+	/*
+	 * Running this test consumes memory unless you shrink the
+	 * radix_tree_node cache manually with `slabinfo`.
+	 */
+	ret = test_smo_xarray();
+	if (ret)
+		pr_warn("test_smo_xarray failed: %d\n", ret);
+
+	pr_info("smo_xarray: module loaded successfully\n");
+	return 0;
+}
+module_init(smox_init);
+
+static void __exit smox_exit(void)
+{
+	smox_cache_cleanup();
+
+	pr_info("smo_xarray: module removed\n");
+}
+module_exit(smox_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Tobin C. Harding");
+MODULE_DESCRIPTION("SMO XArray test module.");
-- 
2.21.0