From: "Tobin C. Harding"
To: Andrew Morton
Cc: "Tobin C. Harding", Roman Gushchin, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Matthew Wilcox, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2 3/5] slab: Use slab_list instead of lru
Date: Wed, 13 Mar 2019 16:20:28 +1100
Message-Id: <20190313052030.13392-4-tobin@kernel.org>
In-Reply-To: <20190313052030.13392-1-tobin@kernel.org>
References: <20190313052030.13392-1-tobin@kernel.org>

Currently we use the page->lru list for maintaining lists of slabs.  We
have a list_head in the page structure (slab_list) that can be used for
this purpose.  Doing so makes the code cleaner since we are not
overloading the lru list.
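To see the idea in a stand-alone sketch (made-up fake_page type, ordinary
userspace C11 rather than kernel code), two list_heads placed in one union
occupy exactly the same bytes, so code may use either name for the node:

	#include <assert.h>
	#include <stddef.h>
	#include <stdio.h>

	/* Stand-in for the kernel's struct list_head. */
	struct list_head {
		struct list_head *next, *prev;
	};

	/* Made-up, heavily stripped-down analogue of struct page. */
	struct fake_page {
		unsigned long flags;
		union {
			struct list_head lru;		/* page cache / anon usage */
			struct list_head slab_list;	/* slab usage */
		};
	};

	int main(void)
	{
		/* Both union members live at the same offset ... */
		static_assert(offsetof(struct fake_page, lru) ==
			      offsetof(struct fake_page, slab_list),
			      "lru and slab_list alias the same storage");

		struct fake_page page;

		/* ... so a node initialised through one name ... */
		page.slab_list.next = &page.slab_list;
		page.slab_list.prev = &page.slab_list;

		/* ... reads back identically through the other. */
		printf("same bits: %s\n",
		       page.lru.next == &page.slab_list ? "yes" : "no");
		return 0;
	}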
The slab_list is part of a union within the page struct (included here
stripped down):

	union {
		struct {	/* Page cache and anonymous pages */
			struct list_head lru;
			...
		};
		struct {
			dma_addr_t dma_addr;
		};
		struct {	/* slab, slob and slub */
			union {
				struct list_head slab_list;
				struct {	/* Partial pages */
					struct page *next;
					int pages;	/* Nr of pages left */
					int pobjects;	/* Approximate count */
				};
			};
			...

Here we see that slab_list and lru are the same bits.  We can verify that
this change is safe to do by examining the object file produced from
slab.c before and after this patch is applied.

Steps taken to verify:

 1. checkout current tip of Linus' tree

    commit a667cb7a94d4 ("Merge branch 'akpm' (patches from Andrew)")

 2. configure and build (selecting SLAB allocator)

    CONFIG_SLAB=y
    CONFIG_SLAB_FREELIST_RANDOM=y
    CONFIG_DEBUG_SLAB=y
    CONFIG_DEBUG_SLAB_LEAK=y
    CONFIG_HAVE_DEBUG_KMEMLEAK=y

 3. disassemble object file `objdump -dr mm/slab.o > before.s`
 4. apply patch
 5. build
 6. disassemble object file `objdump -dr mm/slab.o > after.s`
 7. diff before.s after.s

Use slab_list list_head instead of the lru list_head for maintaining
lists of slabs.

Reviewed-by: Roman Gushchin
Signed-off-by: Tobin C. Harding
---
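Note (illustration only, not part of the change itself): list_for_each_entry()
and friends turn the member name into a compile-time constant offset via
container_of(), so when two union members share an offset the choice of name
cannot affect the generated code; this is why the before/after disassembly is
expected to match. A rough stand-alone sketch of the mechanism (simplified
macro, made-up fake_page type, userspace C):

	#include <stddef.h>
	#include <stdio.h>

	struct list_head {
		struct list_head *next, *prev;
	};

	/* Simplified container_of(): recover the containing struct from a
	 * pointer to one of its members; the member name only supplies a
	 * constant offset. */
	#define container_of(ptr, type, member) \
		((type *)((char *)(ptr) - offsetof(type, member)))

	struct fake_page {
		union {
			struct list_head lru;
			struct list_head slab_list;
		};
		int id;
	};

	int main(void)
	{
		struct fake_page p = { .id = 42 };
		struct list_head *node = &p.slab_list;

		/* offsetof() yields the same constant for either name, so the
		 * two lookups below compile to identical code. */
		struct fake_page *a = container_of(node, struct fake_page, slab_list);
		struct fake_page *b = container_of(node, struct fake_page, lru);

		printf("%d %d\n", a->id, b->id);	/* prints: 42 42 */
		return 0;
	}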
 mm/slab.c | 49 +++++++++++++++++++++++++------------------------
 1 file changed, 25 insertions(+), 24 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 28652e4218e0..09cc64ef9613 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1710,8 +1710,8 @@ static void slabs_destroy(struct kmem_cache *cachep, struct list_head *list)
 {
 	struct page *page, *n;
 
-	list_for_each_entry_safe(page, n, list, lru) {
-		list_del(&page->lru);
+	list_for_each_entry_safe(page, n, list, slab_list) {
+		list_del(&page->slab_list);
 		slab_destroy(cachep, page);
 	}
 }
@@ -2265,8 +2265,8 @@ static int drain_freelist(struct kmem_cache *cache,
 			goto out;
 		}
 
-		page = list_entry(p, struct page, lru);
-		list_del(&page->lru);
+		page = list_entry(p, struct page, slab_list);
+		list_del(&page->slab_list);
 		n->free_slabs--;
 		n->total_slabs--;
 		/*
@@ -2726,13 +2726,13 @@ static void cache_grow_end(struct kmem_cache *cachep, struct page *page)
 	if (!page)
 		return;
 
-	INIT_LIST_HEAD(&page->lru);
+	INIT_LIST_HEAD(&page->slab_list);
 	n = get_node(cachep, page_to_nid(page));
 
 	spin_lock(&n->list_lock);
 	n->total_slabs++;
 	if (!page->active) {
-		list_add_tail(&page->lru, &(n->slabs_free));
+		list_add_tail(&page->slab_list, &n->slabs_free);
 		n->free_slabs++;
 	} else
 		fixup_slab_list(cachep, n, page, &list);
@@ -2841,9 +2841,9 @@ static inline void fixup_slab_list(struct kmem_cache *cachep,
 				void **list)
 {
 	/* move slabp to correct slabp list: */
-	list_del(&page->lru);
+	list_del(&page->slab_list);
 	if (page->active == cachep->num) {
-		list_add(&page->lru, &n->slabs_full);
+		list_add(&page->slab_list, &n->slabs_full);
 		if (OBJFREELIST_SLAB(cachep)) {
 #if DEBUG
 			/* Poisoning will be done without holding the lock */
@@ -2857,7 +2857,7 @@ static inline void fixup_slab_list(struct kmem_cache *cachep,
 			page->freelist = NULL;
 		}
 	} else
-		list_add(&page->lru, &n->slabs_partial);
+		list_add(&page->slab_list, &n->slabs_partial);
 }
 
 /* Try to find non-pfmemalloc slab if needed */
@@ -2880,20 +2880,20 @@ static noinline struct page *get_valid_first_slab(struct kmem_cache_node *n,
 	}
 
 	/* Move pfmemalloc slab to the end of list to speed up next search */
-	list_del(&page->lru);
+	list_del(&page->slab_list);
 	if (!page->active) {
-		list_add_tail(&page->lru, &n->slabs_free);
+		list_add_tail(&page->slab_list, &n->slabs_free);
 		n->free_slabs++;
 	} else
-		list_add_tail(&page->lru, &n->slabs_partial);
+		list_add_tail(&page->slab_list, &n->slabs_partial);
 
-	list_for_each_entry(page, &n->slabs_partial, lru) {
+	list_for_each_entry(page, &n->slabs_partial, slab_list) {
 		if (!PageSlabPfmemalloc(page))
 			return page;
 	}
 
 	n->free_touched = 1;
-	list_for_each_entry(page, &n->slabs_free, lru) {
+	list_for_each_entry(page, &n->slabs_free, slab_list) {
 		if (!PageSlabPfmemalloc(page)) {
 			n->free_slabs--;
 			return page;
@@ -2908,11 +2908,12 @@ static struct page *get_first_slab(struct kmem_cache_node *n, bool pfmemalloc)
 	struct page *page;
 
 	assert_spin_locked(&n->list_lock);
-	page = list_first_entry_or_null(&n->slabs_partial, struct page, lru);
+	page = list_first_entry_or_null(&n->slabs_partial, struct page,
+					slab_list);
 	if (!page) {
 		n->free_touched = 1;
 		page = list_first_entry_or_null(&n->slabs_free, struct page,
-						lru);
+						slab_list);
 		if (page)
 			n->free_slabs--;
 	}
@@ -3413,29 +3414,29 @@ static void free_block(struct kmem_cache *cachep, void **objpp,
 		objp = objpp[i];
 
 		page = virt_to_head_page(objp);
-		list_del(&page->lru);
+		list_del(&page->slab_list);
 		check_spinlock_acquired_node(cachep, node);
 		slab_put_obj(cachep, page, objp);
 		STATS_DEC_ACTIVE(cachep);
 
 		/* fixup slab chains */
 		if (page->active == 0) {
-			list_add(&page->lru, &n->slabs_free);
+			list_add(&page->slab_list, &n->slabs_free);
 			n->free_slabs++;
 		} else {
 			/* Unconditionally move a slab to the end of the
 			 * partial list on free - maximum time for the
 			 * other objects to be freed, too.
 			 */
-			list_add_tail(&page->lru, &n->slabs_partial);
+			list_add_tail(&page->slab_list, &n->slabs_partial);
 		}
 	}
 
 	while (n->free_objects > n->free_limit && !list_empty(&n->slabs_free)) {
 		n->free_objects -= cachep->num;
 
-		page = list_last_entry(&n->slabs_free, struct page, lru);
-		list_move(&page->lru, list);
+		page = list_last_entry(&n->slabs_free, struct page, slab_list);
+		list_move(&page->slab_list, list);
 		n->free_slabs--;
 		n->total_slabs--;
 	}
@@ -3473,7 +3474,7 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac)
 		int i = 0;
 		struct page *page;
 
-		list_for_each_entry(page, &n->slabs_free, lru) {
+		list_for_each_entry(page, &n->slabs_free, slab_list) {
 			BUG_ON(page->active);
 
 			i++;
@@ -4336,9 +4337,9 @@ static int leaks_show(struct seq_file *m, void *p)
 		check_irq_on();
 		spin_lock_irq(&n->list_lock);
 
-		list_for_each_entry(page, &n->slabs_full, lru)
+		list_for_each_entry(page, &n->slabs_full, slab_list)
 			handle_slab(x, cachep, page);
-		list_for_each_entry(page, &n->slabs_partial, lru)
+		list_for_each_entry(page, &n->slabs_partial, slab_list)
 			handle_slab(x, cachep, page);
 		spin_unlock_irq(&n->list_lock);
 	}
-- 
2.21.0