From: "Tobin C. Harding"
To: Andrew Morton
Cc: "Tobin C. Harding", Roman Gushchin, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Matthew Wilcox, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2 2/5] slub: Use slab_list instead of lru
Date: Wed, 13 Mar 2019 16:20:27 +1100
Message-Id: <20190313052030.13392-3-tobin@kernel.org>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190313052030.13392-1-tobin@kernel.org>
References: <20190313052030.13392-1-tobin@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Currently we use the page->lru list for maintaining lists of slabs.  We
have a list_head in the page structure (slab_list) that can be used for
this purpose.
Doing so makes the code cleaner since we are not overloading the lru
list.

The slab_list is part of a union within the page struct (included here
stripped down):

	union {
		struct {	/* Page cache and anonymous pages */
			struct list_head lru;
			...
		};
		struct {
			dma_addr_t dma_addr;
		};
		struct {	/* slab, slob and slub */
			union {
				struct list_head slab_list;
				struct {	/* Partial pages */
					struct page *next;
					int pages;	/* Nr of pages left */
					int pobjects;	/* Approximate count */
				};
			};
		...

Here we see that slab_list and lru are the same bits.  We can verify
that this change is safe to do by examining the object file produced
from slub.c before and after this patch is applied.

Steps taken to verify:

 1. checkout current tip of Linus' tree

    commit a667cb7a94d4 ("Merge branch 'akpm' (patches from Andrew)")

 2. configure and build (defaults to SLUB allocator)

    CONFIG_SLUB=y
    CONFIG_SLUB_DEBUG=y
    CONFIG_SLUB_DEBUG_ON=y
    CONFIG_SLUB_STATS=y
    CONFIG_HAVE_DEBUG_KMEMLEAK=y
    CONFIG_SLAB_FREELIST_RANDOM=y
    CONFIG_SLAB_FREELIST_HARDENED=y

 3. disassemble object file `objdump -dr mm/slub.o > before.s`
 4. apply patch
 5. build
 6. disassemble object file `objdump -dr mm/slub.o > after.s`
 7. diff before.s after.s

Use slab_list list_head instead of the lru list_head for maintaining
lists of slabs.

Reviewed-by: Roman Gushchin
Signed-off-by: Tobin C. Harding
---
 mm/slub.c | 40 ++++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index b282e22885cd..d692b5e0163d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1023,7 +1023,7 @@ static void add_full(struct kmem_cache *s,
 		return;
 
 	lockdep_assert_held(&n->list_lock);
-	list_add(&page->lru, &n->full);
+	list_add(&page->slab_list, &n->full);
 }
 
 static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, struct page *page)
@@ -1032,7 +1032,7 @@ static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, struct
 		return;
 
 	lockdep_assert_held(&n->list_lock);
-	list_del(&page->lru);
+	list_del(&page->slab_list);
 }
 
 /* Tracking of the number of slabs for debugging purposes */
@@ -1773,9 +1773,9 @@ __add_partial(struct kmem_cache_node *n, struct page *page, int tail)
 {
 	n->nr_partial++;
 	if (tail == DEACTIVATE_TO_TAIL)
-		list_add_tail(&page->lru, &n->partial);
+		list_add_tail(&page->slab_list, &n->partial);
 	else
-		list_add(&page->lru, &n->partial);
+		list_add(&page->slab_list, &n->partial);
 }
 
 static inline void add_partial(struct kmem_cache_node *n,
@@ -1789,7 +1789,7 @@ static inline void remove_partial(struct kmem_cache_node *n,
 					struct page *page)
 {
 	lockdep_assert_held(&n->list_lock);
-	list_del(&page->lru);
+	list_del(&page->slab_list);
 	n->nr_partial--;
 }
 
@@ -1863,7 +1863,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 		return NULL;
 
 	spin_lock(&n->list_lock);
-	list_for_each_entry_safe(page, page2, &n->partial, lru) {
+	list_for_each_entry_safe(page, page2, &n->partial, slab_list) {
 		void *t;
 
 		if (!pfmemalloc_match(page, flags))
@@ -2407,7 +2407,7 @@ static unsigned long count_partial(struct kmem_cache_node *n,
 	struct page *page;
 
 	spin_lock_irqsave(&n->list_lock, flags);
-	list_for_each_entry(page, &n->partial, lru)
+	list_for_each_entry(page, &n->partial, slab_list)
 		x += get_count(page);
 	spin_unlock_irqrestore(&n->list_lock, flags);
 	return x;
@@ -3702,10 +3702,10 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 
 	BUG_ON(irqs_disabled());
 	spin_lock_irq(&n->list_lock);
-	list_for_each_entry_safe(page, h, &n->partial, lru) {
+	list_for_each_entry_safe(page, h, &n->partial, slab_list) {
 		if (!page->inuse) {
 			remove_partial(n, page);
-			list_add(&page->lru, &discard);
+			list_add(&page->slab_list, &discard);
 		} else {
 			list_slab_objects(s, page,
 			"Objects remaining in %s on __kmem_cache_shutdown()");
@@ -3713,7 +3713,7 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 	}
 	spin_unlock_irq(&n->list_lock);
 
-	list_for_each_entry_safe(page, h, &discard, lru)
+	list_for_each_entry_safe(page, h, &discard, slab_list)
 		discard_slab(s, page);
 }
 
@@ -3993,7 +3993,7 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 		 * Note that concurrent frees may occur while we hold the
 		 * list_lock. page->inuse here is the upper limit.
 		 */
-		list_for_each_entry_safe(page, t, &n->partial, lru) {
+		list_for_each_entry_safe(page, t, &n->partial, slab_list) {
 			int free = page->objects - page->inuse;
 
 			/* Do not reread page->inuse */
@@ -4003,10 +4003,10 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 			BUG_ON(free <= 0);
 
 			if (free == page->objects) {
-				list_move(&page->lru, &discard);
+				list_move(&page->slab_list, &discard);
 				n->nr_partial--;
 			} else if (free <= SHRINK_PROMOTE_MAX)
-				list_move(&page->lru, promote + free - 1);
+				list_move(&page->slab_list, promote + free - 1);
 		}
 
 		/*
@@ -4019,7 +4019,7 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 		spin_unlock_irqrestore(&n->list_lock, flags);
 
 		/* Release empty slabs */
-		list_for_each_entry_safe(page, t, &discard, lru)
+		list_for_each_entry_safe(page, t, &discard, slab_list)
 			discard_slab(s, page);
 
 		if (slabs_node(s, node))
@@ -4211,11 +4211,11 @@ static struct kmem_cache * __init bootstrap(struct kmem_cache *static_cache)
 	for_each_kmem_cache_node(s, node, n) {
 		struct page *p;
 
-		list_for_each_entry(p, &n->partial, lru)
+		list_for_each_entry(p, &n->partial, slab_list)
 			p->slab_cache = s;
 
 #ifdef CONFIG_SLUB_DEBUG
-		list_for_each_entry(p, &n->full, lru)
+		list_for_each_entry(p, &n->full, slab_list)
 			p->slab_cache = s;
 #endif
 	}
@@ -4432,7 +4432,7 @@ static int validate_slab_node(struct kmem_cache *s,
 
 	spin_lock_irqsave(&n->list_lock, flags);
 
-	list_for_each_entry(page, &n->partial, lru) {
+	list_for_each_entry(page, &n->partial, slab_list) {
 		validate_slab_slab(s, page, map);
 		count++;
 	}
@@ -4443,7 +4443,7 @@ static int validate_slab_node(struct kmem_cache *s,
 	if (!(s->flags & SLAB_STORE_USER))
 		goto out;
 
-	list_for_each_entry(page, &n->full, lru) {
+	list_for_each_entry(page, &n->full, slab_list) {
 		validate_slab_slab(s, page, map);
 		count++;
 	}
@@ -4639,9 +4639,9 @@ static int list_locations(struct kmem_cache *s, char *buf,
 			continue;
 
 		spin_lock_irqsave(&n->list_lock, flags);
-		list_for_each_entry(page, &n->partial, lru)
+		list_for_each_entry(page, &n->partial, slab_list)
 			process_slab(&t, s, page, alloc, map);
-		list_for_each_entry(page, &n->full, lru)
+		list_for_each_entry(page, &n->full, slab_list)
 			process_slab(&t, s, page, alloc, map);
 		spin_unlock_irqrestore(&n->list_lock, flags);
 	}
-- 
2.21.0
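
The "same bits" argument in the changelog can also be sanity-checked
outside the kernel tree.  Below is a minimal, self-contained userspace
sketch (it uses a hypothetical mock_page type and a stand-in list_head,
not the real kernel definitions) showing that lru and slab_list occupy
the same offset inside the union, which is why renaming the accesses
cannot change the generated code:

	#include <stddef.h>
	#include <stdio.h>

	/* Minimal stand-in for the kernel's struct list_head. */
	struct list_head {
		struct list_head *next, *prev;
	};

	/*
	 * Hypothetical, stripped-down mock of struct page (NOT the real
	 * definition), mirroring the union quoted in the changelog.
	 */
	struct mock_page {
		union {
			struct {		/* Page cache and anonymous pages */
				struct list_head lru;
			};
			struct {		/* slab, slob and slub */
				union {
					struct list_head slab_list;
					struct {	/* Partial pages */
						struct mock_page *next;
						int pages;	/* Nr of pages left */
						int pobjects;	/* Approximate count */
					};
				};
			};
		};
	};

	int main(void)
	{
		/*
		 * lru and slab_list are the same bits: identical offset (and
		 * size), so list operations through either name touch the
		 * same memory and compile to the same instructions.
		 */
		_Static_assert(offsetof(struct mock_page, lru) ==
			       offsetof(struct mock_page, slab_list),
			       "lru and slab_list must alias");

		struct mock_page page = { 0 };

		/* Write through one union member ... */
		page.lru.next = &page.lru;

		/* ... and observe the same pointer through the other. */
		printf("aliased: %s\n",
		       page.slab_list.next == &page.lru ? "yes" : "no");
		return 0;
	}

Building this with a C11 compiler (e.g. `gcc -std=c11`) and running it
should print "aliased: yes"; the _Static_assert is the compile-time
analogue of the before/after objdump comparison described above.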