From: "Tobin C. Harding"
To: Andrew Morton
Cc: "Tobin C. Harding", Roman Gushchin, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Matthew Wilcox, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v5 5/7] slub: Use slab_list instead of lru
Date: Wed, 3 Apr 2019 10:05:43 +1100
Message-Id: <20190402230545.2929-6-tobin@kernel.org>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190402230545.2929-1-tobin@kernel.org>
References: <20190402230545.2929-1-tobin@kernel.org>
List-ID: <linux-kernel.vger.kernel.org>

Currently we use the page->lru list for maintaining lists of slabs.  We
have a list in the page structure (slab_list) that can be used for this
purpose.
Doing so makes the code cleaner since we are not overloading the lru
list.  Use the slab_list instead of the lru list for maintaining lists
of slabs.

Acked-by: Christoph Lameter
Signed-off-by: Tobin C. Harding
---
 mm/slub.c | 40 ++++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 8fbba4ff6c67..d17f117830a9 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1023,7 +1023,7 @@ static void add_full(struct kmem_cache *s,
 		return;
 
 	lockdep_assert_held(&n->list_lock);
-	list_add(&page->lru, &n->full);
+	list_add(&page->slab_list, &n->full);
 }
 
 static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, struct page *page)
@@ -1032,7 +1032,7 @@ static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, struct
 		return;
 
 	lockdep_assert_held(&n->list_lock);
-	list_del(&page->lru);
+	list_del(&page->slab_list);
 }
 
 /* Tracking of the number of slabs for debugging purposes */
@@ -1773,9 +1773,9 @@ __add_partial(struct kmem_cache_node *n, struct page *page, int tail)
 {
 	n->nr_partial++;
 	if (tail == DEACTIVATE_TO_TAIL)
-		list_add_tail(&page->lru, &n->partial);
+		list_add_tail(&page->slab_list, &n->partial);
 	else
-		list_add(&page->lru, &n->partial);
+		list_add(&page->slab_list, &n->partial);
 }
 
 static inline void add_partial(struct kmem_cache_node *n,
@@ -1789,7 +1789,7 @@ static inline void remove_partial(struct kmem_cache_node *n,
 					struct page *page)
 {
 	lockdep_assert_held(&n->list_lock);
-	list_del(&page->lru);
+	list_del(&page->slab_list);
 	n->nr_partial--;
 }
 
@@ -1863,7 +1863,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 		return NULL;
 
 	spin_lock(&n->list_lock);
-	list_for_each_entry_safe(page, page2, &n->partial, lru) {
+	list_for_each_entry_safe(page, page2, &n->partial, slab_list) {
 		void *t;
 
 		if (!pfmemalloc_match(page, flags))
@@ -2407,7 +2407,7 @@ static unsigned long count_partial(struct kmem_cache_node *n,
 	struct page *page;
 
 	spin_lock_irqsave(&n->list_lock, flags);
-	list_for_each_entry(page, &n->partial, lru)
+	list_for_each_entry(page, &n->partial, slab_list)
 		x += get_count(page);
 	spin_unlock_irqrestore(&n->list_lock, flags);
 	return x;
@@ -3705,10 +3705,10 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 
 	BUG_ON(irqs_disabled());
 	spin_lock_irq(&n->list_lock);
-	list_for_each_entry_safe(page, h, &n->partial, lru) {
+	list_for_each_entry_safe(page, h, &n->partial, slab_list) {
 		if (!page->inuse) {
 			remove_partial(n, page);
-			list_add(&page->lru, &discard);
+			list_add(&page->slab_list, &discard);
 		} else {
 			list_slab_objects(s, page,
 			"Objects remaining in %s on __kmem_cache_shutdown()");
@@ -3716,7 +3716,7 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 	}
 	spin_unlock_irq(&n->list_lock);
 
-	list_for_each_entry_safe(page, h, &discard, lru)
+	list_for_each_entry_safe(page, h, &discard, slab_list)
 		discard_slab(s, page);
 }
 
@@ -3996,7 +3996,7 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 		 * Note that concurrent frees may occur while we hold the
 		 * list_lock. page->inuse here is the upper limit.
 		 */
-		list_for_each_entry_safe(page, t, &n->partial, lru) {
+		list_for_each_entry_safe(page, t, &n->partial, slab_list) {
 			int free = page->objects - page->inuse;
 
 			/* Do not reread page->inuse */
@@ -4006,10 +4006,10 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 			BUG_ON(free <= 0);
 
 			if (free == page->objects) {
-				list_move(&page->lru, &discard);
+				list_move(&page->slab_list, &discard);
 				n->nr_partial--;
 			} else if (free <= SHRINK_PROMOTE_MAX)
-				list_move(&page->lru, promote + free - 1);
+				list_move(&page->slab_list, promote + free - 1);
 		}
 
 		/*
@@ -4022,7 +4022,7 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 		spin_unlock_irqrestore(&n->list_lock, flags);
 
 		/* Release empty slabs */
-		list_for_each_entry_safe(page, t, &discard, lru)
+		list_for_each_entry_safe(page, t, &discard, slab_list)
 			discard_slab(s, page);
 
 		if (slabs_node(s, node))
@@ -4214,11 +4214,11 @@ static struct kmem_cache * __init bootstrap(struct kmem_cache *static_cache)
 	for_each_kmem_cache_node(s, node, n) {
 		struct page *p;
 
-		list_for_each_entry(p, &n->partial, lru)
+		list_for_each_entry(p, &n->partial, slab_list)
 			p->slab_cache = s;
 
 #ifdef CONFIG_SLUB_DEBUG
-		list_for_each_entry(p, &n->full, lru)
+		list_for_each_entry(p, &n->full, slab_list)
 			p->slab_cache = s;
 #endif
 	}
@@ -4435,7 +4435,7 @@ static int validate_slab_node(struct kmem_cache *s,
 
 	spin_lock_irqsave(&n->list_lock, flags);
 
-	list_for_each_entry(page, &n->partial, lru) {
+	list_for_each_entry(page, &n->partial, slab_list) {
 		validate_slab_slab(s, page, map);
 		count++;
 	}
@@ -4446,7 +4446,7 @@ static int validate_slab_node(struct kmem_cache *s,
 	if (!(s->flags & SLAB_STORE_USER))
 		goto out;
 
-	list_for_each_entry(page, &n->full, lru) {
+	list_for_each_entry(page, &n->full, slab_list) {
 		validate_slab_slab(s, page, map);
 		count++;
 	}
@@ -4642,9 +4642,9 @@ static int list_locations(struct kmem_cache *s, char *buf,
 			continue;
 
 		spin_lock_irqsave(&n->list_lock, flags);
-		list_for_each_entry(page, &n->partial, lru)
+		list_for_each_entry(page, &n->partial, slab_list)
 			process_slab(&t, s, page, alloc, map);
-		list_for_each_entry(page, &n->full, lru)
+		list_for_each_entry(page, &n->full, slab_list)
			process_slab(&t, s, page, alloc, map);
 		spin_unlock_irqrestore(&n->list_lock, flags);
 	}
-- 
2.21.0
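
A note for readers outside mm/: the reason this change can swap lru for
slab_list with no functional difference is that the two fields alias
each other inside the large union in struct page.  The sketch below is
abridged and loosely follows the v5.1-era include/linux/mm_types.h;
most fields and the CONFIG_64BIT variants are omitted, so treat it as
illustration rather than the authoritative definition:

	struct page {
		unsigned long flags;
		union {
			struct {	/* Page cache and anonymous pages */
				struct list_head lru;
				struct address_space *mapping;
				unsigned long index;
			};
			struct {	/* slab, slob and slub */
				union {
					struct list_head slab_list;	/* shares storage with lru */
					struct {	/* Partial pages */
						struct page *next;
						int pages;	/* Nr of pages left */
						int pobjects;	/* Approximate count */
					};
				};
				struct kmem_cache *slab_cache;
				void *freelist;	/* First free object */
			};
			/* ... other union members elided ... */
		};
		/* ... remaining fields elided ... */
	};

Because slab_list occupies the same words as lru, the generated code is
unchanged; what the rename buys is exactly what the log says: the code
no longer overloads the lru field, and a reader can tell at a glance
that these lists are slab lists rather than LRU lists.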
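
The hunks are mechanical because the list helpers take the link member
by name: the last argument of list_for_each_entry() and
list_for_each_entry_safe() names the embedded struct list_head through
which the traversal walks, so every list_add()/list_del()/list_move()
on a given list must agree with it.  Here is a minimal, self-contained
userspace sketch of that mechanism; the container_of and list helpers
are hand-rolled stand-ins, not the kernel's <linux/list.h>, and the
typeof extension requires GCC or Clang:

	#include <stddef.h>
	#include <stdio.h>

	/* Intrusive doubly linked list, mimicking the kernel idiom. */
	struct list_head {
		struct list_head *next, *prev;
	};

	#define container_of(ptr, type, member) \
		((type *)((char *)(ptr) - offsetof(type, member)))

	/* Walk entries via the named embedded list_head member. */
	#define list_for_each_entry(pos, head, member)                         \
		for (pos = container_of((head)->next, typeof(*pos), member);   \
		     &pos->member != (head);                                   \
		     pos = container_of(pos->member.next, typeof(*pos), member))

	static void list_add(struct list_head *new, struct list_head *head)
	{
		new->next = head->next;
		new->prev = head;
		head->next->prev = new;
		head->next = new;
	}

	/* Stand-in for struct page with a named link, like slab_list. */
	struct fake_page {
		int id;
		struct list_head slab_list;
	};

	int main(void)
	{
		struct list_head partial = { &partial, &partial };
		struct fake_page a = { .id = 1 }, b = { .id = 2 };
		struct fake_page *p;

		list_add(&a.slab_list, &partial);
		list_add(&b.slab_list, &partial);

		/* Must name the same member the adds used: slab_list. */
		list_for_each_entry(p, &partial, slab_list)
			printf("page %d\n", p->id);	/* prints 2 then 1 */
		return 0;
	}

Renaming the member in struct page and at every call site, as this
patch does for SLUB, is therefore the whole change; no list
manipulation logic is touched.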