From: "Tobin C. Harding" <tobin@kernel.org>
To: Andrew Morton
Cc: "Tobin C. Harding", Roman Gushchin, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Matthew Wilcox, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v5 6/7] slab: Use slab_list instead of lru
Date: Wed, 3 Apr 2019 10:05:44 +1100
Message-Id: <20190402230545.2929-7-tobin@kernel.org>
In-Reply-To: <20190402230545.2929-1-tobin@kernel.org>
References: <20190402230545.2929-1-tobin@kernel.org>
X-Mailer: git-send-email 2.21.0

Currently we use the page->lru list for maintaining lists of slabs. We
have a list in the page structure (slab_list) that can be used for this
purpose. Doing so makes the code cleaner since we are not overloading
the lru list.

Use the slab_list instead of the lru list for maintaining lists of
slabs.
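For context, slab_list shares storage with lru inside struct page, so
this conversion changes no memory layout, only which name the slab code
uses. A heavily abridged sketch of the relevant union (fields elided;
see include/linux/mm_types.h for the authoritative layout):

	struct page {
		unsigned long flags;
		union {
			struct {	/* Page cache and anonymous pages */
				struct list_head lru;
				/* ... */
			};
			struct {	/* slab, slob and slub */
				union {
					struct list_head slab_list;
					struct {	/* Partial pages */
						struct page *next;
						/* ... */
					};
				};
				/* ... */
			};
			/* ... */
		};
		/* ... */
	};

Because the two list heads alias, list operations through either name
touch the same bytes; using slab_list simply documents that the page is
on a slab list rather than an LRU list.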
Signed-off-by: Tobin C. Harding
---
 mm/slab.c | 49 +++++++++++++++++++++++++------------------------
 1 file changed, 25 insertions(+), 24 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 329bfe67f2ca..09e2a0131338 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1710,8 +1710,8 @@ static void slabs_destroy(struct kmem_cache *cachep, struct list_head *list)
 {
 	struct page *page, *n;
 
-	list_for_each_entry_safe(page, n, list, lru) {
-		list_del(&page->lru);
+	list_for_each_entry_safe(page, n, list, slab_list) {
+		list_del(&page->slab_list);
 		slab_destroy(cachep, page);
 	}
 }
@@ -2267,8 +2267,8 @@ static int drain_freelist(struct kmem_cache *cache,
 			goto out;
 		}
 
-		page = list_entry(p, struct page, lru);
-		list_del(&page->lru);
+		page = list_entry(p, struct page, slab_list);
+		list_del(&page->slab_list);
 		n->free_slabs--;
 		n->total_slabs--;
 		/*
@@ -2728,13 +2728,13 @@ static void cache_grow_end(struct kmem_cache *cachep, struct page *page)
 	if (!page)
 		return;
 
-	INIT_LIST_HEAD(&page->lru);
+	INIT_LIST_HEAD(&page->slab_list);
 	n = get_node(cachep, page_to_nid(page));
 
 	spin_lock(&n->list_lock);
 	n->total_slabs++;
 	if (!page->active) {
-		list_add_tail(&page->lru, &(n->slabs_free));
+		list_add_tail(&page->slab_list, &n->slabs_free);
 		n->free_slabs++;
 	} else
 		fixup_slab_list(cachep, n, page, &list);
@@ -2843,9 +2843,9 @@ static inline void fixup_slab_list(struct kmem_cache *cachep,
 				void **list)
 {
 	/* move slabp to correct slabp list: */
-	list_del(&page->lru);
+	list_del(&page->slab_list);
 	if (page->active == cachep->num) {
-		list_add(&page->lru, &n->slabs_full);
+		list_add(&page->slab_list, &n->slabs_full);
 		if (OBJFREELIST_SLAB(cachep)) {
 #if DEBUG
 			/* Poisoning will be done without holding the lock */
@@ -2859,7 +2859,7 @@ static inline void fixup_slab_list(struct kmem_cache *cachep,
 			page->freelist = NULL;
 		}
 	} else
-		list_add(&page->lru, &n->slabs_partial);
+		list_add(&page->slab_list, &n->slabs_partial);
 }
 
 /* Try to find non-pfmemalloc slab if needed */
@@ -2882,20 +2882,20 @@ static noinline struct page *get_valid_first_slab(struct kmem_cache_node *n,
 	}
 
 	/* Move pfmemalloc slab to the end of list to speed up next search */
-	list_del(&page->lru);
+	list_del(&page->slab_list);
 	if (!page->active) {
-		list_add_tail(&page->lru, &n->slabs_free);
+		list_add_tail(&page->slab_list, &n->slabs_free);
 		n->free_slabs++;
 	} else
-		list_add_tail(&page->lru, &n->slabs_partial);
+		list_add_tail(&page->slab_list, &n->slabs_partial);
 
-	list_for_each_entry(page, &n->slabs_partial, lru) {
+	list_for_each_entry(page, &n->slabs_partial, slab_list) {
 		if (!PageSlabPfmemalloc(page))
 			return page;
 	}
 
 	n->free_touched = 1;
-	list_for_each_entry(page, &n->slabs_free, lru) {
+	list_for_each_entry(page, &n->slabs_free, slab_list) {
 		if (!PageSlabPfmemalloc(page)) {
 			n->free_slabs--;
 			return page;
@@ -2910,11 +2910,12 @@ static struct page *get_first_slab(struct kmem_cache_node *n, bool pfmemalloc)
 	struct page *page;
 
 	assert_spin_locked(&n->list_lock);
-	page = list_first_entry_or_null(&n->slabs_partial, struct page, lru);
+	page = list_first_entry_or_null(&n->slabs_partial, struct page,
+					slab_list);
 	if (!page) {
 		n->free_touched = 1;
 		page = list_first_entry_or_null(&n->slabs_free, struct page,
-						lru);
+						slab_list);
 		if (page)
 			n->free_slabs--;
 	}
@@ -3415,29 +3416,29 @@ static void free_block(struct kmem_cache *cachep, void **objpp,
 		objp = objpp[i];
 
 		page = virt_to_head_page(objp);
-		list_del(&page->lru);
+		list_del(&page->slab_list);
 		check_spinlock_acquired_node(cachep, node);
 		slab_put_obj(cachep, page, objp);
 		STATS_DEC_ACTIVE(cachep);
 
 		/* fixup slab chains */
 		if (page->active == 0) {
-			list_add(&page->lru, &n->slabs_free);
+			list_add(&page->slab_list, &n->slabs_free);
 			n->free_slabs++;
 		} else {
 			/* Unconditionally move a slab to the end of the
 			 * partial list on free - maximum time for the
 			 * other objects to be freed, too.
 			 */
-			list_add_tail(&page->lru, &n->slabs_partial);
+			list_add_tail(&page->slab_list, &n->slabs_partial);
 		}
 	}
 
 	while (n->free_objects > n->free_limit && !list_empty(&n->slabs_free)) {
 		n->free_objects -= cachep->num;
 
-		page = list_last_entry(&n->slabs_free, struct page, lru);
-		list_move(&page->lru, list);
+		page = list_last_entry(&n->slabs_free, struct page, slab_list);
+		list_move(&page->slab_list, list);
 		n->free_slabs--;
 		n->total_slabs--;
 	}
@@ -3475,7 +3476,7 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac)
 		int i = 0;
 		struct page *page;
 
-		list_for_each_entry(page, &n->slabs_free, lru) {
+		list_for_each_entry(page, &n->slabs_free, slab_list) {
 			BUG_ON(page->active);
 
 			i++;
@@ -4338,9 +4339,9 @@ static int leaks_show(struct seq_file *m, void *p)
 
 		check_irq_on();
 		spin_lock_irq(&n->list_lock);
-		list_for_each_entry(page, &n->slabs_full, lru)
+		list_for_each_entry(page, &n->slabs_full, slab_list)
 			handle_slab(x, cachep, page);
-		list_for_each_entry(page, &n->slabs_partial, lru)
+		list_for_each_entry(page, &n->slabs_partial, slab_list)
 			handle_slab(x, cachep, page);
 		spin_unlock_irq(&n->list_lock);
 	}
-- 
2.21.0