Date: Tue, 22 Mar 2016 11:22:03 +0900
From: Minchan Kim <minchan@kernel.org>
To: kbuild test robot
CC: kbuild-all@01.org, Andrew Morton, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, jlayton@poochiereds.net, bfields@fieldses.org,
 Vlastimil Babka, Joonsoo Kim, koct9i@gmail.com, aquini@redhat.com,
 virtualization@lists.linux-foundation.org, Mel Gorman, Hugh Dickins,
 Sergey Senozhatsky, Rik van Riel, rknize@motorola.com, Gioh Kim,
 Sangseok Lee, Chan Gyun Jeong, Al Viro, YiPing Xu
Subject: Re: [PATCH] zsmalloc: fix semicolon.cocci warnings
Message-ID: <20160322022203.GB30070@bbox>
References: <201603211705.9hgJl8K7%fengguang.wu@intel.com>
 <20160321094825.GA50080@roam.intel.com>
In-Reply-To: <20160321094825.GA50080@roam.intel.com>

On Mon, Mar 21, 2016 at 05:48:25PM +0800, kbuild test robot wrote:
> mm/zsmalloc.c:1103:2-3: Unneeded semicolon
>
>
> Remove unneeded semicolon.
>
> Generated by: scripts/coccinelle/misc/semicolon.cocci
>
> CC: Minchan Kim
> Signed-off-by: Fengguang Wu
> ---
>
>  zsmalloc.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1100,7 +1100,7 @@ void unlock_zspage(struct page *first_pa
>  		VM_BUG_ON_PAGE(!PageLocked(cursor), cursor);
>  		if (cursor != locked_page)
>  			unlock_page(cursor);
> -	};
> +	}
>  }
>
>  static void free_zspage(struct zs_pool *pool, struct page *first_page)

Thanks. Fixed.

From bb46f8265b55228f31b8096bd1c13dd6e6ee1bc4 Mon Sep 17 00:00:00 2001
From: Minchan Kim <minchan@kernel.org>
Date: Wed, 9 Mar 2016 09:37:57 +0900
Subject: [PATCH] zsmalloc: migrate tail pages in zspage

This patch enables tail page migration of a zspage.

At this point, I measured the zsmalloc regression with a micro-benchmark
that does zs_malloc/map/unmap/zs_free for every size class on every CPU
(my system has 12) for 20 seconds. It shows a 1% regression, which is
really small when we consider the benefit of this feature and the
real-workload overhead (i.e., most of the overhead comes from
compression).
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 131 +++++++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 115 insertions(+), 16 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 9b4b03d8f993..3f1d488633e1 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -551,6 +551,19 @@ static void set_zspage_mapping(struct page *first_page,
 	m->class = class_idx;
 }
 
+static bool check_isolated_page(struct page *first_page)
+{
+	struct page *cursor;
+
+	for (cursor = first_page; cursor != NULL; cursor =
+					get_next_page(cursor)) {
+		if (PageIsolated(cursor))
+			return true;
+	}
+
+	return false;
+}
+
 /*
  * zsmalloc divides the pool into various size classes where each
  * class maintains a list of zspages where each zspage is divided
@@ -1052,6 +1065,44 @@ void lock_zspage(struct page *first_page)
 	} while ((cursor = get_next_page(cursor)) != NULL);
 }
 
+int trylock_zspage(struct page *first_page, struct page *locked_page)
+{
+	struct page *cursor, *fail;
+
+	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
+
+	for (cursor = first_page; cursor != NULL; cursor =
+					get_next_page(cursor)) {
+		if (cursor != locked_page) {
+			if (!trylock_page(cursor)) {
+				fail = cursor;
+				goto unlock;
+			}
+		}
+	}
+
+	return 1;
+unlock:
+	for (cursor = first_page; cursor != fail; cursor =
+					get_next_page(cursor)) {
+		if (cursor != locked_page)
+			unlock_page(cursor);
+	}
+
+	return 0;
+}
+
+void unlock_zspage(struct page *first_page, struct page *locked_page)
+{
+	struct page *cursor = first_page;
+
+	for (; cursor != NULL; cursor = get_next_page(cursor)) {
+		VM_BUG_ON_PAGE(!PageLocked(cursor), cursor);
+		if (cursor != locked_page)
+			unlock_page(cursor);
+	}
+}
+
 static void free_zspage(struct zs_pool *pool, struct page *first_page)
 {
 	struct page *nextp, *tmp;
@@ -1090,16 +1141,17 @@ static void init_zspage(struct size_class *class, struct page *first_page,
 	first_page->freelist = NULL;
 	INIT_LIST_HEAD(&first_page->lru);
 	set_zspage_inuse(first_page, 0);
-	BUG_ON(!trylock_page(first_page));
-	first_page->mapping = mapping;
-	__SetPageMovable(first_page);
-	unlock_page(first_page);
 
 	while (page) {
 		struct page *next_page;
 		struct link_free *link;
 		void *vaddr;
 
+		BUG_ON(!trylock_page(page));
+		page->mapping = mapping;
+		__SetPageMovable(page);
+		unlock_page(page);
+
 		vaddr = kmap_atomic(page);
 		link = (struct link_free *)vaddr + off / sizeof(*link);
 
@@ -1850,6 +1902,7 @@ static enum fullness_group putback_zspage(struct size_class *class,
 
 	VM_BUG_ON_PAGE(!list_empty(&first_page->lru), first_page);
 	VM_BUG_ON_PAGE(ZsPageIsolate(first_page), first_page);
+	VM_BUG_ON_PAGE(check_isolated_page(first_page), first_page);
 
 	fullness = get_fullness_group(class, first_page);
 	insert_zspage(class, fullness, first_page);
@@ -1956,6 +2009,12 @@ static struct page *isolate_source_page(struct size_class *class)
 		if (!page)
 			continue;
 
+		/* To prevent race between object and page migration */
+		if (!trylock_zspage(page, NULL)) {
+			page = NULL;
+			continue;
+		}
+
 		remove_zspage(class, i, page);
 
 		inuse = get_zspage_inuse(page);
@@ -1964,6 +2023,7 @@ static struct page *isolate_source_page(struct size_class *class)
 		if (inuse != freezed) {
 			unfreeze_zspage(class, page, freezed);
 			putback_zspage(class, page);
+			unlock_zspage(page, NULL);
 			page = NULL;
 			continue;
 		}
@@ -1995,6 +2055,12 @@ static struct page *isolate_target_page(struct size_class *class)
 		if (!page)
 			continue;
 
+		/* To prevent race between object and page migration */
+		if (!trylock_zspage(page, NULL)) {
+			page = NULL;
+			continue;
+		}
+
 		remove_zspage(class, i, page);
 
 		inuse = get_zspage_inuse(page);
@@ -2003,6 +2069,7 @@ static struct page *isolate_target_page(struct size_class *class)
 		if (inuse != freezed) {
 			unfreeze_zspage(class, page, freezed);
 			putback_zspage(class, page);
+			unlock_zspage(page, NULL);
 			page = NULL;
 			continue;
 		}
@@ -2076,11 +2143,13 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 			putback_zspage(class, dst_page);
 			unfreeze_zspage(class, dst_page,
 					class->objs_per_zspage);
+			unlock_zspage(dst_page, NULL);
 			spin_unlock(&class->lock);
 			dst_page = NULL;
 		}
 
 		if (zspage_empty(class, src_page)) {
+			unlock_zspage(src_page, NULL);
 			free_zspage(pool, src_page);
 			spin_lock(&class->lock);
 			zs_stat_dec(class, OBJ_ALLOCATED,
@@ -2103,12 +2172,14 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 		putback_zspage(class, src_page);
 		unfreeze_zspage(class, src_page,
 				class->objs_per_zspage);
+		unlock_zspage(src_page, NULL);
 	}
 
 	if (dst_page) {
 		putback_zspage(class, dst_page);
 		unfreeze_zspage(class, dst_page,
 				class->objs_per_zspage);
+		unlock_zspage(dst_page, NULL);
 	}
 
 	spin_unlock(&class->lock);
@@ -2211,10 +2282,11 @@ bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(PageIsolated(page), page);
 	/*
-	 * In this implementation, it allows only first page migration.
+	 * first_page will not be destroyed while we hold PG_lock on @page,
+	 * but it could still be migrated out. To prevent that,
+	 * zs_page_migrate calls trylock_zspage, which closes the race.
 	 */
-	VM_BUG_ON_PAGE(!is_first_page(page), page);
-	first_page = page;
+	first_page = get_first_page(page);
 
 	/*
 	 * Without class lock, fullness is meaningless while constant
@@ -2228,9 +2300,18 @@ bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 	if (!spin_trylock(&class->lock))
 		return false;
 
+	if (check_isolated_page(first_page))
+		goto skip_isolate;
+
+	/*
+	 * If this is the first isolation of the zspage, remove the zspage
+	 * from its size_class to prevent further allocations from it.
+	 */
 	get_zspage_mapping(first_page, &class_idx, &fullness);
 	remove_zspage(class, fullness, first_page);
 	SetZsPageIsolate(first_page);
+
+skip_isolate:
 	SetPageIsolated(page);
 	spin_unlock(&class->lock);
 
@@ -2253,7 +2334,7 @@ int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 	VM_BUG_ON_PAGE(!PageMovable(page), page);
 	VM_BUG_ON_PAGE(!PageIsolated(page), page);
 
-	first_page = page;
+	first_page = get_first_page(page);
 	get_zspage_mapping(first_page, &class_idx, &fullness);
 	pool = page->mapping->private_data;
 	class = pool->size_class[class_idx];
@@ -2268,6 +2349,13 @@ int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 	if (get_zspage_inuse(first_page) == 0)
 		goto out_class_unlock;
 
+	/*
+	 * Prevent first_page migration during tail page operations so that
+	 * get_first_page stays stable.
+	 */
+	if (!trylock_zspage(first_page, page))
+		goto out_class_unlock;
+
 	freezed = freeze_zspage(class, first_page);
 	if (freezed != get_zspage_inuse(first_page))
 		goto out_unfreeze;
@@ -2306,21 +2394,26 @@ int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 	kunmap_atomic(addr);
 
 	replace_sub_page(class, first_page, newpage, page);
-	first_page = newpage;
+	first_page = get_first_page(newpage);
 	get_page(newpage);
 	VM_BUG_ON_PAGE(get_fullness_group(class, first_page)
 			== ZS_EMPTY, first_page);
-	ClearZsPageIsolate(first_page);
-	putback_zspage(class, first_page);
+	if (!check_isolated_page(first_page)) {
+		INIT_LIST_HEAD(&first_page->lru);
+		ClearZsPageIsolate(first_page);
+		putback_zspage(class, first_page);
+	}
+
 	/* Migration complete. Free old page */
 	reset_page(page);
 	ClearPageIsolated(page);
 	put_page(page);
 
 	ret = MIGRATEPAGE_SUCCESS;
-
+	page = newpage;
 out_unfreeze:
 	unfreeze_zspage(class, first_page, freezed);
+	unlock_zspage(first_page, page);
out_class_unlock:
 	spin_unlock(&class->lock);
 
@@ -2338,7 +2431,7 @@ void zs_page_putback(struct page *page)
 	VM_BUG_ON_PAGE(!PageMovable(page), page);
 	VM_BUG_ON_PAGE(!PageIsolated(page), page);
 
-	first_page = page;
+	first_page = get_first_page(page);
 	get_zspage_mapping(first_page, &class_idx, &fullness);
 	pool = page->mapping->private_data;
 	class = pool->size_class[class_idx];
@@ -2348,11 +2441,17 @@ void zs_page_putback(struct page *page)
 	 * in zs_free will wait the page lock of @page without
 	 * destroying of zspage.
 	 */
-	INIT_LIST_HEAD(&first_page->lru);
 	spin_lock(&class->lock);
 	ClearPageIsolated(page);
-	ClearZsPageIsolate(first_page);
-	putback_zspage(class, first_page);
+	/*
+	 * Put the zspage back on the right list if this is the last
+	 * isolated page being put back in the zspage.
+	 */
+	if (!check_isolated_page(first_page)) {
+		INIT_LIST_HEAD(&first_page->lru);
+		ClearZsPageIsolate(first_page);
+		putback_zspage(class, first_page);
+	}
 	spin_unlock(&class->lock);
 }
-- 
1.9.1
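
For reference, below is a rough, untested sketch of the kind of
micro-benchmark described in the changelog above: one worker per CPU
doing zs_malloc/zs_map_object/zs_unmap_object/zs_free over a spread of
object sizes for 20 seconds. It is not part of the patch; the helper
name zs_bench_one_cpu, the size range, and the pool setup are
illustrative assumptions, and it assumes the 4.5-era zsmalloc API where
zs_create_pool() (not zs_malloc()) takes the GFP mask.

/*
 * Illustrative sketch only, not part of the patch: the per-CPU worker
 * loop of the zs_malloc/map/unmap/zs_free micro-benchmark mentioned in
 * the changelog. zs_bench_one_cpu is a hypothetical helper; the real
 * test ran one such loop on every CPU (12 on my system) against a pool
 * created with something like zs_create_pool("zs_bench", GFP_KERNEL).
 */
#include <linux/jiffies.h>
#include <linux/mm.h>
#include <linux/string.h>
#include <linux/zsmalloc.h>

void zs_bench_one_cpu(struct zs_pool *pool)
{
	unsigned long end = jiffies + 20 * HZ;	/* run for ~20 seconds */

	while (time_before(jiffies, end)) {
		size_t size;

		/* walk a spread of object sizes to hit many size classes */
		for (size = 32; size <= PAGE_SIZE; size += 256) {
			unsigned long handle;
			void *obj;

			handle = zs_malloc(pool, size);
			if (!handle)
				continue;	/* allocation failed, move on */

			obj = zs_map_object(pool, handle, ZS_MM_WO);
			memset(obj, 0x5a, size);	/* touch the object */
			zs_unmap_object(pool, handle);

			zs_free(pool, handle);
		}
	}
}

Nothing in the sketch depends on the migration changes themselves; it
just exercises the allocation/map fast paths that the ~1% figure above
was measured on.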