Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1751229AbdCPCqx (ORCPT ); Wed, 15 Mar 2017 22:46:53 -0400
Received: from mail-pf0-f196.google.com ([209.85.192.196]:32875 "EHLO mail-pf0-f196.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751122AbdCPCqt (ORCPT ); Wed, 15 Mar 2017 22:46:49 -0400
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: Minchan Kim, Sergey Senozhatsky, linux-kernel@vger.kernel.org, kernel-team@lge.com, Joonsoo Kim
Subject: [PATCH 1/4] mm/zsmalloc: always set movable/highmem flag to the zspage
Date: Thu, 16 Mar 2017 11:46:35 +0900
Message-Id: <1489632398-31501-2-git-send-email-iamjoonsoo.kim@lge.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1489632398-31501-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1489632398-31501-1-git-send-email-iamjoonsoo.kim@lge.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 2742
Lines: 79

From: Joonsoo Kim

A zspage is always movable and is used through the zs_map_object() function, which returns a directly accessible pointer to the zspage's contents. This holds regardless of the user's allocation flags, so it's better to always set the movable/highmem flags on the zspage. Once zsmalloc does this itself, the __GFP_MOVABLE/__GFP_HIGHMEM clearing in cache_alloc_handle()/cache_alloc_zspage() is no longer needed, since no zs_malloc() caller specifies __GFP_MOVABLE/__GFP_HIGHMEM anymore.
Signed-off-by: Joonsoo Kim
---
 drivers/block/zram/zram_drv.c |  9 ++-------
 mm/zsmalloc.c                 | 10 ++++------
 2 files changed, 6 insertions(+), 13 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 0194441..f65dcd1 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -684,19 +684,14 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
 	 */
 	if (!handle)
 		handle = zs_malloc(meta->mem_pool, clen,
-				__GFP_KSWAPD_RECLAIM |
-				__GFP_NOWARN |
-				__GFP_HIGHMEM |
-				__GFP_MOVABLE);
+				__GFP_KSWAPD_RECLAIM | __GFP_NOWARN);
 	if (!handle) {
 		zcomp_stream_put(zram->comp);
 		zstrm = NULL;

 		atomic64_inc(&zram->stats.writestall);

-		handle = zs_malloc(meta->mem_pool, clen,
-				GFP_NOIO | __GFP_HIGHMEM |
-				__GFP_MOVABLE);
+		handle = zs_malloc(meta->mem_pool, clen, GFP_NOIO);
 		if (handle)
 			goto compress_again;

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index b7ee9c3..fada232 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -347,8 +347,7 @@ static void destroy_cache(struct zs_pool *pool)

 static unsigned long cache_alloc_handle(struct zs_pool *pool, gfp_t gfp)
 {
-	return (unsigned long)kmem_cache_alloc(pool->handle_cachep,
-			gfp & ~(__GFP_HIGHMEM|__GFP_MOVABLE));
+	return (unsigned long)kmem_cache_alloc(pool->handle_cachep, gfp);
 }

 static void cache_free_handle(struct zs_pool *pool, unsigned long handle)
@@ -358,9 +357,8 @@ static void cache_free_handle(struct zs_pool *pool, unsigned long handle)

 static struct zspage *cache_alloc_zspage(struct zs_pool *pool, gfp_t flags)
 {
-	return kmem_cache_alloc(pool->zspage_cachep,
-			flags & ~(__GFP_HIGHMEM|__GFP_MOVABLE));
+	return kmem_cache_alloc(pool->zspage_cachep, flags);
 }

 static void cache_free_zspage(struct zs_pool *pool, struct zspage *zspage)
@@ -1120,7 +1118,7 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 	for (i = 0; i < class->pages_per_zspage; i++) {
 		struct page *page;

-		page = alloc_page(gfp);
+		page = alloc_page(gfp | __GFP_MOVABLE | __GFP_HIGHMEM);
 		if (!page) {
 			while (--i >= 0) {
 				dec_zone_page_state(pages[i], NR_ZSPAGES);
-- 
1.9.1