From: Sergey Senozhatsky
To: Minchan Kim, Andrew Morton
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Sergey Senozhatsky
Subject: [PATCH 2/2] zram: drop max_zpage_size and use zs_huge_object()
Date: Wed, 7 Feb 2018 18:29:19 +0900
Message-Id: <20180207092919.19696-3-sergey.senozhatsky@gmail.com>
In-Reply-To: <20180207092919.19696-1-sergey.senozhatsky@gmail.com>
References: <20180207092919.19696-1-sergey.senozhatsky@gmail.com>

This patch removes ZRAM's enforced "huge object" threshold and uses the
zsmalloc huge-class watermark instead, which makes more sense.
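For reference, a minimal sketch of what such a zsmalloc-side helper could
look like (the real helper is introduced earlier in this series; the
huge_class_watermark name below is assumed purely for illustration and is
not an actual zsmalloc symbol):

	/*
	 * Illustrative sketch only: zsmalloc knows the smallest object size
	 * that can no longer share a zspage with other objects, so callers
	 * ask zsmalloc instead of hard-coding their own limit.
	 */
	bool zs_huge_object(size_t sz)
	{
		/* hypothetical watermark computed when the pool is created */
		return sz > huge_class_watermark;
	}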
TEST - I used a 1G zram device, LZO compression back-end; the original data
set size was 444MB. Judging by the zsmalloc class stats, the test ended up
being pretty fair.

BASE ZRAM/ZSMALLOC
==================
zram mm_stat

498978816 191482495 199831552        0 199831552    15634        0

zsmalloc classes

 class  size almost_full almost_empty obj_allocated   obj_used pages_used pages_per_zspage freeable
...
   151  2448           0            0          1240       1240        744                3        0
   168  2720           0            0          4200       4200       2800                2        0
   190  3072           0            0         10100      10100       7575                3        0
   202  3264           0            0           380        380        304                4        0
   254  4096           0            0         10620      10620      10620                1        0
 Total                 7           46        106982     106187      48787                         0

PATCHED ZRAM/ZSMALLOC
=====================
zram mm_stat

498978816 182579184 194248704        0 194248704    15628        0

zsmalloc classes

 class  size almost_full almost_empty obj_allocated   obj_used pages_used pages_per_zspage freeable
...
   151  2448           0            0          1240       1240        744                3        0
   168  2720           0            0          4200       4200       2800                2        0
   190  3072           0            0         10100      10100       7575                3        0
   202  3264           0            0          7180       7180       5744                4        0
   254  4096           0            0          3820       3820       3820                1        0
 Total                 8           45        106959     106193      47424                         0

As we can see, the number of objects stored in class-4096 went down,
because a large number of objects that we previously forcibly stored in
class-4096 are now stored in the non-huge class-3264. This results in
lower memory consumption:

- zsmalloc now uses 47424 physical pages, which is less than the 48787
  pages it used before;

- objects stored in class-3264 share zspages, which is why the overall
  number of pages consumed by class-4096 and class-3264 together went
  down from 10924 (10620 + 304) to 9564 (3820 + 5744).

A short before/after illustration of the store-uncompressed decision can
be found after the diff.

Signed-off-by: Sergey Senozhatsky
---
 drivers/block/zram/zram_drv.c |  6 +++---
 drivers/block/zram/zram_drv.h | 16 ----------------
 2 files changed, 3 insertions(+), 19 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 0afa6c8c3857..3d2bc4b1423c 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -965,7 +965,7 @@ static int __zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
 		return ret;
 	}
 
-	if (unlikely(comp_len > max_zpage_size)) {
+	if (unlikely(zs_huge_object(comp_len))) {
 		if (zram_wb_enabled(zram) && allow_wb) {
 			zcomp_stream_put(zram->comp);
 			ret = write_to_bdev(zram, bvec, index, bio, &element);
@@ -1022,10 +1022,10 @@ static int __zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
 	dst = zs_map_object(zram->mem_pool, handle, ZS_MM_WO);
 
 	src = zstrm->buffer;
-	if (comp_len == PAGE_SIZE)
+	if (zs_huge_object(comp_len))
 		src = kmap_atomic(page);
 	memcpy(dst, src, comp_len);
-	if (comp_len == PAGE_SIZE)
+	if (zs_huge_object(comp_len))
 		kunmap_atomic(src);
 
 	zcomp_stream_put(zram->comp);
diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
index 31762db861e3..d71c8000a964 100644
--- a/drivers/block/zram/zram_drv.h
+++ b/drivers/block/zram/zram_drv.h
@@ -21,22 +21,6 @@
 
 #include "zcomp.h"
 
-/*-- Configurable parameters */
-
-/*
- * Pages that compress to size greater than this are stored
- * uncompressed in memory.
- */
-static const size_t max_zpage_size = PAGE_SIZE / 4 * 3;
-
-/*
- * NOTE: max_zpage_size must be less than or equal to:
- * ZS_MAX_ALLOC_SIZE. Otherwise, zs_malloc() would
- * always return failure.
- */
-
-/*-- End of configurable params */
-
 #define SECTOR_SHIFT 9
 #define SECTORS_PER_PAGE_SHIFT (PAGE_SHIFT - SECTOR_SHIFT)
 #define SECTORS_PER_PAGE (1 << SECTORS_PER_PAGE_SHIFT)
-- 
2.16.1
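For readers unfamiliar with this path: the removed constant made the
"store uncompressed" decision a fixed compile-time fraction of the page
size. The snippet below is a rough before/after illustration only (it
assumes PAGE_SIZE == 4096 and is not part of the patch):

	/* Old rule: a hard-coded threshold of 3/4 of a page. */
	static const size_t max_zpage_size = PAGE_SIZE / 4 * 3;  /* 3072 */

	if (comp_len > max_zpage_size)		/* e.g. comp_len == 3100 */
		comp_len = PAGE_SIZE;		/* padded, stored in class-4096 */

	/*
	 * New rule: ask zsmalloc.  Only objects that zsmalloc itself cannot
	 * place in a shared (non-huge) class get padded to PAGE_SIZE, so the
	 * same 3100-byte object can now land in class-3264 instead.
	 */
	if (zs_huge_object(comp_len))
		comp_len = PAGE_SIZE;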