From: Li Wang <liwang@redhat.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Seth Jennings, Dan Streetman, Huang Ying, Yu Zhao
Subject: [PATCH RFC] zswap: reject to compress/store page if zswap_max_pool_percent is 0
Date: Thu, 24 May 2018 17:57:51 +0800
Message-Id: <20180524095752.17770-1-liwang@redhat.com>
The '/sys/../zswap/stored_pages' counter keeps rising in a zswap test run
with the "zswap.max_pool_percent=0" parameter, but in theory zswap should
not compress or store any more pages, since there is no space left for the
compressed pool.

Reproduce steps:

1. Boot the kernel with "zswap.enabled=1 zswap.max_pool_percent=17"

2. Set max_pool_percent to 0
     # echo 0 > /sys/module/zswap/parameters/max_pool_percent

   and confirm the parameter took effect:
     # cat /sys/kernel/debug/zswap/pool_total_size
     0

3. Run a memory stress test to see whether pages are still being compressed
     # stress --vm 1 --vm-bytes $mem_available"M" --timeout 60s

   and watch whether the 'stored_pages' number keeps increasing.

The root cause is: when zswap_max_pool_percent is set to 0 via the kernel
parameter, zswap_is_full() always returns true, so zswap_shrink() is called
to shrink the pool. Whenever that shrink succeeds even slightly, zswap goes
on to compress and store the page anyway, which produces the failure
described above. (A toy model of this loop is sketched below, after the
patch.)

Signed-off-by: Li Wang <liwang@redhat.com>
Cc: Seth Jennings
Cc: Dan Streetman
Cc: Huang Ying
Cc: Yu Zhao
---
 mm/zswap.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/mm/zswap.c b/mm/zswap.c
index 61a5c41..2b537bb 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1007,6 +1007,11 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
 	u8 *src, *dst;
 	struct zswap_header zhdr = { .swpentry = swp_entry(type, offset) };
 
+	if (!zswap_max_pool_percent) {
+		ret = -ENOMEM;
+		goto reject;
+	}
+
 	/* THP isn't supported */
 	if (PageTransHuge(page)) {
 		ret = -EINVAL;
-- 
2.9.5
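
For illustration only, here is a toy userspace model of the pre-patch store
path described above. It is NOT the actual mm/zswap.c code: is_full(),
shrink(), store_one_page() and TOTAL_RAM_PAGES are simplified stand-ins for
zswap_is_full(), zswap_shrink(), zswap_frontswap_store() and totalram_pages,
and the numbers are made up.

/*
 * Toy userspace model (not kernel code) of the pre-patch store path:
 * the "is full" check compares the pool size against
 * total_ram * max_pool_percent / 100, and the store path goes ahead
 * whenever the shrink step manages to evict something.
 */
#include <stdbool.h>
#include <stdio.h>

#define TOTAL_RAM_PAGES 1000UL   /* stand-in for totalram_pages */

static unsigned long max_pool_percent = 0;   /* zswap.max_pool_percent=0 */
static unsigned long pool_pages;             /* compressed pages in the pool */
static unsigned long stored_pages;           /* mirrors /sys/.../stored_pages */

/* Roughly mirrors the shape of zswap_is_full(): limit < current pool size? */
static bool is_full(void)
{
	return TOTAL_RAM_PAGES * max_pool_percent / 100 < pool_pages;
}

/* Models zswap_shrink(): evict one page; returns 0 on success. */
static int shrink(void)
{
	if (pool_pages == 0)
		return -1;
	pool_pages--;
	return 0;
}

/* Models the pre-patch zswap_frontswap_store() control flow. */
static int store_one_page(void)
{
	if (is_full()) {
		if (shrink())
			return -1;      /* reclaim failed -> reject */
		/* Pre-patch code falls through and stores anyway. */
	}
	pool_pages++;
	stored_pages++;
	return 0;
}

int main(void)
{
	for (int i = 0; i < 5; i++)
		store_one_page();

	/* With max_pool_percent == 0 the pool limit is 0 pages, yet every
	 * store still succeeds because shrink() keeps "making room". */
	printf("stored_pages = %lu (expected 0 with a 0%% pool)\n",
	       stored_pages);
	return 0;
}

With the early !zswap_max_pool_percent check added by this patch,
store_one_page() would bail out with -ENOMEM before touching the pool, so
stored_pages would stay at 0.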