Date: Sun, 30 Aug 2020 14:08:16 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
Cc: Alex Shi, Johannes Weiner, Michal Hocko, Mike Kravetz, Shakeel Butt,
    Matthew Wilcox, Qian Cai, Chris Wilson, Kuo-Hsin Yang,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 4/5] mm: fix check_move_unevictable_pages() on THP

check_move_unevictable_pages() is used in making unevictable shmem pages
evictable: by shmem_unlock_mapping(), drm_gem_check_release_pagevec() and
i915/gem check_release_pagevec().  Those may pass down subpages of a huge
page, when /sys/kernel/mm/transparent_hugepage/shmem_enabled is "force".

That does not crash or warn at present, but the accounting of vmstats
unevictable_pgs_scanned and unevictable_pgs_rescued is inconsistent:
scanned being incremented on each subpage, rescued only on the head
(since tails already appear evictable once the head has been updated).

5.8 commit 5d91f31faf8e ("mm: swap: fix vmstats for huge page") has
established that vm_events in general (and unevictable_pgs_rescued in
particular) should count every subpage: so follow that precedent here.

Do this in such a way that if mem_cgroup_page_lruvec() is made stricter
(to check page->mem_cgroup is always set), no problem: skip the tails
before calling it, and add thp_nr_pages() to vmstats on the head.

Signed-off-by: Hugh Dickins
---
Nothing here worth going to stable, since it's just a testing config
that is fixed, whose event numbers are not very important; but this
will be needed before Alex Shi's warning, and might as well go in now.

The callers of check_move_unevictable_pages() could be optimized,
to skip over tails: but Matthew Wilcox has other changes in flight
there, so let's skip the optimization for now.
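For illustration only (not part of this patch): a minimal sketch of the
caller pattern that feeds THP subpages into check_move_unevictable_pages().
A pagevec is filled from the mapping and handed over wholesale, so with
shmem_enabled "force" the vector can contain tail pages of a huge page.
This is a simplified approximation of what shmem_unlock_mapping() does,
written against the 5.9-era API; the function name and the use of
pagevec_lookup() here are assumptions for illustration (the real shmem
code uses find_get_entries() and also handles swap entries).

	#include <linux/pagevec.h>
	#include <linux/pagemap.h>
	#include <linux/swap.h>
	#include <linux/sched.h>

	/* Hypothetical helper, roughly mirroring shmem_unlock_mapping(). */
	static void unlock_mapping_sketch(struct address_space *mapping)
	{
		struct pagevec pvec;
		pgoff_t index = 0;

		pagevec_init(&pvec);
		while (!mapping_unevictable(mapping)) {
			/* Fill the pagevec; subpages of a THP may be included. */
			if (!pagevec_lookup(&pvec, mapping, &index))
				break;
			/* Every page is scanned; evictable ones are moved off
			 * the unevictable list and counted as rescued. */
			check_move_unevictable_pages(&pvec);
			pagevec_release(&pvec);
			cond_resched();
		}
	}
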
 mm/vmscan.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

--- 5.9-rc2/mm/vmscan.c	2020-08-16 17:32:50.721507348 -0700
+++ linux/mm/vmscan.c	2020-08-28 17:47:10.595580876 -0700
@@ -4260,8 +4260,14 @@ void check_move_unevictable_pages(struct
 	for (i = 0; i < pvec->nr; i++) {
 		struct page *page = pvec->pages[i];
 		struct pglist_data *pagepgdat = page_pgdat(page);
+		int nr_pages;
+
+		if (PageTransTail(page))
+			continue;
+
+		nr_pages = thp_nr_pages(page);
+		pgscanned += nr_pages;
 
-		pgscanned++;
 		if (pagepgdat != pgdat) {
 			if (pgdat)
 				spin_unlock_irq(&pgdat->lru_lock);
@@ -4280,7 +4286,7 @@ void check_move_unevictable_pages(struct
 			ClearPageUnevictable(page);
 			del_page_from_lru_list(page, lruvec, LRU_UNEVICTABLE);
 			add_page_to_lru_list(page, lruvec, lru);
-			pgrescued++;
+			pgrescued += nr_pages;
 		}
 	}
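
For illustration only (not part of the patch): with both hunks applied,
the per-page accounting in check_move_unevictable_pages() reads roughly
as below. The parts outside the hunks (lruvec lookup, the LRU and
unevictable checks, the final vm_event updates) are paraphrased from the
surrounding 5.9-rc2 context rather than quoted from this diff, so treat
them as an approximation.

	for (i = 0; i < pvec->nr; i++) {
		struct page *page = pvec->pages[i];
		struct pglist_data *pagepgdat = page_pgdat(page);
		int nr_pages;

		/* Tail pages of a THP are accounted via their head page. */
		if (PageTransTail(page))
			continue;

		/* Count every subpage as scanned, following 5d91f31faf8e. */
		nr_pages = thp_nr_pages(page);
		pgscanned += nr_pages;

		/* ... unchanged: switch the pgdat lru_lock if needed, look up
		 * the lruvec with mem_cgroup_page_lruvec(page, pgdat), and
		 * skip pages that are not PageLRU or not PageUnevictable ... */

		if (page_evictable(page)) {
			/* ... unchanged: move the page from the unevictable
			 * list back to its evictable LRU list ... */
			pgrescued += nr_pages;	/* now matches pgscanned */
		}
	}

	/* Afterwards (unchanged): pgscanned and pgrescued feed the
	 * UNEVICTABLE_PGSCANNED and UNEVICTABLE_PGRESCUED vm_events. */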