From: Johannes Weiner <hannes@cmpxchg.org>
To: linux-mm@kvack.org
Cc: Rik van Riel, Minchan Kim, Michal Hocko, Andrew Morton, Joonsoo Kim,
    linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 01/14] mm: fix LRU balancing effect of new transparent huge pages
Date: Wed, 20 May 2020 19:25:12 -0400
Message-Id: <20200520232525.798933-2-hannes@cmpxchg.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200520232525.798933-1-hannes@cmpxchg.org>
References: <20200520232525.798933-1-hannes@cmpxchg.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently, THP are counted as single pages until they are split right
before being swapped out. However, at that point the VM is already in
the middle of reclaim, and adjusting the LRU balance then is useless.

Always account THP by the number of basepages, and remove the fixup
from the splitting path.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Rik van Riel
Acked-by: Michal Hocko
Acked-by: Minchan Kim
---
 mm/swap.c | 25 +++++++++++--------------
 1 file changed, 11 insertions(+), 14 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index bf9a79fed62d..68eae1e2787a 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -263,13 +263,14 @@ void rotate_reclaimable_page(struct page *page)
 }
 
 static void update_page_reclaim_stat(struct lruvec *lruvec,
-				     int file, int rotated)
+				     int file, int rotated,
+				     unsigned int nr_pages)
 {
 	struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat;
 
-	reclaim_stat->recent_scanned[file]++;
+	reclaim_stat->recent_scanned[file] += nr_pages;
 	if (rotated)
-		reclaim_stat->recent_rotated[file]++;
+		reclaim_stat->recent_rotated[file] += nr_pages;
 }
 
 static void __activate_page(struct page *page, struct lruvec *lruvec,
@@ -286,7 +287,7 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
 		trace_mm_lru_activate(page);
 
 		__count_vm_event(PGACTIVATE);
-		update_page_reclaim_stat(lruvec, file, 1);
+		update_page_reclaim_stat(lruvec, file, 1, hpage_nr_pages(page));
 	}
 }
 
@@ -541,7 +542,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 
 	if (active)
 		__count_vm_event(PGDEACTIVATE);
-	update_page_reclaim_stat(lruvec, file, 0);
+	update_page_reclaim_stat(lruvec, file, 0, hpage_nr_pages(page));
 }
 
 static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
@@ -557,7 +558,7 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
 		add_page_to_lru_list(page, lruvec, lru);
 
 		__count_vm_events(PGDEACTIVATE, hpage_nr_pages(page));
-		update_page_reclaim_stat(lruvec, file, 0);
+		update_page_reclaim_stat(lruvec, file, 0, hpage_nr_pages(page));
 	}
 }
 
@@ -582,7 +583,7 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
 
 		__count_vm_events(PGLAZYFREE, hpage_nr_pages(page));
 		count_memcg_page_event(page, PGLAZYFREE);
-		update_page_reclaim_stat(lruvec, 1, 0);
+		update_page_reclaim_stat(lruvec, 1, 0, hpage_nr_pages(page));
 	}
 }
 
@@ -890,8 +891,6 @@ EXPORT_SYMBOL(__pagevec_release);
 void lru_add_page_tail(struct page *page, struct page *page_tail,
 		       struct lruvec *lruvec, struct list_head *list)
 {
-	const int file = 0;
-
 	VM_BUG_ON_PAGE(!PageHead(page), page);
 	VM_BUG_ON_PAGE(PageCompound(page_tail), page);
 	VM_BUG_ON_PAGE(PageLRU(page_tail), page);
@@ -917,9 +916,6 @@ void lru_add_page_tail(struct page *page, struct page *page_tail,
 		add_page_to_lru_list_tail(page_tail, lruvec,
 					  page_lru(page_tail));
 	}
-
-	if (!PageUnevictable(page))
-		update_page_reclaim_stat(lruvec, file, PageActive(page_tail));
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
@@ -962,8 +958,9 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 
 	if (page_evictable(page)) {
 		lru = page_lru(page);
-		update_page_reclaim_stat(lruvec, page_is_file_lru(page),
-					 PageActive(page));
+		update_page_reclaim_stat(lruvec, is_file_lru(lru),
+					 PageActive(page),
+					 hpage_nr_pages(page));
 		if (was_unevictable)
 			count_vm_event(UNEVICTABLE_PGRESCUED);
 	} else {
-- 
2.26.2
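
For readers following along outside the kernel tree, here is a minimal
standalone C sketch of the accounting change. It is a toy model, not kernel
code: toy_reclaim_stat and toy_update_reclaim_stat are illustrative stand-ins
for the lruvec reclaim stats and update_page_reclaim_stat(). The point it
shows is that a 2MB THP (512 base pages) now contributes 512 to the
recent_scanned/recent_rotated counters instead of 1, so huge pages affect the
anon/file LRU balance in proportion to their size.

	/* Toy model of the reclaim-stat accounting after this patch. */
	#include <stdio.h>

	struct toy_reclaim_stat {
		unsigned long recent_scanned[2];	/* [0] = anon, [1] = file */
		unsigned long recent_rotated[2];
	};

	/* Account by base-page count instead of incrementing by one. */
	static void toy_update_reclaim_stat(struct toy_reclaim_stat *stat,
					    int file, int rotated,
					    unsigned int nr_pages)
	{
		stat->recent_scanned[file] += nr_pages;
		if (rotated)
			stat->recent_rotated[file] += nr_pages;
	}

	int main(void)
	{
		struct toy_reclaim_stat stat = { { 0, 0 }, { 0, 0 } };

		/* A single 4kB anon page counts as 1... */
		toy_update_reclaim_stat(&stat, 0, 1, 1);
		/* ...while a 2MB THP (512 base pages) counts as 512, not 1. */
		toy_update_reclaim_stat(&stat, 0, 1, 512);

		printf("anon scanned=%lu rotated=%lu\n",
		       stat.recent_scanned[0], stat.recent_rotated[0]);
		return 0;
	}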