From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, Christoph Hellwig <hch@lst.de>
Subject: [PATCH v14 043/138] mm/memcg: Convert mem_cgroup_migrate() to take folios
Date: Thu, 15 Jul 2021 04:35:29 +0100
Message-Id: <20210715033704.692967-44-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Convert all callers of mem_cgroup_migrate() to call page_folio() first.
They all look like they're using head pages already, but this proves it.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/memcontrol.h |  4 ++--
 mm/filemap.c               |  4 +++-
 mm/memcontrol.c            | 35 +++++++++++++++++------------------
 mm/migrate.c               |  4 +++-
 mm/shmem.c                 |  5 ++++-
 5 files changed, 29 insertions(+), 23 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 941a1a7131c9..d75a708eac13 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -712,7 +712,7 @@ void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry);
 void mem_cgroup_uncharge(struct folio *folio);
 void mem_cgroup_uncharge_list(struct list_head *page_list);
 
-void mem_cgroup_migrate(struct page *oldpage, struct page *newpage);
+void mem_cgroup_migrate(struct folio *old, struct folio *new);
 
 /**
  * mem_cgroup_lruvec - get the lru list vector for a memcg & node
@@ -1214,7 +1214,7 @@ static inline void mem_cgroup_uncharge_list(struct list_head *page_list)
 {
 }
 
-static inline void mem_cgroup_migrate(struct page *old, struct page *new)
+static inline void mem_cgroup_migrate(struct folio *old, struct folio *new)
 {
 }
 
diff --git a/mm/filemap.c b/mm/filemap.c
index 31d4ecd4268e..5c4e3185ecb3 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -817,6 +817,8 @@ EXPORT_SYMBOL(file_write_and_wait_range);
  */
 void replace_page_cache_page(struct page *old, struct page *new)
 {
+	struct folio *fold = page_folio(old);
+	struct folio *fnew = page_folio(new);
 	struct address_space *mapping = old->mapping;
 	void (*freepage)(struct page *) = mapping->a_ops->freepage;
 	pgoff_t offset = old->index;
@@ -831,7 +833,7 @@ void replace_page_cache_page(struct page *old, struct page *new)
 	new->mapping = mapping;
 	new->index = offset;
 
-	mem_cgroup_migrate(old, new);
+	mem_cgroup_migrate(fold, fnew);
 
 	xas_lock_irqsave(&xas, flags);
 	xas_store(&xas, new);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index fc94048e6451..92bbced86bdb 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6941,36 +6941,35 @@ void mem_cgroup_uncharge_list(struct list_head *page_list)
 }
 
 /**
- * mem_cgroup_migrate - charge a page's replacement
- * @oldpage: currently circulating page
- * @newpage: replacement page
+ * mem_cgroup_migrate - Charge a folio's replacement.
+ * @old: Currently circulating folio.
+ * @new: Replacement folio.
  *
- * Charge @newpage as a replacement page for @oldpage. @oldpage will
+ * Charge @new as a replacement folio for @old. @old will
  * be uncharged upon free.
  *
- * Both pages must be locked, @newpage->mapping must be set up.
+ * Both folios must be locked, @new->mapping must be set up.
  */
-void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
+void mem_cgroup_migrate(struct folio *old, struct folio *new)
 {
-	struct folio *newfolio = page_folio(newpage);
 	struct mem_cgroup *memcg;
-	unsigned int nr_pages = folio_nr_pages(newfolio);
+	unsigned int nr_pages = folio_nr_pages(new);
 	unsigned long flags;
 
-	VM_BUG_ON_PAGE(!PageLocked(oldpage), oldpage);
-	VM_BUG_ON_FOLIO(!folio_test_locked(newfolio), newfolio);
-	VM_BUG_ON_FOLIO(PageAnon(oldpage) != folio_test_anon(newfolio), newfolio);
-	VM_BUG_ON_FOLIO(compound_nr(oldpage) != nr_pages, newfolio);
+	VM_BUG_ON_FOLIO(!folio_test_locked(old), old);
+	VM_BUG_ON_FOLIO(!folio_test_locked(new), new);
+	VM_BUG_ON_FOLIO(folio_test_anon(old) != folio_test_anon(new), new);
+	VM_BUG_ON_FOLIO(folio_nr_pages(old) != nr_pages, new);
 
 	if (mem_cgroup_disabled())
 		return;
 
-	/* Page cache replacement: new page already charged? */
-	if (folio_memcg(newfolio))
+	/* Page cache replacement: new folio already charged? */
+	if (folio_memcg(new))
 		return;
 
-	memcg = page_memcg(oldpage);
-	VM_WARN_ON_ONCE_PAGE(!memcg, oldpage);
+	memcg = folio_memcg(old);
+	VM_WARN_ON_ONCE_FOLIO(!memcg, old);
 	if (!memcg)
 		return;
 
@@ -6982,11 +6981,11 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
 	}
 
 	css_get(&memcg->css);
-	commit_charge(newfolio, memcg);
+	commit_charge(new, memcg);
 
 	local_irq_save(flags);
 	mem_cgroup_charge_statistics(memcg, nr_pages);
-	memcg_check_events(memcg, page_to_nid(newpage));
+	memcg_check_events(memcg, folio_nid(new));
 	local_irq_restore(flags);
 }
 
diff --git a/mm/migrate.c b/mm/migrate.c
index b5bdae748f82..910552318df3 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -541,6 +541,8 @@ int migrate_huge_page_move_mapping(struct address_space *mapping,
  */
 void migrate_page_states(struct page *newpage, struct page *page)
 {
+	struct folio *folio = page_folio(page);
+	struct folio *newfolio = page_folio(newpage);
 	int cpupid;
 
 	if (PageError(page))
@@ -608,7 +610,7 @@ void migrate_page_states(struct page *newpage, struct page *page)
 	copy_page_owner(page, newpage);
 
 	if (!PageHuge(page))
-		mem_cgroup_migrate(page, newpage);
+		mem_cgroup_migrate(folio, newfolio);
 }
 EXPORT_SYMBOL(migrate_page_states);
 
diff --git a/mm/shmem.c b/mm/shmem.c
index 3931fed5c8d8..2fd75b4d4974 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1619,6 +1619,7 @@ static int shmem_replace_page(struct page **pagep, gfp_t gfp,
 			      struct shmem_inode_info *info, pgoff_t index)
 {
 	struct page *oldpage, *newpage;
+	struct folio *old, *new;
 	struct address_space *swap_mapping;
 	swp_entry_t entry;
 	pgoff_t swap_index;
@@ -1655,7 +1656,9 @@ static int shmem_replace_page(struct page **pagep, gfp_t gfp,
 	xa_lock_irq(&swap_mapping->i_pages);
 	error = shmem_replace_entry(swap_mapping, swap_index, oldpage, newpage);
 	if (!error) {
-		mem_cgroup_migrate(oldpage, newpage);
+		old = page_folio(oldpage);
+		new = page_folio(newpage);
+		mem_cgroup_migrate(old, new);
 		__inc_lruvec_page_state(newpage, NR_FILE_PAGES);
 		__dec_lruvec_page_state(oldpage, NR_FILE_PAGES);
 	}
-- 
2.30.2
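
As a usage illustration (not part of the patch): after this change, any
caller still holding struct page pointers is expected to go through
page_folio() before charging the replacement. The sketch below shows the
calling convention; the function example_replace_charge() and its page
arguments are hypothetical, and only mem_cgroup_migrate(), page_folio(),
struct folio, and the locking rules come from the patch above.

#include <linux/mm.h>
#include <linux/memcontrol.h>

/* Hypothetical caller, for illustration only. */
static void example_replace_charge(struct page *oldpage,
				   struct page *newpage)
{
	/* page_folio() resolves a page (head or tail) to its folio. */
	struct folio *old = page_folio(oldpage);
	struct folio *new = page_folio(newpage);

	/*
	 * Per the kernel-doc above: both folios must be locked and
	 * new->mapping must already be set up before this call.
	 */
	mem_cgroup_migrate(old, new);
}

Since page_folio() on a head page is essentially a cast, the conversion
costs the existing callers nothing at runtime; it just makes the
head-page assumption explicit and type-checked, which is what the commit
message means by "this proves it".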