From: Johannes Weiner <hannes@cmpxchg.org>
To: Joonsoo Kim, Alex Shi
Cc: Shakeel Butt, Hugh Dickins, Michal Hocko, "Kirill A. Shutemov", Roman Gushchin, linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 05/18] mm: memcontrol: convert page cache to a new mem_cgroup_charge() API
Date: Mon, 20 Apr 2020 18:11:13 -0400
Message-Id: <20200420221126.341272-6-hannes@cmpxchg.org>
In-Reply-To: <20200420221126.341272-1-hannes@cmpxchg.org>
References: <20200420221126.341272-1-hannes@cmpxchg.org>

The try/commit/cancel protocol that memcg uses dates back to when pages used to be uncharged upon removal from the page cache, and thus couldn't be committed before the insertion had succeeded.
Nowadays, pages are uncharged when they are physically freed; it doesn't matter whether the insertion was successful or not. For the page cache, the transaction dance has become unnecessary.

Introduce a mem_cgroup_charge() function that simply charges a newly allocated page to a cgroup and sets up page->mem_cgroup in one single step. If the insertion fails, the caller doesn't have to do anything but free/put the page.

Then switch the page cache over to this new API.

Subsequent patches will also convert anon pages, but that needs a bit more prep work. Right now, memcg depends on page->mapping being already set up at the time of charging, so that it can maintain its own MEMCG_CACHE and MEMCG_RSS counters. For anon, page->mapping is set under the same pte lock under which the page is published, so a single charge point that can block doesn't work there just yet.

The following prep patches will replace the private memcg counters with the generic vmstat counters, thus removing the page->mapping dependency, then complete the transition to the new single-point charge API and delete the old transactional scheme.
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 include/linux/memcontrol.h | 10 ++++
 mm/filemap.c               | 24 ++++------
 mm/memcontrol.c            | 27 +++++++++++
 mm/shmem.c                 | 97 +++++++++++++++++---------------------
 4 files changed, 89 insertions(+), 69 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index c7875a48c8c1..5e8b0e38f145 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -367,6 +367,10 @@ int mem_cgroup_try_charge_delay(struct page *page, struct mm_struct *mm,
 void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg,
 			      bool lrucare);
 void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg);
+
+int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
+		      bool lrucare);
+
 void mem_cgroup_uncharge(struct page *page);
 void mem_cgroup_uncharge_list(struct list_head *page_list);
 
@@ -872,6 +876,12 @@ static inline void mem_cgroup_cancel_charge(struct page *page,
 {
 }
 
+static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
+				    gfp_t gfp_mask, bool lrucare)
+{
+	return 0;
+}
+
 static inline void mem_cgroup_uncharge(struct page *page)
 {
 }
diff --git a/mm/filemap.c b/mm/filemap.c
index 5b31af9d5b1b..5bdbda965177 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -832,7 +832,6 @@ static int __add_to_page_cache_locked(struct page *page,
 {
 	XA_STATE(xas, &mapping->i_pages, offset);
 	int huge = PageHuge(page);
-	struct mem_cgroup *memcg;
 	int error;
 	void *old;
 
@@ -840,17 +839,16 @@ static int __add_to_page_cache_locked(struct page *page,
 	VM_BUG_ON_PAGE(PageSwapBacked(page), page);
 	mapping_set_update(&xas, mapping);
 
-	if (!huge) {
-		error = mem_cgroup_try_charge(page, current->mm,
-					      gfp_mask, &memcg);
-		if (error)
-			return error;
-	}
-
 	get_page(page);
 	page->mapping = mapping;
 	page->index = offset;
 
+	if (!huge) {
+		error = mem_cgroup_charge(page, current->mm, gfp_mask, false);
+		if (error)
+			goto error;
+	}
+
 	do {
 		xas_lock_irq(&xas);
 		old = xas_load(&xas);
@@ -874,20 +872,18 @@ static int __add_to_page_cache_locked(struct page *page,
 		xas_unlock_irq(&xas);
 	} while (xas_nomem(&xas, gfp_mask & GFP_RECLAIM_MASK));
 
-	if (xas_error(&xas))
+	if (xas_error(&xas)) {
+		error = xas_error(&xas);
 		goto error;
+	}
 
-	if (!huge)
-		mem_cgroup_commit_charge(page, memcg, false);
 	trace_mm_filemap_add_to_page_cache(page);
 	return 0;
 error:
 	page->mapping = NULL;
 	/* Leave page->index set: truncation relies upon it */
-	if (!huge)
-		mem_cgroup_cancel_charge(page, memcg);
 	put_page(page);
-	return xas_error(&xas);
+	return error;
 }
 ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 711d6dd5cbb1..b38c0a672d26 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6577,6 +6577,33 @@ void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg)
 	cancel_charge(memcg, nr_pages);
 }
 
+/**
+ * mem_cgroup_charge - charge a newly allocated page to a cgroup
+ * @page: page to charge
+ * @mm: mm context of the victim
+ * @gfp_mask: reclaim mode
+ * @lrucare: page might be on the LRU already
+ *
+ * Try to charge @page to the memcg that @mm belongs to, reclaiming
+ * pages according to @gfp_mask if necessary.
+ *
+ * Returns 0 on success. Otherwise, an error code is returned.
+ */
+int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
+		      bool lrucare)
+{
+	struct mem_cgroup *memcg;
+	int ret;
+
+	VM_BUG_ON_PAGE(!page->mapping, page);
+
+	ret = mem_cgroup_try_charge(page, mm, gfp_mask, &memcg);
+	if (ret)
+		return ret;
+	mem_cgroup_commit_charge(page, memcg, lrucare);
+	return 0;
+}
+
 struct uncharge_gather {
 	struct mem_cgroup *memcg;
 	unsigned long pgpgout;
diff --git a/mm/shmem.c b/mm/shmem.c
index 52c66801321e..2384f6c7ef71 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -605,11 +605,13 @@ static inline bool is_huge_enabled(struct shmem_sb_info *sbinfo)
  */
 static int shmem_add_to_page_cache(struct page *page,
 				   struct address_space *mapping,
-				   pgoff_t index, void *expected, gfp_t gfp)
+				   pgoff_t index, void *expected, gfp_t gfp,
+				   struct mm_struct *charge_mm)
 {
 	XA_STATE_ORDER(xas, &mapping->i_pages, index, compound_order(page));
 	unsigned long i = 0;
 	unsigned long nr = compound_nr(page);
+	int error;
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
 	VM_BUG_ON_PAGE(index != round_down(index, nr), page);
@@ -621,6 +623,16 @@ static int shmem_add_to_page_cache(struct page *page,
 	page->mapping = mapping;
 	page->index = index;
 
+	error = mem_cgroup_charge(page, charge_mm, gfp, PageSwapCache(page));
+	if (error) {
+		if (!PageSwapCache(page) && PageTransHuge(page)) {
+			count_vm_event(THP_FILE_FALLBACK);
+			count_vm_event(THP_FILE_FALLBACK_CHARGE);
+		}
+		goto error;
+	}
+	cgroup_throttle_swaprate(page, gfp);
+
 	do {
 		void *entry;
 		xas_lock_irq(&xas);
@@ -648,12 +660,15 @@ static int shmem_add_to_page_cache(struct page *page,
 	} while (xas_nomem(&xas, gfp));
 
 	if (xas_error(&xas)) {
-		page->mapping = NULL;
-		page_ref_sub(page, nr);
-		return xas_error(&xas);
+		error = xas_error(&xas);
+		goto error;
 	}
 
 	return 0;
+error:
+	page->mapping = NULL;
+	page_ref_sub(page, nr);
+	return error;
 }
 
 /*
@@ -1619,7 +1634,6 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
 	struct address_space *mapping = inode->i_mapping;
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	struct mm_struct *charge_mm = vma ? vma->vm_mm : current->mm;
-	struct mem_cgroup *memcg;
 	struct page *page;
 	swp_entry_t swap;
 	int error;
@@ -1664,29 +1678,22 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
 		goto failed;
 	}
 
-	error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg);
-	if (!error) {
-		error = shmem_add_to_page_cache(page, mapping, index,
-						swp_to_radix_entry(swap), gfp);
-		/*
-		 * We already confirmed swap under page lock, and make
-		 * no memory allocation here, so usually no possibility
-		 * of error; but free_swap_and_cache() only trylocks a
-		 * page, so it is just possible that the entry has been
-		 * truncated or holepunched since swap was confirmed.
-		 * shmem_undo_range() will have done some of the
-		 * unaccounting, now delete_from_swap_cache() will do
-		 * the rest.
-		 */
-		if (error) {
-			mem_cgroup_cancel_charge(page, memcg);
-			delete_from_swap_cache(page);
-		}
-	}
-	if (error)
+	error = shmem_add_to_page_cache(page, mapping, index,
+					swp_to_radix_entry(swap), gfp,
+					charge_mm);
+	/*
+	 * We already confirmed swap under page lock, and make no
+	 * memory allocation here, so usually no possibility of error;
+	 * but free_swap_and_cache() only trylocks a page, so it is
+	 * just possible that the entry has been truncated or
+	 * holepunched since swap was confirmed. shmem_undo_range()
+	 * will have done some of the unaccounting, now
+	 * delete_from_swap_cache() will do the rest.
+	 */
+	if (error) {
+		delete_from_swap_cache(page);
 		goto failed;
-
-	mem_cgroup_commit_charge(page, memcg, true);
+	}
 
 	spin_lock_irq(&info->lock);
 	info->swapped--;
@@ -1733,7 +1740,6 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	struct shmem_sb_info *sbinfo;
 	struct mm_struct *charge_mm;
-	struct mem_cgroup *memcg;
 	struct page *page;
 	enum sgp_type sgp_huge = sgp;
 	pgoff_t hindex = index;
@@ -1858,21 +1864,11 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 		if (sgp == SGP_WRITE)
 			__SetPageReferenced(page);
 
-		error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg);
-		if (error) {
-			if (PageTransHuge(page)) {
-				count_vm_event(THP_FILE_FALLBACK);
-				count_vm_event(THP_FILE_FALLBACK_CHARGE);
-			}
-			goto unacct;
-		}
 		error = shmem_add_to_page_cache(page, mapping, hindex,
-						NULL, gfp & GFP_RECLAIM_MASK);
-		if (error) {
-			mem_cgroup_cancel_charge(page, memcg);
+						NULL, gfp & GFP_RECLAIM_MASK,
+						charge_mm);
+		if (error)
 			goto unacct;
-		}
-		mem_cgroup_commit_charge(page, memcg, false);
 		lru_cache_add_anon(page);
 
 		spin_lock_irq(&info->lock);
@@ -2307,7 +2303,6 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 	struct address_space *mapping = inode->i_mapping;
 	gfp_t gfp = mapping_gfp_mask(mapping);
 	pgoff_t pgoff = linear_page_index(dst_vma, dst_addr);
-	struct mem_cgroup *memcg;
 	spinlock_t *ptl;
 	void *page_kaddr;
 	struct page *page;
@@ -2357,16 +2352,10 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 	if (unlikely(offset >= max_off))
 		goto out_release;
 
-	ret = mem_cgroup_try_charge_delay(page, dst_mm, gfp, &memcg);
-	if (ret)
-		goto out_release;
-
 	ret = shmem_add_to_page_cache(page, mapping, pgoff, NULL,
-				      gfp & GFP_RECLAIM_MASK);
+				      gfp & GFP_RECLAIM_MASK, dst_mm);
 	if (ret)
-		goto out_release_uncharge;
-
-	mem_cgroup_commit_charge(page, memcg, false);
+		goto out_release;
 
 	_dst_pte = mk_pte(page, dst_vma->vm_page_prot);
 	if (dst_vma->vm_flags & VM_WRITE)
@@ -2387,11 +2376,11 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 	ret = -EFAULT;
 	max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
 	if (unlikely(offset >= max_off))
-		goto out_release_uncharge_unlock;
+		goto out_release_unlock;
 
 	ret = -EEXIST;
 	if (!pte_none(*dst_pte))
-		goto out_release_uncharge_unlock;
+		goto out_release_unlock;
 
 	lru_cache_add_anon(page);
 
@@ -2412,12 +2401,10 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 	ret = 0;
 out:
 	return ret;
-out_release_uncharge_unlock:
+out_release_unlock:
 	pte_unmap_unlock(dst_pte, ptl);
 	ClearPageDirty(page);
 	delete_from_page_cache(page);
-out_release_uncharge:
-	mem_cgroup_cancel_charge(page, memcg);
out_release:
 	unlock_page(page);
 	put_page(page);
-- 
2.26.0