From: Johannes Weiner <hannes@cmpxchg.org>
To: Joonsoo Kim, Alex Shi
Shutemov" , Roman Gushchin , linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@fb.com Subject: [PATCH 17/18] mm: memcontrol: delete unused lrucare handling Date: Mon, 20 Apr 2020 18:11:25 -0400 Message-Id: <20200420221126.341272-18-hannes@cmpxchg.org> X-Mailer: git-send-email 2.26.0 In-Reply-To: <20200420221126.341272-1-hannes@cmpxchg.org> References: <20200420221126.341272-1-hannes@cmpxchg.org> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Signed-off-by: Johannes Weiner --- include/linux/memcontrol.h | 5 ++-- kernel/events/uprobes.c | 3 +- mm/filemap.c | 2 +- mm/huge_memory.c | 7 ++--- mm/khugepaged.c | 4 +-- mm/memcontrol.c | 57 +++----------------------------------- mm/memory.c | 8 +++--- mm/migrate.c | 2 +- mm/shmem.c | 2 +- mm/swap_state.c | 2 +- mm/userfaultfd.c | 2 +- 11 files changed, 21 insertions(+), 73 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index d458f1d90aa4..4b868e5a687f 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -357,8 +357,7 @@ static inline unsigned long mem_cgroup_protection(struct mem_cgroup *memcg, enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root, struct mem_cgroup *memcg); -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask, - bool lrucare); +int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask); void mem_cgroup_uncharge(struct page *page); void mem_cgroup_uncharge_list(struct list_head *page_list); @@ -839,7 +838,7 @@ static inline enum mem_cgroup_protection mem_cgroup_protected( } static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm, - gfp_t gfp_mask, bool lrucare) + gfp_t gfp_mask) { return 0; } diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c index 4253c153e985..eddc8db96027 100644 --- a/kernel/events/uprobes.c +++ b/kernel/events/uprobes.c @@ -167,8 +167,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr, addr + PAGE_SIZE); if (new_page) { - err = mem_cgroup_charge(new_page, vma->vm_mm, GFP_KERNEL, - false); + err = mem_cgroup_charge(new_page, vma->vm_mm, GFP_KERNEL); if (err) return err; } diff --git a/mm/filemap.c b/mm/filemap.c index a10bd6696049..f73b221314df 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -845,7 +845,7 @@ static int __add_to_page_cache_locked(struct page *page, page->index = offset; if (!huge) { - error = mem_cgroup_charge(page, current->mm, gfp_mask, false); + error = mem_cgroup_charge(page, current->mm, gfp_mask); if (error) goto error; } diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 0b33eaf0740a..35a716720e26 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -593,7 +593,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf, VM_BUG_ON_PAGE(!PageCompound(page), page); - if (mem_cgroup_charge(page, vma->vm_mm, gfp, false)) { + if (mem_cgroup_charge(page, vma->vm_mm, gfp)) { put_page(page); count_vm_event(THP_FAULT_FALLBACK); count_vm_event(THP_FAULT_FALLBACK_CHARGE); @@ -1276,7 +1276,7 @@ static vm_fault_t do_huge_pmd_wp_page_fallback(struct vm_fault *vmf, vmf->address, page_to_nid(page)); if (unlikely(!pages[i] || mem_cgroup_charge(pages[i], vma->vm_mm, - GFP_KERNEL, false))) { + GFP_KERNEL))) { if (pages[i]) put_page(pages[i]); while (--i >= 0) @@ -1430,8 +1430,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd) goto out; } 
-	if (unlikely(mem_cgroup_charge(new_page, vma->vm_mm, huge_gfp,
-				       false))) {
+	if (unlikely(mem_cgroup_charge(new_page, vma->vm_mm, huge_gfp))) {
 		put_page(new_page);
 		split_huge_pmd(vma, vmf->pmd, vmf->address);
 		if (page)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 5cf8082fb038..28c6d84db4ee 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -973,7 +973,7 @@ static void collapse_huge_page(struct mm_struct *mm,
 		goto out_nolock;
 	}
 
-	if (unlikely(mem_cgroup_charge(new_page, mm, gfp, false))) {
+	if (unlikely(mem_cgroup_charge(new_page, mm, gfp))) {
 		result = SCAN_CGROUP_CHARGE_FAIL;
 		goto out_nolock;
 	}
@@ -1527,7 +1527,7 @@ static void collapse_file(struct mm_struct *mm,
 		goto out;
 	}
 
-	if (unlikely(mem_cgroup_charge(new_page, mm, gfp, false))) {
+	if (unlikely(mem_cgroup_charge(new_page, mm, gfp))) {
 		result = SCAN_CGROUP_CHARGE_FAIL;
 		goto out;
 	}
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 1d7408a8744a..a8cce52b6b4d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2601,51 +2601,9 @@ static void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages)
 	css_put_many(&memcg->css, nr_pages);
 }
 
-static void lock_page_lru(struct page *page, int *isolated)
+static void commit_charge(struct page *page, struct mem_cgroup *memcg)
 {
-	pg_data_t *pgdat = page_pgdat(page);
-
-	spin_lock_irq(&pgdat->lru_lock);
-	if (PageLRU(page)) {
-		struct lruvec *lruvec;
-
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
-		ClearPageLRU(page);
-		del_page_from_lru_list(page, lruvec, page_lru(page));
-		*isolated = 1;
-	} else
-		*isolated = 0;
-}
-
-static void unlock_page_lru(struct page *page, int isolated)
-{
-	pg_data_t *pgdat = page_pgdat(page);
-
-	if (isolated) {
-		struct lruvec *lruvec;
-
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
-		VM_BUG_ON_PAGE(PageLRU(page), page);
-		SetPageLRU(page);
-		add_page_to_lru_list(page, lruvec, page_lru(page));
-	}
-	spin_unlock_irq(&pgdat->lru_lock);
-}
-
-static void commit_charge(struct page *page, struct mem_cgroup *memcg,
-			  bool lrucare)
-{
-	int isolated;
-
 	VM_BUG_ON_PAGE(page->mem_cgroup, page);
-
-	/*
-	 * In some cases, SwapCache and FUSE(splice_buf->radixtree), the page
-	 * may already be on some other mem_cgroup's LRU. Take care of it.
-	 */
-	if (lrucare)
-		lock_page_lru(page, &isolated);
-
 	/*
 	 * Nobody should be changing or seriously looking at
 	 * page->mem_cgroup at this point:
@@ -2661,9 +2619,6 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg,
 	 * have the page locked
 	 */
 	page->mem_cgroup = memcg;
-
-	if (lrucare)
-		unlock_page_lru(page, isolated);
 }
 
 #ifdef CONFIG_MEMCG_KMEM
@@ -6433,22 +6388,18 @@ enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
  * @page: page to charge
  * @mm: mm context of the victim
  * @gfp_mask: reclaim mode
- * @lrucare: page might be on the LRU already
  *
  * Try to charge @page to the memcg that @mm belongs to, reclaiming
  * pages according to @gfp_mask if necessary.
  *
 * Returns 0 on success. Otherwise, an error code is returned.
  */
-int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
-		      bool lrucare)
+int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
 {
 	unsigned int nr_pages = hpage_nr_pages(page);
 	struct mem_cgroup *memcg = NULL;
 	int ret = 0;
 
-	VM_BUG_ON_PAGE(PageLRU(page) && !lrucare, page);
-
 	if (mem_cgroup_disabled())
 		goto out;
 
@@ -6482,7 +6433,7 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
 	if (ret)
 		goto out_put;
 
-	commit_charge(page, memcg, lrucare);
+	commit_charge(page, memcg);
 
 	local_irq_disable();
 	mem_cgroup_charge_statistics(memcg, page, nr_pages);
@@ -6685,7 +6636,7 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
 		page_counter_charge(&memcg->memsw, nr_pages);
 	css_get_many(&memcg->css, nr_pages);
 
-	commit_charge(newpage, memcg, false);
+	commit_charge(newpage, memcg);
 
 	local_irq_save(flags);
 	mem_cgroup_charge_statistics(memcg, newpage, nr_pages);
diff --git a/mm/memory.c b/mm/memory.c
index 5d266532fc40..0ad4db56bea2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2677,7 +2677,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 		}
 	}
 
-	if (mem_cgroup_charge(new_page, mm, GFP_KERNEL, false))
+	if (mem_cgroup_charge(new_page, mm, GFP_KERNEL))
 		goto oom_free_new;
 	cgroup_throttle_swaprate(new_page, GFP_KERNEL);
 
@@ -3136,7 +3136,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			/* Tell memcg to use swap ownership records */
 			SetPageSwapCache(page);
 			err = mem_cgroup_charge(page, vma->vm_mm,
-						GFP_KERNEL, false);
+						GFP_KERNEL);
 			ClearPageSwapCache(page);
 			if (err)
 				goto out_page;
@@ -3360,7 +3360,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	if (!page)
 		goto oom;
 
-	if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL, false))
+	if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL))
 		goto oom_free_page;
 	cgroup_throttle_swaprate(page, GFP_KERNEL);
 
@@ -3856,7 +3856,7 @@ static vm_fault_t do_cow_fault(struct vm_fault *vmf)
 	if (!vmf->cow_page)
 		return VM_FAULT_OOM;
 
-	if (mem_cgroup_charge(vmf->cow_page, vma->vm_mm, GFP_KERNEL, false)) {
+	if (mem_cgroup_charge(vmf->cow_page, vma->vm_mm, GFP_KERNEL)) {
 		put_page(vmf->cow_page);
 		return VM_FAULT_OOM;
 	}
diff --git a/mm/migrate.c b/mm/migrate.c
index a3361c744069..ced652d069ee 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2792,7 +2792,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
 	if (unlikely(anon_vma_prepare(vma)))
 		goto abort;
 
-	if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL, false))
+	if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL))
 		goto abort;
 
 	/*
diff --git a/mm/shmem.c b/mm/shmem.c
index 966f150a4823..add10d448bc6 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -624,7 +624,7 @@ static int shmem_add_to_page_cache(struct page *page,
 	page->index = index;
 
 	if (!PageSwapCache(page)) {
-		error = mem_cgroup_charge(page, charge_mm, gfp, false);
+		error = mem_cgroup_charge(page, charge_mm, gfp);
 		if (error) {
 			if (PageTransHuge(page)) {
 				count_vm_event(THP_FILE_FALLBACK);
diff --git a/mm/swap_state.c b/mm/swap_state.c
index f3b9073bfff3..26fded65c30d 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -427,7 +427,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 	if (add_to_swap_cache(page, entry, gfp_mask & GFP_KERNEL))
 		goto fail_unlock;
 
-	if (mem_cgroup_charge(page, NULL, gfp_mask & GFP_KERNEL, false))
+	if (mem_cgroup_charge(page, NULL, gfp_mask & GFP_KERNEL))
 		goto fail_delete;
 
 	/* Initiate read into locked page */
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 2745489415cc..7f5194046b01 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -96,7 +96,7 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm,
 	__SetPageUptodate(page);
 
 	ret = -ENOMEM;
-	if (mem_cgroup_charge(page, dst_mm, GFP_KERNEL, false))
+	if (mem_cgroup_charge(page, dst_mm, GFP_KERNEL))
 		goto out_release;
 
 	_dst_pte = pte_mkdirty(mk_pte(page, dst_vma->vm_page_prot));
-- 
2.26.0
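
The conversion at each call site above is purely mechanical: the trailing
boolean goes away and nothing else changes. A condensed before/after sketch
of a typical caller (illustrative only; page, vma, and the oom label stand
in for whatever the surrounding function actually uses):

	/* Before: every remaining caller passed lrucare=false. */
	if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL, false))
		goto oom;

	/* After: the flag is gone; commit_charge() no longer touches the LRU. */
	if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL))
		goto oom;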