From: Suren Baghdasaryan <surenb@google.com>
To: tj@kernel.org
Cc: hannes@cmpxchg.org, mhocko@kernel.org, vdavydov.dev@gmail.com,
        akpm@linux-foundation.org, shakeelb@google.com, guro@fb.com,
        songmuchun@bytedance.com, shy828301@gmail.com, alexs@kernel.org,
        alexander.h.duyck@linux.intel.com, richard.weiyang@gmail.com,
        vbabka@suse.cz, axboe@kernel.dk, iamjoonsoo.kim@lge.com,
        david@redhat.com, willy@infradead.org, apopple@nvidia.com,
        minchan@kernel.org, linmiaohe@huawei.com,
        linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
        linux-mm@kvack.org, kernel-team@android.com, surenb@google.com
Subject: [PATCH 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config
Date: Thu, 8 Jul 2021 17:05:08 -0700
Message-Id: <20210709000509.2618345-3-surenb@google.com>
In-Reply-To: <20210709000509.2618345-1-surenb@google.com>
References: <20210709000509.2618345-1-surenb@google.com>

Inline mem_cgroup_{charge/uncharge} and mem_cgroup_uncharge_list to
perform the mem_cgroup_disabled() static key check inline, before
calling the main body of the function. This minimizes the memcg
overhead in the pagefault and exit_mmap paths when memcgs are disabled
with the cgroup_disable=memory command-line option.

This change results in ~0.4% overhead reduction when running the PFT
test, comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y,
cgroup_disable=memory} configurations on an 8-core ARM64 Android
device.
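The entire win is in where the cheap mem_cgroup_disabled() test runs:
once it sits in an inline header wrapper, a disabled configuration
never pays for a call into mm/memcontrol.c at all. Below is a minimal
user-space sketch of the pattern the patch applies; the names are
illustrative, and a plain boolean stands in for the kernel's static
key:

/*
 * Minimal user-space sketch of the inline-wrapper pattern; names are
 * illustrative stand-ins, not the kernel API, and the boolean stands
 * in for mem_cgroup_disabled()'s static-key test.
 */
#include <stdbool.h>
#include <stdio.h>

static bool memcg_disabled = true;	/* mem_cgroup_disabled() stand-in */

/* Out-of-line slow path, analogous to __mem_cgroup_charge(). */
static int __charge(int page)
{
	printf("charging page %d\n", page);
	return 0;
}

/*
 * Inline fast path: with the check at the call site, a disabled
 * config pays only one predictable branch instead of a function call
 * into another translation unit followed by an early return.
 */
static inline int charge(int page)
{
	if (memcg_disabled)
		return 0;
	return __charge(page);
}

int main(void)
{
	charge(1);		/* no-op: controller disabled */
	memcg_disabled = false;
	charge(1);		/* reaches the out-of-line body */
	return 0;
}

With a real static key the condition is patched directly into the
instruction stream as a jump or nop, so neither case even pays for a
memory load.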
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 include/linux/memcontrol.h | 54 ++++++++++++++++++++++++++++++++++----
 mm/memcontrol.c            | 43 +++---------------------------
 2 files changed, 53 insertions(+), 44 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index bfe5c486f4ad..480815feb116 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -693,13 +693,59 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
 		page_counter_read(&memcg->memory);
 }
 
-int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
+struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
+
+int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
+			gfp_t gfp);
+/**
+ * mem_cgroup_charge - charge a newly allocated page to a cgroup
+ * @page: page to charge
+ * @mm: mm context of the victim
+ * @gfp_mask: reclaim mode
+ *
+ * Try to charge @page to the memcg that @mm belongs to, reclaiming
+ * pages according to @gfp_mask if necessary. if @mm is NULL, try to
+ * charge to the active memcg.
+ *
+ * Do not use this for pages allocated for swapin.
+ *
+ * Returns 0 on success. Otherwise, an error code is returned.
+ */
+static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
+				    gfp_t gfp_mask)
+{
+	struct mem_cgroup *memcg;
+	int ret;
+
+	if (mem_cgroup_disabled())
+		return 0;
+
+	memcg = get_mem_cgroup_from_mm(mm);
+	ret = __mem_cgroup_charge(page, memcg, gfp_mask);
+	css_put(&memcg->css);
+
+	return ret;
+}
+
 int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
 				  gfp_t gfp, swp_entry_t entry);
 void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry);
 
-void mem_cgroup_uncharge(struct page *page);
-void mem_cgroup_uncharge_list(struct list_head *page_list);
+void __mem_cgroup_uncharge(struct page *page);
+static inline void mem_cgroup_uncharge(struct page *page)
+{
+	if (mem_cgroup_disabled())
+		return;
+	__mem_cgroup_uncharge(page);
+}
+
+void __mem_cgroup_uncharge_list(struct list_head *page_list);
+static inline void mem_cgroup_uncharge_list(struct list_head *page_list)
+{
+	if (mem_cgroup_disabled())
+		return;
+	__mem_cgroup_uncharge_list(page_list);
+}
 
 void mem_cgroup_migrate(struct page *oldpage, struct page *newpage);
 
@@ -756,8 +802,6 @@ static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page)
 
 struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
 
-struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
-
 struct lruvec *lock_page_lruvec(struct page *page);
 struct lruvec *lock_page_lruvec_irq(struct page *page);
 struct lruvec *lock_page_lruvec_irqsave(struct page *page,
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index a228cd51c4bd..da677b55b2fe 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6701,8 +6701,8 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
 			atomic_long_read(&parent->memory.children_low_usage)));
 }
 
-static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
-			       gfp_t gfp)
+int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
+			gfp_t gfp)
 {
 	unsigned int nr_pages = thp_nr_pages(page);
 	int ret;
@@ -6722,35 +6722,6 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
 	return ret;
 }
 
-/**
- * mem_cgroup_charge - charge a newly allocated page to a cgroup
- * @page: page to charge
- * @mm: mm context of the victim
- * @gfp_mask: reclaim mode
- *
- * Try to charge @page to the memcg that @mm belongs to, reclaiming
- * pages according to @gfp_mask if necessary. if @mm is NULL, try to
- * charge to the active memcg.
- *
- * Do not use this for pages allocated for swapin.
- *
- * Returns 0 on success. Otherwise, an error code is returned.
- */
-int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
-{
-	struct mem_cgroup *memcg;
-	int ret;
-
-	if (mem_cgroup_disabled())
-		return 0;
-
-	memcg = get_mem_cgroup_from_mm(mm);
-	ret = __mem_cgroup_charge(page, memcg, gfp_mask);
-	css_put(&memcg->css);
-
-	return ret;
-}
-
 /**
  * mem_cgroup_swapin_charge_page - charge a newly allocated page for swapin
  * @page: page to charge
@@ -6921,13 +6892,10 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
  *
 * Uncharge a page previously charged with mem_cgroup_charge().
 */
-void mem_cgroup_uncharge(struct page *page)
+void __mem_cgroup_uncharge(struct page *page)
 {
 	struct uncharge_gather ug;
 
-	if (mem_cgroup_disabled())
-		return;
-
 	/* Don't touch page->lru of any random page, pre-check: */
 	if (!page_memcg(page))
 		return;
@@ -6944,14 +6912,11 @@ void mem_cgroup_uncharge(struct page *page)
  * Uncharge a list of pages previously charged with
  * mem_cgroup_charge().
  */
-void mem_cgroup_uncharge_list(struct list_head *page_list)
+void __mem_cgroup_uncharge_list(struct list_head *page_list)
 {
 	struct uncharge_gather ug;
 	struct page *page;
 
-	if (mem_cgroup_disabled())
-		return;
-
 	uncharge_gather_clear(&ug);
 	list_for_each_entry(page, page_list, lru)
 		uncharge_page(page, &ug);
-- 
2.32.0.93.g670b81a890-goog
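
For completeness, the check behind mem_cgroup_disabled() follows the
kernel's jump-label idiom. A rough sketch of that general pattern,
using a hypothetical my_feature_key and boot parameter rather than
the actual memcg/cgroup definitions:

#include <linux/init.h>
#include <linux/jump_label.h>

/* Hypothetical default-on feature key, analogous to a controller key. */
static DEFINE_STATIC_KEY_TRUE(my_feature_key);

static inline bool my_feature_disabled(void)
{
	/* Patched jump/nop at each call site, not a load and compare. */
	return !static_branch_likely(&my_feature_key);
}

/* A boot parameter flips the key once and repatches every call site. */
static int __init my_feature_off_setup(char *str)
{
	static_branch_disable(&my_feature_key);
	return 1;
}
__setup("my_feature_off", my_feature_off_setup);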