From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: akpm@linux-foundation.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kernel@vger.kernel.org,
	Christoph Hellwig,
	Jeff Layton
Subject: [PATCH v10 17/33] mm/memcg: Add folio wrappers for various functions
Date: Tue, 11 May 2021 22:47:19 +0100
Message-Id: <20210511214735.1836149-18-willy@infradead.org>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210511214735.1836149-1-willy@infradead.org>
References: <20210511214735.1836149-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add new wrapper functions folio_memcg(), lock_folio_memcg(),
unlock_folio_memcg(), mem_cgroup_folio_lruvec() and
count_memcg_folio_event().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig
Acked-by: Jeff Layton
---
 include/linux/memcontrol.h | 63 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 63 insertions(+)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index c193be760709..a3e627ea98e0 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -456,6 +456,11 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
 	return __page_memcg(page);
 }
 
+static inline struct mem_cgroup *folio_memcg(struct folio *folio)
+{
+	return page_memcg(&folio->page);
+}
+
 /*
  * page_memcg_rcu - locklessly get the memory cgroup associated with a page
  * @page: a pointer to the page struct
@@ -1058,6 +1063,15 @@ static inline void count_memcg_page_event(struct page *page,
 		count_memcg_events(memcg, idx, 1);
 }
 
+static inline void count_memcg_folio_event(struct folio *folio,
+		enum vm_event_item idx)
+{
+	struct mem_cgroup *memcg = folio_memcg(folio);
+
+	if (memcg)
+		count_memcg_events(memcg, idx, folio_nr_pages(folio));
+}
+
 static inline void count_memcg_event_mm(struct mm_struct *mm,
 					enum vm_event_item idx)
 {
@@ -1129,6 +1143,11 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
 	return NULL;
 }
 
+static inline struct mem_cgroup *folio_memcg(struct folio *folio)
+{
+	return NULL;
+}
+
 static inline struct mem_cgroup *page_memcg_rcu(struct page *page)
 {
 	WARN_ON_ONCE(!rcu_read_lock_held());
@@ -1477,6 +1496,22 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
 }
 #endif /* CONFIG_MEMCG */
 
+static inline void lock_folio_memcg(struct folio *folio)
+{
+	lock_page_memcg(&folio->page);
+}
+
+static inline void unlock_folio_memcg(struct folio *folio)
+{
+	unlock_page_memcg(&folio->page);
+}
+
+static inline struct lruvec *mem_cgroup_folio_lruvec(struct folio *folio,
+		struct pglist_data *pgdat)
+{
+	return mem_cgroup_page_lruvec(&folio->page, pgdat);
+}
+
 static inline void __inc_lruvec_kmem_state(void *p, enum node_stat_item idx)
 {
 	__mod_lruvec_kmem_state(p, idx, 1);
@@ -1544,6 +1579,34 @@ static inline struct lruvec *relock_page_lruvec_irqsave(struct page *page,
 	return lock_page_lruvec_irqsave(page, flags);
 }
 
+static inline struct lruvec *folio_lock_lruvec(struct folio *folio)
+{
+	return lock_page_lruvec(&folio->page);
+}
+
+static inline struct lruvec *folio_lock_lruvec_irq(struct folio *folio)
+{
+	return lock_page_lruvec_irq(&folio->page);
+}
+
+static inline struct lruvec *folio_lock_lruvec_irqsave(struct folio *folio,
+		unsigned long *flagsp)
+{
+	return lock_page_lruvec_irqsave(&folio->page, flagsp);
+}
+
+static inline struct lruvec *folio_relock_lruvec_irq(struct folio *folio,
+		struct lruvec *locked_lruvec)
+{
+	return relock_page_lruvec_irq(&folio->page, locked_lruvec);
+}
+
+static inline struct lruvec *folio_relock_lruvec_irqsave(struct folio *folio,
+		struct lruvec *locked_lruvec, unsigned long *flagsp)
+{
+	return relock_page_lruvec_irqsave(&folio->page, locked_lruvec, flagsp);
+}
+
 #ifdef CONFIG_CGROUP_WRITEBACK
 
 struct wb_domain *mem_cgroup_wb_domain(struct bdi_writeback *wb);
-- 
2.30.2
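
[Editor's note: for readers converting call sites, here is a minimal sketch of how
a caller might use the new wrappers. It is not part of the patch:
example_folio_end_write() is a hypothetical function, PGPGOUT is just an arbitrary
vm_event_item, and the sketch assumes CONFIG_MEMCG, since this patch only adds
count_memcg_folio_event() in that branch. Only the folio_*() wrappers themselves
come from the patch above.]

#include <linux/memcontrol.h>

/*
 * Illustrative only -- a hypothetical writeback-completion helper showing
 * the folio wrappers delegating to the existing page-based memcg API.
 */
static void example_folio_end_write(struct folio *folio)
{
	/* Wraps lock_page_memcg(&folio->page). */
	lock_folio_memcg(folio);

	/*
	 * Accounts folio_nr_pages(folio) events against the folio's memcg;
	 * the helper itself checks folio_memcg() for NULL.
	 */
	count_memcg_folio_event(folio, PGPGOUT);

	/* Wraps unlock_page_memcg(&folio->page). */
	unlock_folio_memcg(folio);
}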