Date: Tue, 23 Mar 2021 15:02:32 -0400
From: Johannes Weiner <hannes@cmpxchg.org>
To: Hugh Dickins
Cc: Andrew Morton, Matthew Wilcox, Michal Hocko, Zhou Guanghui,
	Zi Yan, Shakeel Butt, Roman Gushchin, linux-mm@kvack.org,
	cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
	kernel-team@fb.com
Subject: Re: [PATCH] mm: page_alloc: fix memcg accounting leak in speculative cache lookup
References: <20210319071547.60973-1-hannes@cmpxchg.org>

On Fri, Mar 19, 2021 at 06:52:58PM -0700, Hugh Dickins wrote:
> On Fri, 19 Mar 2021, Johannes Weiner wrote:
> 
> > When the freeing of a higher-order page block (non-compound) races
> > with a speculative page cache lookup, __free_pages() needs to leave
> > the first order-0 page in the chunk to the lookup but free the buddy
> > pages that the lookup doesn't know about separately.
> > 
> > However, if such a higher-order page is charged to a memcg (e.g. a
> > !vmap kernel stack), only the first page of the block has page->memcg
> > set. That means we'll uncharge only one order-0 page from the entire
> > block, and leak the remainder.
> > 
> > Add a split_page_memcg() to __free_pages() right before it starts
> > taking the higher-order page apart and freeing its individual
> > constituent pages. This ensures all of them will have the memcg
> > linkage set up for correct uncharging. Also update the comments a
> > bit to clarify what exactly is happening to the page during that
> > race.
> > 
> > This bug is old and has its roots in the speculative page cache
> > patch and the addition of cgroup accounting for kernel pages. There
> > are no known user reports. A backport to stable is therefore not
> > warranted.
> > 
> > Reported-by: Matthew Wilcox
> > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> 
> Acked-by: Hugh Dickins
> 
> to the split_page_memcg() addition etc., but a doubt just hit me on
> the original e320d3012d25 ("mm/page_alloc.c: fix freeing non-compound
> pages"): see comment below.
> 
> > ---
> >  mm/page_alloc.c | 33 +++++++++++++++++++++++++++------
> >  1 file changed, 27 insertions(+), 6 deletions(-)
> > 
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index c53fe4fa10bf..f4bd56656402 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -5112,10 +5112,9 @@ static inline void free_the_page(struct page *page, unsigned int order)
> >   * the allocation, so it is easy to leak memory. Freeing more memory
> >   * than was allocated will probably emit a warning.
> >   *
> > - * If the last reference to this page is speculative, it will be released
> > - * by put_page() which only frees the first page of a non-compound
> > - * allocation. To prevent the remaining pages from being leaked, we free
> > - * the subsequent pages here. If you want to use the page's reference
> > + * This function isn't a put_page(). Don't let the put_page_testzero()
> > + * fool you, it's only to deal with speculative cache references. It
> > + * WILL free pages directly. If you want to use the page's reference
> >   * count to decide when to free the allocation, you should allocate a
> >   * compound page, and use put_page() instead of __free_pages().
> >   *
> > @@ -5124,11 +5123,33 @@ static inline void free_the_page(struct page *page, unsigned int order)
> >   */
> >  void __free_pages(struct page *page, unsigned int order)
> >  {
> > -	if (put_page_testzero(page))
> > +	/*
> > +	 * Drop the base reference from __alloc_pages and free. In
> > +	 * case there is an outstanding speculative reference, from
> > +	 * e.g. the page cache, it will put and free the page later.
> > +	 */
> > +	if (likely(put_page_testzero(page))) {
> >  		free_the_page(page, order);
> > -	else if (!PageHead(page))
> > +		return;
> > +	}
> > +
> > +	/*
> > +	 * The speculative reference will put and free the page.
> > +	 *
> > +	 * However, if the speculation was into a higher-order page
> > +	 * chunk that isn't marked compound, the other side will know
> > +	 * nothing about our buddy pages and only free the order-0
> > +	 * page at the start of our chunk! We must split off and free
> > +	 * the buddy pages here.
> > +	 *
> > +	 * The buddy pages aren't individually refcounted, so they
> > +	 * can't have any pending speculative references themselves.
> > +	 */
> > +	if (!PageHead(page) && order > 0) {
> 
> The put_page_testzero() has released our reference to the first
> subpage of page: it's now under the control of the racing speculative
> lookup. So it seems to me unsafe to be checking PageHead(page) here:
> if it was actually a compound page, PageHead might already be cleared
> by now, and we doubly free its tail pages below? I think we need to
> use a "bool compound = PageHead(page)" on entry to __free_pages().

That's a good point.
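Spelling the race out, since it's subtle (illustrative timeline only,
not real code):

/*
 * CPU 0: __free_pages(page, order)     CPU 1: speculative lookup
 * --------------------------------     -------------------------
 * put_page_testzero(page) -> false     (still holds a reference)
 *
 *                                      put_page(page)
 *                                        -> refcount hits zero, the
 *                                           compound page is freed
 *                                           and PG_head is cleared
 *
 * !PageHead(page) -> true, but stale:
 *   we take the buddy-splitting path
 *   and free "tail" pages that were
 *   already freed above: double free
 */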
> > * > > @@ -5124,11 +5123,33 @@ static inline void free_the_page(struct page *page, unsigned int order) > > */ > > void __free_pages(struct page *page, unsigned int order) > > { > > - if (put_page_testzero(page)) > > + /* > > + * Drop the base reference from __alloc_pages and free. In > > + * case there is an outstanding speculative reference, from > > + * e.g. the page cache, it will put and free the page later. > > + */ > > + if (likely(put_page_testzero(page))) { > > free_the_page(page, order); > > - else if (!PageHead(page)) > > + return; > > + } > > + > > + /* > > + * The speculative reference will put and free the page. > > + * > > + * However, if the speculation was into a higher-order page > > + * chunk that isn't marked compound, the other side will know > > + * nothing about our buddy pages and only free the order-0 > > + * page at the start of our chunk! We must split off and free > > + * the buddy pages here. > > + * > > + * The buddy pages aren't individually refcounted, so they > > + * can't have any pending speculative references themselves. > > + */ > > + if (!PageHead(page) && order > 0) { > > The put_page_testzero() has released our reference to the first > subpage of page: it's now under the control of the racing speculative > lookup. So it seems to me unsafe to be checking PageHead(page) here: > if it was actually a compound page, PageHead might already be cleared > by now, and we doubly free its tail pages below? I think we need to > use a "bool compound = PageHead(page)" on entry to __free_pages(). That's a good point. > And would it be wrong to fix that too in this patch? All aboard the mm-page_alloc-fix-stuff.patch! No, I think it's fine to sqash them and treat it as a supplement to Matthew's original patch (although technically it didn't make the memcg leak any worse). > Though it ought then to be backported to 5.10 stable. Sounds good. It depends on split_page_memcg(), but that patch is straight-forward enough to backport as well. --- From f6f062a3ec46f4fb083dcf6792fde9723f18cfc5 Mon Sep 17 00:00:00 2001 From: Johannes Weiner Date: Fri, 19 Mar 2021 02:17:00 -0400 Subject: [PATCH] mm: page_alloc: fix allocation imbalances from speculative cache lookup When the freeing of a higher-order page block (non-compound) races with a speculative page cache lookup, __free_pages() needs to leave the first order-0 page in the chunk to the lookup but free the buddy pages that the lookup doesn't know about separately. There are currently two problems with it: 1. It checks PageHead() to see whether we're dealing with a compound page after put_page_testzero(). But the speculative lookup could have freed the page after our put and cleared PageHead, in which case we would double free the tail pages. To fix this, test PageHead before the put and cache the result for afterwards. 2. If such a higher-order page is charged to a memcg (e.g. !vmap kernel stack)), only the first page of the block has page->memcg set. That means we'll uncharge only one order-0 page from the entire block, and leak the remainder. To fix this, add a split_page_memcg() before it starts freeing tail pages, to ensure they all have page->memcg set up. While at it, also update the comments a bit to clarify what exactly is happening to the page during that race. 
---

From f6f062a3ec46f4fb083dcf6792fde9723f18cfc5 Mon Sep 17 00:00:00 2001
From: Johannes Weiner <hannes@cmpxchg.org>
Date: Fri, 19 Mar 2021 02:17:00 -0400
Subject: [PATCH] mm: page_alloc: fix allocation imbalances from speculative
 cache lookup

When the freeing of a higher-order page block (non-compound) races
with a speculative page cache lookup, __free_pages() needs to leave
the first order-0 page in the chunk to the lookup but free the buddy
pages that the lookup doesn't know about separately.

There are currently two problems with it:

1. It checks PageHead() to see whether we're dealing with a compound
   page after put_page_testzero(). But the speculative lookup could
   have freed the page after our put and cleared PageHead, in which
   case we would double free the tail pages.

   To fix this, test PageHead before the put and cache the result for
   afterwards.

2. If such a higher-order page is charged to a memcg (e.g. a !vmap
   kernel stack), only the first page of the block has page->memcg
   set. That means we'll uncharge only one order-0 page from the
   entire block, and leak the remainder.

   To fix this, add a split_page_memcg() before it starts freeing
   tail pages, to ensure they all have page->memcg set up.

While at it, also update the comments a bit to clarify what exactly
is happening to the page during that race.

Fixes: e320d3012d25 ("mm/page_alloc.c: fix freeing non-compound pages")
Reported-by: Hugh Dickins
Reported-by: Matthew Wilcox
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: <stable@vger.kernel.org> # 5.10+
---
 mm/page_alloc.c | 41 +++++++++++++++++++++++++++++++++++------
 1 file changed, 35 insertions(+), 6 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c53fe4fa10bf..8aab1e87fa3c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5112,10 +5112,9 @@ static inline void free_the_page(struct page *page, unsigned int order)
  * the allocation, so it is easy to leak memory. Freeing more memory
  * than was allocated will probably emit a warning.
  *
- * If the last reference to this page is speculative, it will be released
- * by put_page() which only frees the first page of a non-compound
- * allocation. To prevent the remaining pages from being leaked, we free
- * the subsequent pages here. If you want to use the page's reference
+ * This function isn't a put_page(). Don't let the put_page_testzero()
+ * fool you, it's only to deal with speculative cache references. It
+ * WILL free pages directly. If you want to use the page's reference
  * count to decide when to free the allocation, you should allocate a
  * compound page, and use put_page() instead of __free_pages().
  *
@@ -5124,11 +5123,41 @@ static inline void free_the_page(struct page *page, unsigned int order)
  */
 void __free_pages(struct page *page, unsigned int order)
 {
-	if (put_page_testzero(page))
+	bool compound = PageHead(page);
+
+	/*
+	 * Drop the base reference from __alloc_pages and free. In
+	 * case there is an outstanding speculative reference, from
+	 * e.g. the page cache, it will put and free the page later.
+	 */
+	if (likely(put_page_testzero(page))) {
 		free_the_page(page, order);
-	else if (!PageHead(page))
+		return;
+	}
+
+	/*
+	 * Ok, the speculative reference will put and free the page.
+	 *
+	 * - If this was an order-0 page, we're done.
+	 *
+	 * - If the page was compound, the other side will free the
+	 *   entire page and we're done here as well. Just note that
+	 *   freeing clears PG_head, so it can only be read reliably
+	 *   before the put_page_testzero().
+	 *
+	 * - If the page was of higher order but NOT marked compound,
+	 *   the other side will know nothing about our buddy pages
+	 *   and only free the order-0 page at the start of our block.
+	 *   We must split off and free the buddy pages here.
+	 *
+	 *   The buddy pages aren't individually refcounted, so they
+	 *   can't have any pending speculative references themselves.
+	 */
+	if (order > 0 && !compound) {
+		split_page_memcg(page, 1 << order);
 		while (order-- > 0)
 			free_the_page(page + (1 << order), order);
+	}
 }
 EXPORT_SYMBOL(__free_pages);
-- 
2.31.0
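For easier review, this is how __free_pages() reads with the patch
applied, reconstructed purely from the diff above (comments condensed;
see the diff for the full discussion of the three cases):

void __free_pages(struct page *page, unsigned int order)
{
	/* Must be read before the put: a racing free clears PG_head. */
	bool compound = PageHead(page);

	/*
	 * Drop the base reference from __alloc_pages. If nobody else
	 * holds a speculative reference, this frees the page directly.
	 */
	if (likely(put_page_testzero(page))) {
		free_the_page(page, order);
		return;
	}

	/*
	 * The speculative reference will put and free the page. But
	 * if the block is higher-order and not compound, the other
	 * side only knows about the order-0 page at its start: set up
	 * the memcg linkage on the buddy pages, then split them off
	 * and free them here.
	 */
	if (order > 0 && !compound) {
		split_page_memcg(page, 1 << order);
		while (order-- > 0)
			free_the_page(page + (1 << order), order);
	}
}
EXPORT_SYMBOL(__free_pages);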