Date: Mon, 22 Mar 2021 10:55:30 +0100
From: Michal Hocko
To: Johannes Weiner
Cc: Andrew Morton, Matthew Wilcox, Hugh Dickins, Zhou Guanghui, Zi Yan,
	Shakeel Butt, Roman Gushchin, linux-mm@kvack.org, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH] mm: page_alloc: fix memcg accounting leak in speculative cache lookup
Message-ID:
References: <20210319071547.60973-1-hannes@cmpxchg.org>
In-Reply-To: <20210319071547.60973-1-hannes@cmpxchg.org>

On Fri 19-03-21 03:15:47, Johannes Weiner wrote:
> When the freeing of a higher-order page block (non-compound) races
> with a speculative page cache lookup, __free_pages() needs to leave
> the first order-0 page in the chunk to the lookup, but free the buddy
> pages that the lookup doesn't know about separately.
> 
> However, if such a higher-order page is charged to a memcg (e.g. a
> !vmap kernel stack), only the first page of the block has page->memcg
> set. That means we'll uncharge only one order-0 page from the entire
> block, and leak the remainder.
> 
> Add a split_page_memcg() call to __free_pages() right before it starts
> taking the higher-order page apart and freeing its individual
> constituent pages. This ensures all of them will have the memcg
> linkage set up for correct uncharging. Also update the comments a bit
> to clarify what exactly is happening to the page during that race.
> 
> This bug is old and has its roots in the speculative page cache patch
> and the addition of cgroup accounting for kernel pages. There are no
> known user reports. A backport to stable is therefore not warranted.
> 
> Reported-by: Matthew Wilcox
> Signed-off-by: Johannes Weiner

Acked-by: Michal Hocko

Thanks!
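As a side note for anyone following along who hasn't looked at
split_page_memcg(): the fix works because the head page's memcg linkage
gets copied to every constituent page before the buddies are freed one
by one. Below is a rough user-space sketch of that accounting idea; the
toy_* types and helpers are made up for illustration and are not the
kernel structures or API.

#include <stdio.h>

/* Toy stand-ins for struct page and struct mem_cgroup -- not the kernel types. */
struct toy_memcg { long nr_charged; };
struct toy_page  { struct toy_memcg *memcg; };

/*
 * Rough analogue of what split_page_memcg() achieves here: make every
 * page of the order-N block carry the same memcg linkage as the head.
 */
static void toy_split_page_memcg(struct toy_page *head, unsigned int nr)
{
	unsigned int i;

	for (i = 1; i < nr; i++)
		head[i].memcg = head[0].memcg;
}

/* Uncharge one order-0 page, as the per-page free path would. */
static void toy_uncharge_page(struct toy_page *page)
{
	if (page->memcg)
		page->memcg->nr_charged--;
}

int main(void)
{
	struct toy_memcg cg = { .nr_charged = 8 };	/* order-3 block, 8 pages charged */
	struct toy_page block[8] = { { .memcg = &cg } };	/* only the head has the linkage */
	int i;

	/*
	 * Without this split, freeing the 8 pages one by one would
	 * uncharge only the head page and leak the other 7 charges.
	 */
	toy_split_page_memcg(block, 8);

	for (i = 0; i < 8; i++)
		toy_uncharge_page(&block[i]);

	printf("charges left: %ld\n", cg.nr_charged);	/* 0 with the split, 7 without */
	return 0;
}

Compile it with any C compiler and drop the toy_split_page_memcg() call
to see the 7-page leak the patch is fixing.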
> ---
>  mm/page_alloc.c | 33 +++++++++++++++++++++++++++------
>  1 file changed, 27 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c53fe4fa10bf..f4bd56656402 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5112,10 +5112,9 @@ static inline void free_the_page(struct page *page, unsigned int order)
>   * the allocation, so it is easy to leak memory.  Freeing more memory
>   * than was allocated will probably emit a warning.
>   *
> - * If the last reference to this page is speculative, it will be released
> - * by put_page() which only frees the first page of a non-compound
> - * allocation. To prevent the remaining pages from being leaked, we free
> - * the subsequent pages here. If you want to use the page's reference
> + * This function isn't a put_page(). Don't let the put_page_testzero()
> + * fool you, it's only to deal with speculative cache references. It
> + * WILL free pages directly. If you want to use the page's reference
>   * count to decide when to free the allocation, you should allocate a
>   * compound page, and use put_page() instead of __free_pages().
>   *
> @@ -5124,11 +5123,33 @@ static inline void free_the_page(struct page *page, unsigned int order)
>   */
>  void __free_pages(struct page *page, unsigned int order)
>  {
> -	if (put_page_testzero(page))
> +	/*
> +	 * Drop the base reference from __alloc_pages and free. In
> +	 * case there is an outstanding speculative reference, from
> +	 * e.g. the page cache, it will put and free the page later.
> +	 */
> +	if (likely(put_page_testzero(page))) {
>  		free_the_page(page, order);
> -	else if (!PageHead(page))
> +		return;
> +	}
> +
> +	/*
> +	 * The speculative reference will put and free the page.
> +	 *
> +	 * However, if the speculation was into a higher-order page
> +	 * chunk that isn't marked compound, the other side will know
> +	 * nothing about our buddy pages and only free the order-0
> +	 * page at the start of our chunk! We must split off and free
> +	 * the buddy pages here.
> +	 *
> +	 * The buddy pages aren't individually refcounted, so they
> +	 * can't have any pending speculative references themselves.
> +	 */
> +	if (!PageHead(page) && order > 0) {
> +		split_page_memcg(page, 1 << order);
>  		while (order-- > 0)
>  			free_the_page(page + (1 << order), order);
> +	}
>  }
>  EXPORT_SYMBOL(__free_pages);
> 
> -- 
> 2.30.1
> 

-- 
Michal Hocko
SUSE Labs