Date: Thu, 7 Oct 2021 11:20:44 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: Vasily Averin
Cc: Michal Hocko, Johannes Weiner, Vladimir Davydov, Andrew Morton,
	Cgroups, Linux MM, linux-kernel@vger.kernel.org, kernel@openvz.org,
	Mel Gorman, Uladzislau Rezki
Subject: Re: memcg memory accounting in vmalloc is broken
Message-ID: <20211007102044.GR3959@techsingularity.net>
References: <953ef8e2-1221-a12c-8f71-e34e477a52e8@virtuozzo.com>
In-Reply-To: <953ef8e2-1221-a12c-8f71-e34e477a52e8@virtuozzo.com>

On Thu, Oct 07, 2021 at 11:50:44AM +0300, Vasily Averin wrote:
> On 10/7/21 11:16 AM, Michal Hocko wrote:
> > Cc Mel and Uladzislau
> >
> > On Thu 07-10-21 10:13:23, Michal Hocko wrote:
> >> On Thu 07-10-21 11:04:40, Vasily Averin wrote:
> >>> vmalloc was switched to __alloc_pages_bulk but it does not account
> >>> the memory to memcg.
> >>>
> >>> Is it known issue perhaps?
> >>
> >> No, I think this was just overlooked. Definitely doesn't look
> >> intentional to me.
>
> I use following patch as a quick fix,
> it helps though it is far from ideal and can be optimized.

Thanks Vasily. This papers over the problem but it could certainly be
optimized. At minimum:

1. Test (memcg_kmem_enabled() && (gfp & __GFP_ACCOUNT)) in the function
   preamble and store the result in a bool.

2. Avoid the temptation to batch the accounting, because if the accounting
   fails there is no information on how many pages could be allocated
   before the limits were hit. I guess you could pre-charge the pages and
   uncharge the number of pages that failed to be allocated, but that
   should be a separate patch.

3. If an allocation fails due to memcg accounting, break out of the loop,
   because all remaining bulk allocations are also likely to fail.

As it's not vmalloc's fault, I would suggest the patch have

Fixes: 387ba26fb1cb ("mm/page_alloc: add a bulk page allocator")

and

Cc: <stable@vger.kernel.org>

Note the Cc should just be in the patch and not mailed directly to stable@,
as it'll simply trigger a form letter about the patch having to be merged
to mainline first.

-- 
Mel Gorman
SUSE Labs
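
[ For illustration, a rough sketch of points 1-3 above. This is NOT the
  patch that was merged upstream: bulk_alloc_sketch() stands in for the
  allocation loop in __alloc_pages_bulk() in mm/page_alloc.c, and
  take_page_from_pcplist() is a made-up placeholder for the existing
  per-cpu list fast path. memcg_kmem_enabled(), __GFP_ACCOUNT,
  __memcg_kmem_charge_page() and __free_pages() are the real kernel
  interfaces; everything else is simplified. ]

#include <linux/gfp.h>
#include <linux/memcontrol.h>
#include <linux/mm.h>

/* Made-up placeholder for the existing per-cpu page list fast path. */
static struct page *take_page_from_pcplist(void);

static unsigned long bulk_alloc_sketch(gfp_t gfp, unsigned long nr_pages,
				       struct page **page_array)
{
	/* Point 1: evaluate the memcg test once, in the function preamble. */
	bool account = memcg_kmem_enabled() && (gfp & __GFP_ACCOUNT);
	unsigned long nr_populated = 0;
	struct page *page;

	while (nr_populated < nr_pages) {
		page = take_page_from_pcplist();
		if (unlikely(!page))
			break;

		/*
		 * Points 2 and 3: charge page by page rather than in one
		 * batch, and if the memcg limit is hit, give this page back
		 * and stop -- the rest of the bulk request would fail for
		 * the same reason.
		 */
		if (account && unlikely(__memcg_kmem_charge_page(page, gfp, 0))) {
			__free_pages(page, 0);
			break;
		}

		page_array[nr_populated++] = page;
	}

	return nr_populated;
}

[ Charging inside the bulk allocator rather than in vmalloc also fits the
  "not vmalloc's fault" point: the regular single-page path already honours
  __GFP_ACCOUNT, so the bulk path is the one that needs to catch up. ]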