Date: Fri, 19 Feb 2021 10:16:08 +0100
From: Michal Hocko
To: Tim Chen
Cc: Andrew Morton, Johannes Weiner, Vladimir Davydov, Dave Hansen, Ying Huang, linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 3/3] mm: Fix missing mem cgroup soft limit tree updates
On Wed 17-02-21 12:41:36, Tim Chen wrote:
> On a per node basis, the mem cgroup soft limit tree on each node tracks
> how much a cgroup has exceeded its soft memory limit and sorts the
> cgroups by their excess usage. On page release, the trees are not
> updated right away; updates are deferred until we have gathered a batch
> of pages belonging to the same cgroup. This reduces the frequency of
> updating the soft limit tree and the locking of the tree and the
> associated cgroup.
>
> However, the batch of pages could contain pages from multiple nodes,
> while only the soft limit tree of one node would get updated. Change
> the logic so that we update the tree in batches of pages, with each
> batch of pages all in the same mem cgroup and memory node. An update is
> issued for the batch of pages collected so far whenever we encounter a
> page belonging to a different node. Note that this same-node batching
> logic is only relevant for a v1 cgroup that has a memory soft limit.

Let me paste the discussion related to this patch from the other reply:

> >> For patch 3 regarding the uncharge_batch, it is more of an
> >> observation that we should uncharge in batches of the same node,
> >> and not prompted by an actual workload. Thinking more about this,
> >> the worst that could happen is that we could have some entries in
> >> the soft limit tree that overestimate the memory used. The worst
> >> that could happen is a soft page reclaim on that cgroup. The
> >> overhead from the extra memcg event update could be more than a
> >> soft page reclaim pass. So let's drop patch 3 for now.
> >
> > I would still prefer to handle that in the soft limit reclaim path
> > and check each memcg for the soft limit reclaim excess before the
> > reclaim.
>
> Something like this?
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 8bddee75f5cb..b50cae3b2a1a 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -3472,6 +3472,14 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
>  		if (!mz)
>  			break;
>  
> +		/*
> +		 * Soft limit tree is updated based on memcg events sampling.
> +		 * We could have missed some updates on page uncharge and
> +		 * the cgroup is below soft limit. Skip useless soft reclaim.
> +		 */
> +		if (!soft_limit_excess(mz->memcg))
> +			continue;
> +
>  		nr_scanned = 0;
>  		reclaimed = mem_cgroup_soft_reclaim(mz->memcg, pgdat,

Yes, I meant something like this, but then I looked more closely and
this shouldn't be needed after all. __mem_cgroup_largest_soft_limit_node
already does all the work:

	if (!soft_limit_excess(mz->memcg) ||
	    !css_tryget(&mz->memcg->css))
		goto retry;

so this shouldn't really happen.
-- 
Michal Hocko
SUSE Labs