Date: Mon, 19 Mar 2018 09:53:55 +0100
From: Michal Hocko
To: Li RongQing
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	cgroups@vger.kernel.org, hannes@cmpxchg.org, Andrey Ryabinin
Subject: Re: [PATCH] mm/memcontrol.c: speed up to force empty a memory cgroup
Message-ID: <20180319085355.GQ23100@dhcp22.suse.cz>
References: <1521448170-19482-1-git-send-email-lirongqing@baidu.com>
In-Reply-To: <1521448170-19482-1-git-send-email-lirongqing@baidu.com>

On Mon 19-03-18 16:29:30, Li RongQing wrote:
> mem_cgroup_force_empty() tries to free only 32 (SWAP_CLUSTER_MAX) pages
> on each iteration. If a memory cgroup has lots of page cache, it will
> take many iterations to empty all of it, so increase the number of
> pages reclaimed per iteration to speed it up, same as in
> mem_cgroup_resize_limit().
>
> A simple test shows the difference:
>
> $ dd if=aaa of=bbb bs=1k count=3886080
> $ rm -f bbb
> $ time echo 100000000 > /cgroup/memory/test/memory.limit_in_bytes
>
> Before: 0m0.252s ===> After: 0m0.178s

Andrey was proposing something similar [1]. My main objection was that
his approach might lead to over-reclaim. Your approach is more
conservative because it just increases the batch size. The size is
still rather arbitrary, though. So is SWAP_CLUSTER_MAX, but that one is
at least a commonly used unit of reclaim in the MM code.

I would be really curious about a more detailed explanation of why a
larger batch yields better performance, because we are doing
SWAP_CLUSTER_MAX-sized batches at the lower reclaim level anyway.

[1] http://lkml.kernel.org/r/20180119132544.19569-2-aryabinin@virtuozzo.com

> Signed-off-by: Li RongQing
> ---
>  mm/memcontrol.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 670e99b68aa6..8910d9e8e908 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2480,7 +2480,7 @@ static int mem_cgroup_resize_limit(struct mem_cgroup *memcg,
>  		if (!ret)
>  			break;
>
> -		if (!try_to_free_mem_cgroup_pages(memcg, 1,
> +		if (!try_to_free_mem_cgroup_pages(memcg, 1024,
>  					GFP_KERNEL, !memsw)) {
>  			ret = -EBUSY;
>  			break;
> @@ -2610,7 +2610,7 @@ static int mem_cgroup_force_empty(struct mem_cgroup *memcg)
>  		if (signal_pending(current))
>  			return -EINTR;
>
> -		progress = try_to_free_mem_cgroup_pages(memcg, 1,
> +		progress = try_to_free_mem_cgroup_pages(memcg, 1024,
>  					GFP_KERNEL, true);
>  		if (!progress) {
>  			nr_retries--;
> --
> 2.11.0

-- 
Michal Hocko
SUSE Labs