Date: Fri, 23 Mar 2018 11:34:07 +0100
From: Michal Hocko
To: Li RongQing
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    cgroups@vger.kernel.org, hannes@cmpxchg.org, Andrey Ryabinin
Subject: Re: [PATCH] mm/memcontrol.c: speed up to force empty a memory cgroup
Message-ID: <20180323103407.GP23100@dhcp22.suse.cz>
References: <1521448170-19482-1-git-send-email-lirongqing@baidu.com>
In-Reply-To: <1521448170-19482-1-git-send-email-lirongqing@baidu.com>
User-Agent: Mutt/1.9.4 (2018-02-28)

On Mon 19-03-18 16:29:30, Li RongQing wrote:
> mem_cgroup_force_empty() tries to free only 32 (SWAP_CLUSTER_MAX)
> pages on each iteration. If a memory cgroup has lots of page cache,
> it takes many iterations to empty it all, so increase the number of
> pages reclaimed per iteration to speed it up, the same as in
> mem_cgroup_resize_limit().
>
> A simple test shows:
>
> $ dd if=aaa of=bbb bs=1k count=3886080
> $ rm -f bbb
> $ time echo 100000000 > /cgroup/memory/test/memory.limit_in_bytes
>
> Before: 0m0.252s ===> After: 0m0.178s
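For context, the loop under discussion looks roughly like this
(paraphrased from mm/memcontrol.c around v4.16 and simplified; the
writeback wait on the no-progress path is omitted):

static int mem_cgroup_force_empty(struct mem_cgroup *memcg)
{
        int nr_retries = MEM_CGROUP_RECLAIM_RETRIES;

        lru_add_drain_all();

        /* keep going until no page is charged to this memcg */
        while (nr_retries && page_counter_read(&memcg->memory)) {
                if (signal_pending(current))
                        return -EINTR;

                /*
                 * The request for 1 page is clamped up to
                 * SWAP_CLUSTER_MAX (32) inside
                 * try_to_free_mem_cgroup_pages(), so each pass targets
                 * only about 32 pages, hence the many iterations.
                 */
                if (!try_to_free_mem_cgroup_pages(memcg, 1, GFP_KERNEL,
                                                  true))
                        nr_retries--;
        }

        return 0;
}

mem_cgroup_resize_limit(), which the patch description mentions, drives
try_to_free_mem_cgroup_pages() in the same one-small-step-at-a-time way.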
One more note. I have only now realized that increasing the batch size
might have another negative side effect. Memcg reclaim bails out early
once the required target has been reclaimed, so we might skip memcgs in
the hierarchy and end up hammering one child much harder than the
others. Our current code is not ideal; we work around this with a
smaller target and by caching the last reclaimed memcg, so the
imbalance is at least less visible. This is not something that couldn't
be fixed, and maybe a 1M chunk would be acceptable as well. I dunno.
Let's focus on the main bottleneck first before we start making these
changes, though.

-- 
Michal Hocko
SUSE Labs
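To make the bail-out imbalance concrete, here is a toy user-space model
(plain C, not kernel code; all names and numbers are invented for
illustration) of reclaim that scans children round-robin from a cached
position and stops as soon as its target is met:

#include <stdio.h>

#define NCHILDREN  4
#define PAGES_EACH 1000L

static long pages[NCHILDREN];
static int cursor;              /* models the cached iterator position */

/* Free up to `target` pages, bailing out once the target is met. */
static long reclaim(long target)
{
        long freed = 0;
        int visited;

        for (visited = 0; visited < NCHILDREN && freed < target; visited++) {
                long want = target - freed;
                long take = pages[cursor] < want ? pages[cursor] : want;

                pages[cursor] -= take;
                freed += take;
                cursor = (cursor + 1) % NCHILDREN;  /* remember position */
        }
        return freed;
}

static void run(const char *label, long target, long total)
{
        long freed = 0;
        int i;

        for (i = 0; i < NCHILDREN; i++)
                pages[i] = PAGES_EACH;
        cursor = 0;

        while (freed < total)
                freed += reclaim(target);

        printf("%s: pages left per child:", label);
        for (i = 0; i < NCHILDREN; i++)
                printf(" %ld", pages[i]);
        printf("\n");
}

int main(void)
{
        run("batch 32  ", 32, 2000);    /* many small requests */
        run("batch 2000", 2000, 2000);  /* one big request */
        return 0;
}

Freeing 2000 pages in 32-page requests leaves all four children close
to even (488/488/488/520 here), while a single 2000-page request
empties the first two children it visits and never touches the other
two (0/0/1000/1000).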