Date: Fri, 23 Mar 2018 11:08:39 +0100
From: Michal Hocko
To: "Li,Rongqing"
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, cgroups@vger.kernel.org, hannes@cmpxchg.org, Andrey Ryabinin
Subject: Re: Re: Re: [PATCH] mm/memcontrol.c: speed up to force empty a memory cgroup
Message-ID: <20180323100839.GO23100@dhcp22.suse.cz>
References: <1521448170-19482-1-git-send-email-lirongqing@baidu.com>
 <20180319085355.GQ23100@dhcp22.suse.cz>
 <2AD939572F25A448A3AE3CAEA61328C23745764B@BC-MAIL-M28.internal.baidu.com>
 <20180319103756.GV23100@dhcp22.suse.cz>
 <2AD939572F25A448A3AE3CAEA61328C2374589DC@BC-MAIL-M28.internal.baidu.com>
 <2AD939572F25A448A3AE3CAEA61328C2374832C1@BC-MAIL-M28.internal.baidu.com>
In-Reply-To: <2AD939572F25A448A3AE3CAEA61328C2374832C1@BC-MAIL-M28.internal.baidu.com>

On Fri 23-03-18 02:58:36, Li,Rongqing wrote:
> > -----Original Message-----
> > From: linux-kernel-owner@vger.kernel.org
> > [mailto:linux-kernel-owner@vger.kernel.org] on behalf of Li,Rongqing
> > Sent: 19 March 2018 18:52
> > To: Michal Hocko
> > Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org;
> > cgroups@vger.kernel.org; hannes@cmpxchg.org; Andrey Ryabinin
> > Subject: Re: Re: [PATCH] mm/memcontrol.c: speed up to force empty a
> > memory cgroup
> >
> > > -----Original Message-----
> > > From: Michal Hocko [mailto:mhocko@kernel.org]
> > > Sent: 19 March 2018 18:38
> > > To: Li,Rongqing
> > > Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org;
> > > cgroups@vger.kernel.org; hannes@cmpxchg.org; Andrey Ryabinin
> > > Subject: Re: Re: [PATCH] mm/memcontrol.c: speed up to force empty a
> > > memory cgroup
> > >
> > > On Mon 19-03-18 10:00:41, Li,Rongqing wrote:
> > > > > -----Original Message-----
> > > > > From: Michal Hocko [mailto:mhocko@kernel.org]
> > > > > Sent: 19 March 2018 16:54
> > > > > To: Li,Rongqing
> > > > > Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org;
> > > > > cgroups@vger.kernel.org; hannes@cmpxchg.org; Andrey Ryabinin
> > > > > Subject: Re: [PATCH] mm/memcontrol.c: speed up to force empty a
> > > > > memory cgroup
> > > > >
> > > > > On Mon 19-03-18 16:29:30, Li RongQing wrote:
> > > > > > mem_cgroup_force_empty() tries to free only 32 (SWAP_CLUSTER_MAX)
> > > > > > pages on each iteration. If a memory cgroup has a lot of page
> > > > > > cache, it takes many iterations to empty it all, so increase the
> > > > > > number of pages reclaimed per iteration to speed it up, the same
> > > > > > as in mem_cgroup_resize_limit().
> > > > > >
> > > > > > A simple test shows:
> > > > > >
> > > > > > $dd if=aaa of=bbb bs=1k count=3886080
> > > > > > $rm -f bbb
> > > > > > $time echo 100000000 >/cgroup/memory/test/memory.limit_in_bytes
> > > > > >
> > > > > > Before: 0m0.252s ===> after: 0m0.178s
> > > > >
> > > > > Andrey was proposing something similar [1]. My main objection was
> > > > > that his approach might lead to over-reclaim. Your approach is more
> > > > > conservative because it just increases the batch size. The size is
> > > > > still rather arbitrary. Same as SWAP_CLUSTER_MAX, but that one is a
> > > > > commonly used unit of reclaim in the MM code.
> > > > >
> > > > > I would be really curious about a more detailed explanation of why
> > > > > a larger batch yields better performance, because we are doing
> > > > > SWAP_CLUSTER_MAX batches at the lower reclaim level anyway.
> > > >
> > > > Although SWAP_CLUSTER_MAX is used at the lower level, the call
> > > > stack of try_to_free_mem_cgroup_pages is long; increasing
> > > > nr_to_reclaim reduces how many times we have to go through
> > > > do_try_to_free_pages, shrink_zones and shrink_node:
> > > >
> > > > mem_cgroup_resize_limit
> > > >   ---> try_to_free_mem_cgroup_pages: .nr_to_reclaim = max(1024, SWAP_CLUSTER_MAX),
> > > >     ---> do_try_to_free_pages
> > > >       ---> shrink_zones
> > > >         ---> shrink_node
> > > >           ---> shrink_node_memcg
> > > >             ---> shrink_list      <------- the loop happens here [times = 1024/32]
> > > >               ---> shrink_page_list
> > >
> > > Can you actually measure this to be the culprit? Because we should
> > > rethink our call path if it is too complicated/deep to perform well.
> > > Adding arbitrary batch sizes doesn't sound like a good way to go to me.
> >
> > Ok, I will try
>
> http://pasted.co/4edbcfff
>
> This is the result from the ftrace graph; it may prove that the deep
> call path leads to low performance.

Does it? Let's have a look at the condensed output:

 6)               |  try_to_free_mem_cgroup_pages() {
 6)               |    mem_cgroup_select_victim_node() {
 6)   0.320 us    |      mem_cgroup_node_nr_lru_pages();
 6)   0.151 us    |      mem_cgroup_node_nr_lru_pages();
 6)   2.190 us    |    }
 6)               |    do_try_to_free_pages() {
 6)               |      shrink_node() {
 6)               |        shrink_node_memcg() {
 6)               |          shrink_inactive_list() {
 6) + 23.131 us   |            shrink_page_list();
 6) + 33.960 us   |          }
 6) + 39.203 us   |        }
 6)               |        shrink_slab() {
 6) + 72.955 us   |        }
 6) ! 116.529 us  |      }
 6)               |      shrink_node() {
 6)   0.050 us    |        mem_cgroup_iter();
 6)   0.035 us    |        mem_cgroup_low();
 6)               |        shrink_node_memcg() {
 6)   3.955 us    |        }
 6)               |        shrink_slab() {
 6) + 54.296 us   |        }
 6) + 61.502 us   |      }
 6) ! 185.020 us  |    }
 6) ! 188.165 us  |  }

try_to_free_mem_cgroup_pages is the full memcg reclaim path, taking
188.165 us. The pure reclaim path is shrink_node, and that took
116 + 61 = 177 us. So we spend roughly 11 us on the way down. Is this
really making such a difference? How does the profile look when we do
larger batches?

> And when we increase the number of pages to reclaim in
> try_to_free_mem_cgroup_pages, it reduces the number of calls to
> shrink_slab, which saves time. In my case page cache occupies most of
> the memory and slab is small, but shrink_slab is called every time.

OK, that makes more sense! shrink_slab is clearly visible here, and it
is more expensive than the page reclaim. This is something to look
into. Thanks!
-- 
Michal Hocko
SUSE Labs
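
To make the batching argument in the thread concrete, here is a small,
self-contained illustration in plain userspace C. It is not kernel code and
not the actual patch; it only models the reasoning quoted above, using the
page-cache size from the dd test, the 32-page SWAP_CLUSTER_MAX batch, and
the 1024-page target mentioned for mem_cgroup_resize_limit(). The
simplifying assumption is that every top-level reclaim round frees exactly
its target.

/*
 * Illustrative model only -- NOT kernel code and not the submitted patch.
 * With a fixed per-round reclaim target, the number of top-level
 * try_to_free_mem_cgroup_pages() rounds -- and with it the number of
 * shrink_slab() invocations, which the trace shows are the expensive
 * part -- scales with total_pages / batch.
 */
#include <stdio.h>

static unsigned long rounds_needed(unsigned long pages, unsigned long batch)
{
	return (pages + batch - 1) / batch;	/* ceiling division */
}

int main(void)
{
	const unsigned long kib = 3886080UL;	/* from the dd test: 3886080 x 1 KB blocks */
	const unsigned long pages = kib / 4;	/* assuming 4 KiB pages */

	printf("pages to reclaim: %lu\n", pages);
	printf("batch   32 (SWAP_CLUSTER_MAX): %lu rounds\n",
	       rounds_needed(pages, 32));
	printf("batch 1024 (as in mem_cgroup_resize_limit): %lu rounds\n",
	       rounds_needed(pages, 1024));
	return 0;
}

Under this model the 32-page batch needs roughly 30,000 rounds while the
1024-page batch needs fewer than 1,000, which is consistent with the
observation that shrink_slab(), invoked on every round, dominates the cost
when page cache is large and slab is small.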