Date: Tue, 20 Mar 2018 13:29:57 -0700 (PDT)
From: David Rientjes
To: Michal Hocko
Cc: "Li,Rongqing", linux-kernel@vger.kernel.org, linux-mm@kvack.org, cgroups@vger.kernel.org, hannes@cmpxchg.org, Andrey Ryabinin
Subject: Re: Reply: Reply: [PATCH] mm/memcontrol.c: speed up to force empty a memory cgroup
In-Reply-To: <20180320083950.GD23100@dhcp22.suse.cz>
References: <1521448170-19482-1-git-send-email-lirongqing@baidu.com>
 <20180319085355.GQ23100@dhcp22.suse.cz>
 <2AD939572F25A448A3AE3CAEA61328C23745764B@BC-MAIL-M28.internal.baidu.com>
 <20180319103756.GV23100@dhcp22.suse.cz>
 <2AD939572F25A448A3AE3CAEA61328C2374589DC@BC-MAIL-M28.internal.baidu.com>
 <20180320083950.GD23100@dhcp22.suse.cz>

On Tue, 20 Mar 2018, Michal Hocko wrote:

> > > > > Although SWAP_CLUSTER_MAX is used at the lower level, the call
> > > > > stack of try_to_free_mem_cgroup_pages is too long; increasing
> > > > > nr_to_reclaim can reduce the number of calls to
> > > > > [do_try_to_free_pages, shrink_zones, shrink_node]
> > > > >
> > > > > mem_cgroup_resize_limit
> > > > >  ---> try_to_free_mem_cgroup_pages: .nr_to_reclaim = max(1024, SWAP_CLUSTER_MAX)
> > > > >   ---> do_try_to_free_pages
> > > > >    ---> shrink_zones
> > > > >     ---> shrink_node
> > > > >      ---> shrink_node_memcg
> > > > >       ---> shrink_list        <------- loop happens here [times = 1024/32]
> > > > >        ---> shrink_page_list
> > > >
> > > > Can you actually measure this to be the culprit? Because we should
> > > > rethink our call path if it is too complicated/deep to perform well.
> > > > Adding arbitrary batch sizes doesn't sound like a good way to go to me.
> > >
> > > Ok, I will try
> > >
> >
> > Looping in mem_cgroup_resize_limit(), which takes memcg_limit_mutex on
> > every iteration and so contends with limit changes in other cgroups (on
> > our systems, thousands), while calling try_to_free_mem_cgroup_pages()
> > with less than SWAP_CLUSTER_MAX is lame.
>
> Well, if the global lock is a bottleneck in your deployments then we
> can come up with something more clever, e.g. per-hierarchy locking, or
> even drop the lock for the reclaim altogether. If we reclaim in
> SWAP_CLUSTER_MAX batches, the potential over-reclaim risk is quite low
> when multiple users are shrinking the same (sub)hierarchy.
>

I don't believe this to be a bottleneck if nr_pages is increased in
mem_cgroup_resize_limit().

> > It would probably be best to limit the
> > nr_pages to the amount that needs to be reclaimed, though, rather than
> > over-reclaiming.
>
> How do you achieve that? The charging path is not synchronized with the
> shrinking one at all.
>

The point is to get a better guess at how many pages, up to
SWAP_CLUSTER_MAX, need to be reclaimed instead of 1.

> > If you wanted to be invasive, you could change page_counter_limit() to
> > return count - limit, fix up the callers that look for -EBUSY, and
> > then use max(val, SWAP_CLUSTER_MAX) as your nr_pages.
>
> I am not sure I understand
>

Have page_counter_limit() return the number of pages over the limit, i.e.
count - limit, since it compares the two anyway. Fix up existing callers
and then clamp that value to SWAP_CLUSTER_MAX in mem_cgroup_resize_limit().
It's a more accurate guess than either 1 or 1024.
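
To make that last suggestion concrete, here is a rough sketch of what the
resize loop in mem_cgroup_resize_limit() could look like. This is
illustrative only, not the actual patch: it assumes page_counter_limit()
is changed to return the number of pages over the new limit (count -
limit) instead of -EBUSY, and the local names excess/nr_pages are made up
for the example.

	/*
	 * Sketch of the body of the resize loop in mem_cgroup_resize_limit(),
	 * assuming page_counter_limit() is changed to return count - limit,
	 * with 0 meaning the new limit was applied successfully.
	 */
	do {
		unsigned long excess, nr_pages;

		if (signal_pending(current)) {
			ret = -EINTR;
			break;
		}

		mutex_lock(&memcg_limit_mutex);
		excess = page_counter_limit(counter, limit);
		mutex_unlock(&memcg_limit_mutex);

		if (!excess) {
			ret = 0;
			break;
		}

		/*
		 * Size the reclaim request by how far the counter is over the
		 * new limit rather than a hardcoded 1 (or 1024), so each pass
		 * through the full reclaim call chain does useful work.
		 */
		nr_pages = max_t(unsigned long, excess, SWAP_CLUSTER_MAX);
		if (!try_to_free_mem_cgroup_pages(memcg, nr_pages,
						  GFP_KERNEL, !memsw)) {
			ret = -EBUSY;
			break;
		}
	} while (true);

Whether nr_pages should be floored at SWAP_CLUSTER_MAX (as in the
max(val, SWAP_CLUSTER_MAX) wording above) or capped to it is the open
question in the discussion; the sketch follows the max() wording.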