Subject: Re: [PATCH v2] mm/vmscan: fix infinite loop in drop_slab_node
To: zangchunxin@bytedance.com, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Muchun Song
References: <20200909152047.27905-1-zangchunxin@bytedance.com>
From: Vlastimil Babka
Message-ID: <16906d44-9e3c-76a1-f1a9-ced61e865467@suse.cz>
Date: Wed, 9 Sep 2020 19:59:44 +0200
In-Reply-To: <20200909152047.27905-1-zangchunxin@bytedance.com>

On 9/9/20 5:20 PM, zangchunxin@bytedance.com wrote:
> From: Chunxin Zang
> 
> On our server there are about 10k memcgs on one machine, and they use
> memory very frequently. When I trigger drop_caches, the writing process
> loops forever in drop_slab_node.
> 
> There are two reasons:
> 
> 1. We have too many memcgs. Even if each memcg frees only one object,
>    the sum over all memcgs is still bigger than the threshold of 10.
> 
> 2. One pass over all memcgs takes a long time, so the memcgs traversed
>    first have already accumulated many freeable objects again by the
>    time the pass ends. On the next pass the freed count exceeds 10
>    once more.
> 
> We can see the stuck writers through 'ps':
> 
> root:~# ps -aux | grep drop
> root  357956 ... R Aug25 21119854:55 echo 3 > /proc/sys/vm/drop_caches
> root 1771385 ... R Aug16 21146421:17 echo 3 > /proc/sys/vm/drop_caches
> root 1986319 ... R 18:56      117:27 echo 3 > /proc/sys/vm/drop_caches
> root 2002148 ... R Aug24     5720:39 echo 3 > /proc/sys/vm/drop_caches
> root 2564666 ... R 18:59      113:58 echo 3 > /proc/sys/vm/drop_caches
> root 2639347 ... R Sep03     2383:39 echo 3 > /proc/sys/vm/drop_caches
> root 3904747 ... R 03:35      993:31 echo 3 > /proc/sys/vm/drop_caches
> root 4016780 ... R Aug21     7882:18 echo 3 > /proc/sys/vm/drop_caches
> 
> Using bpftrace to follow the 'freed' value in drop_slab_node:
> 
> root:~# bpftrace -e 'kprobe:drop_slab_node+70 {@ret=hist(reg("bp")); }'
> Attaching 1 probe...
> ^C
> 
> @ret:
> [64, 128)       1 |                                                    |
> [128, 256)     28 |                                                    |
> [256, 512)    107 |@                                                   |
> [512, 1K)     298 |@@@                                                 |
> [1K, 2K)      613 |@@@@@@@                                             |
> [2K, 4K)     4435 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
> [4K, 8K)      442 |@@@@@                                               |
> [8K, 16K)     299 |@@@                                                 |
> [16K, 32K)    100 |@                                                   |
> [32K, 64K)    139 |@                                                   |
> [64K, 128K)    56 |                                                    |
> [128K, 256K)   26 |                                                    |
> [256K, 512K)    2 |                                                    |
> 
> In the while loop we can check whether a TASK_KILLABLE signal is
> pending and, if so, break out of the loop.

That's definitely a good change, thanks. I would just maybe consider:

- Test in the memcg iteration loop? If you have 10k memcgs as you
  mention, it can still take a long time before the test is reached.

- Exit also on other signals such as SIGABRT, SIGTERM? If I write to
  drop_caches and think it takes too long, I would prefer to kill it by
  ctrl-c and not just kill -9. Dunno if the canonical way of testing
  for this is if (signal_pending(current)) or something else. A sketch
  combining both points is below.

- IMHO it's still worth bailing out in your scenario even without a
  signal, e.g. by doubling the threshold on each pass. But that can be
  a separate patch.

Thanks!
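
Something like the following, just to illustrate what I mean (untested;
I'm assuming the inner loop body is the shrink_slab() call as in the
current mm/vmscan.c, and that signal_pending() is the right test here):

void drop_slab_node(int nid)
{
	unsigned long freed;

	do {
		struct mem_cgroup *memcg = NULL;

		freed = 0;
		memcg = mem_cgroup_iter(NULL, NULL, NULL);
		do {
			/*
			 * Check between memcgs, not once per pass, so a
			 * pending signal is noticed even midway through
			 * 10k memcgs; signal_pending() rather than
			 * fatal_signal_pending() lets ctrl-c work too.
			 */
			if (signal_pending(current)) {
				/* drop the css ref the iterator holds */
				mem_cgroup_iter_break(NULL, memcg);
				return;
			}
			freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
		} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
	} while (freed > 10);
}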
> Signed-off-by: Chunxin Zang
> Signed-off-by: Muchun Song
> ---
> changelogs in v2:
> 1) Break the loop by checking for a TASK_KILLABLE signal.
> 
>  mm/vmscan.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index b6d84326bdf2..c3ed8b45d264 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -704,6 +704,9 @@ void drop_slab_node(int nid)
>  	do {
>  		struct mem_cgroup *memcg = NULL;
>  
> +		if (fatal_signal_pending(current))
> +			return;
> +
>  		freed = 0;
>  		memcg = mem_cgroup_iter(NULL, NULL, NULL);
>  		do {
> 
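And for completeness, the "bail out even without a signal" idea from my
last point could be an exponentially rising threshold instead of the
fixed 10, roughly like this (again an untested sketch with the same
assumption about the shrink_slab() loop body; the exact starting bar
and growth rate are arbitrary choices):

void drop_slab_node(int nid)
{
	unsigned long freed;
	int shift = 0;

	do {
		struct mem_cgroup *memcg = NULL;

		if (fatal_signal_pending(current))
			return;

		freed = 0;
		memcg = mem_cgroup_iter(NULL, NULL, NULL);
		do {
			freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
		} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);

		/*
		 * Each pass doubles the number of freed objects needed to
		 * keep looping: background churn across 10k memcgs can
		 * hold "freed" above any fixed constant forever, but not
		 * above an exponentially rising bar, so the loop is
		 * guaranteed to terminate.
		 */
	} while ((freed >> shift++) > 1);
}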