Subject: Re: [PATCH] mm/vmscan: fix infinite loop in drop_slab_node
From: Vlastimil Babka
To: Chris Down, zangchunxin@bytedance.com
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Muchun Song
Date: Tue, 8 Sep 2020 17:42:05 +0200
Message-ID: <07c6ebf1-e2b3-11a2-538f-4ac542a4373b@suse.cz>
In-Reply-To: <20200908150945.GA1301981@chrisdown.name>
References: <20200908142456.89626-1-zangchunxin@bytedance.com> <20200908150945.GA1301981@chrisdown.name>

On 9/8/20 5:09 PM, Chris Down wrote:
> drop_caches by its very nature can be extremely performance intensive -- if
> someone wants to abort after trying too long, they can just send a
> TASK_KILLABLE signal,
> no? If exiting the loop and returning to usermode doesn't reliably work
> when doing that, then _that's_ something to improve, but this looks
> premature to me until that's demonstrated not to work.

Hm, there might be existing scripts (even though I dislike those) running
drop_caches periodically, and they are currently not set up to be killed, so
one day it might surprise someone. Dropping should be a one-time event, not
continual reclaim. Maybe we could be a bit smarter and e.g. double the
threshold currently hardcoded as "10" with each iteration?

> zangchunxin@bytedance.com writes:
>> In one drop caches action, only traverse memcg once maybe is better.
>> If user need more memory, they can do drop caches again.
>
> Can you please provide some measurements of the difference in reclamation
> in practice?