Subject: Re: [RFC v3 PATCH 0/5] mm: memcontrol: do memory reclaim when offlining
From: Yang Shi <yang.shi@linux.alibaba.com>
To: Johannes Weiner
Cc: mhocko@suse.com, shakeelb@google.com, akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Wed, 9 Jan 2019 12:36:11 -0800
In-Reply-To: <20190109193247.GA16319@cmpxchg.org>
On 1/9/19 11:32 AM, Johannes Weiner wrote:
> On Thu, Jan 10, 2019 at 03:14:40AM +0800, Yang Shi wrote:
>> We have some usecases which create and remove memcgs very frequently,
>> and the tasks in the memcg may just access files which are unlikely to
>> be accessed by anyone else. So, we prefer to force_empty the memcg
>> before rmdir'ing it to reclaim the page cache, so that it doesn't
>> accumulate and incur unnecessary memory pressure, since that pressure
>> may trigger direct reclaim and hurt latency-sensitive applications.
>
> We have kswapd for exactly this purpose. Can you lay out more details
> on why that is not good enough, especially in conjunction with tuning
> the watermark_scale_factor etc.?

watermark_scale_factor does help out for some workloads in general.
However, in some of our workloads memcgs may be created and then
allocate memory faster than kswapd can keep up with. And a tuning that
works for one kind of machine or workload may not work for others, yet
different kinds of workloads (for example, latency-sensitive and batch
jobs) may run on the same machine, so it is hard for us to guarantee
that all the workloads behave well together by relying on kswapd and
watermark_scale_factor alone.

Also, we know the page cache access pattern of some memcgs is one-off,
and those page caches are unlikely to be shared by others, so why not
just drop them when the memcg is offlined? Reclaiming those cold page
caches earlier would also improve the efficiency of memcg creation in
the long run.

> We've been pretty adamant that users shouldn't use drop_caches for
> performance, for example, and that the need to do this usually is
> indicative of a problem or suboptimal tuning in the VM subsystem.
>
> How is this different?

IMHO, that depends on the usecases and workloads.
As I mentioned above, if we know that the page caches of some memcgs
are referenced one-off and unlikely to be shared, why keep them around
to increase memory pressure?

Thanks,
Yang
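P.S. For reference, the force_empty-before-rmdir workflow described
above looks roughly like the sketch below. It assumes the cgroup v1
memory controller is mounted at /sys/fs/cgroup/memory; the memcg name
is made up for illustration.

```shell
#!/bin/sh
# Sketch: reclaim a memcg's charged pages (page cache included)
# before removing it, so the cache of a one-off job does not
# linger and add memory pressure later.
offline_memcg() {
    memcg_dir="$1"
    if [ -d "$memcg_dir" ]; then
        # Ask the kernel to reclaim as much as possible from this
        # memcg (cgroup v1 interface), then remove the directory.
        echo 1 > "$memcg_dir/memory.force_empty"
        rmdir "$memcg_dir"
    else
        echo "skip: $memcg_dir not present"
        return 1
    fi
}

# Example (hypothetical memcg name):
offline_memcg /sys/fs/cgroup/memory/batch-job-001
```

This is the userspace workaround; the patch series itself moves the
reclaim into the kernel's offlining path so userspace does not have to
issue force_empty explicitly.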