Date: Wed, 28 Aug 2019 16:12:53 +0200
From: Michal Hocko
Shutemov" Cc: Yang Shi , Vlastimil Babka , kirill.shutemov@linux.intel.com, hannes@cmpxchg.org, rientjes@google.com, akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: Re: [v2 PATCH -mm] mm: account deferred split THPs into MemAvailable Message-ID: <20190828141253.GM28313@dhcp22.suse.cz> References: <20190826131538.64twqx3yexmhp6nf@box> <20190827060139.GM7538@dhcp22.suse.cz> <20190827110210.lpe36umisqvvesoa@box> <20190827120923.GB7538@dhcp22.suse.cz> <20190827121739.bzbxjloq7bhmroeq@box> <20190827125911.boya23eowxhqmopa@box> <20190828075708.GF7386@dhcp22.suse.cz> <20190828140329.qpcrfzg2hmkccnoq@box> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20190828140329.qpcrfzg2hmkccnoq@box> User-Agent: Mutt/1.10.1 (2018-07-13) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Wed 28-08-19 17:03:29, Kirill A. Shutemov wrote: > On Wed, Aug 28, 2019 at 09:57:08AM +0200, Michal Hocko wrote: > > On Tue 27-08-19 10:06:20, Yang Shi wrote: > > > > > > > > > On 8/27/19 5:59 AM, Kirill A. Shutemov wrote: > > > > On Tue, Aug 27, 2019 at 03:17:39PM +0300, Kirill A. Shutemov wrote: > > > > > On Tue, Aug 27, 2019 at 02:09:23PM +0200, Michal Hocko wrote: > > > > > > On Tue 27-08-19 14:01:56, Vlastimil Babka wrote: > > > > > > > On 8/27/19 1:02 PM, Kirill A. Shutemov wrote: > > > > > > > > On Tue, Aug 27, 2019 at 08:01:39AM +0200, Michal Hocko wrote: > > > > > > > > > On Mon 26-08-19 16:15:38, Kirill A. Shutemov wrote: > > > > > > > > > > Unmapped completely pages will be freed with current code. Deferred split > > > > > > > > > > only applies to partly mapped THPs: at least on 4k of the THP is still > > > > > > > > > > mapped somewhere. > > > > > > > > > Hmm, I am probably misreading the code but at least current Linus' tree > > > > > > > > > reads page_remove_rmap -> [page_remove_anon_compound_rmap ->\ deferred_split_huge_page even > > > > > > > > > for fully mapped THP. > > > > > > > > Well, you read correctly, but it was not intended. I screwed it up at some > > > > > > > > point. > > > > > > > > > > > > > > > > See the patch below. It should make it work as intened. > > > > > > > > > > > > > > > > It's not bug as such, but inefficientcy. We add page to the queue where > > > > > > > > it's not needed. > > > > > > > But that adding to queue doesn't affect whether the page will be freed > > > > > > > immediately if there are no more partial mappings, right? I don't see > > > > > > > deferred_split_huge_page() pinning the page. > > > > > > > So your patch wouldn't make THPs freed immediately in cases where they > > > > > > > haven't been freed before immediately, it just fixes a minor > > > > > > > inefficiency with queue manipulation? > > > > > > Ohh, right. I can see that in free_transhuge_page now. So fully mapped > > > > > > THPs really do not matter and what I have considered an odd case is > > > > > > really happening more often. > > > > > > > > > > > > That being said this will not help at all for what Yang Shi is seeing > > > > > > and we need a more proactive deferred splitting as I've mentioned > > > > > > earlier. > > > > > It was not intended to fix the issue. It's fix for current logic. I'm > > > > > playing with the work approach now. > > > > Below is what I've come up with. It appears to be functional. > > > > > > > > Any comments? > > > > > > Thanks, Kirill and Michal. 
> > > Thanks, Kirill and Michal. Doing the split more proactively is definitely
> > > an option for eliminating huge numbers of accumulated deferred split
> > > THPs. I did think about this approach before I came up with the
> > > memcg-aware approach. But I thought this approach has some problems:
> > >
> > > First of all, we can't prove whether this is a universal win for most
> > > workloads or not. For some workloads (as I mentioned about our use case)
> > > we do see a lot of THPs accumulated for a while, but for other workloads,
> > > e.g. a kernel build, they are very short-lived.
> > >
> > > Secondly, it may not be fair for workloads which don't generate too many
> > > deferred split THPs, or whose THPs are short-lived. Actually, the CPU
> > > time is abused by the excessive deferred split THP generators, isn't it?
> >
> > Yes, this is indeed true. Do we have any idea how much time that
> > actually is?
>
> For the uncontended case, splitting 1G worth of pages (2MiB x 512) takes a
> bit more than 50 ms in my setup. But that's the best-case scenario: pages
> not shared across multiple processes, no contention on the ptl, page lock,
> etc.

Any idea about a bad case?

> > > With memcg awareness, the deferred split THPs actually are isolated and
> > > capped per memcg. Long-lived deferred split THPs can't accumulate beyond
> > > the memcg limit. And the CPU time spent splitting them is accounted to
> > > the memcgs which generated that many deferred split THPs: whoever
> > > generates them pays for it. This sounds more fair, and we could achieve
> > > much better isolation.
> >
> > On the other hand, deferring the split, and therefore the freeing of a
> > non-trivial amount of memory, is a problem I consider quite serious,
> > because it affects not only the memcg workload which has to do the
> > reclaim but also other consumers of memory, since those large memory
> > blocks could otherwise serve higher-order allocations.
>
> Maybe instead of driving the split from the number of pages on the queue,
> we can take a hint from compaction when it struggles to get high-order
> pages?

This is still unbounded in time.

> We can also try to use schedule_delayed_work() instead of plain
> schedule_work() to give short-lived pages a chance to get freed before the
> splitting attempt.

No problem with that as long as this is well bounded in time.
-- 
Michal Hocko
SUSE Labs
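The schedule_delayed_work() alternative mentioned above would look roughly
like the sketch below. The names (split_queue_workfn, deferred_split_dwork,
kick_deferred_split) and the 100 ms delay are hypothetical, chosen only to
illustrate the API difference from schedule_work(); this is not code posted
in the thread.

#include <linux/workqueue.h>
#include <linux/jiffies.h>

/* Hypothetical grace period before we bother splitting queued THPs. */
#define DEFERRED_SPLIT_DELAY	msecs_to_jiffies(100)

static void split_queue_workfn(struct work_struct *work);
static DECLARE_DELAYED_WORK(deferred_split_dwork, split_queue_workfn);

static void split_queue_workfn(struct work_struct *work)
{
	/*
	 * Walk the deferred split queue and split the THPs that are still
	 * partly mapped; anything freed during the delay is already gone
	 * and costs no CPU time here.
	 */
}

/* Hypothetical hook, called when a THP is added to the deferred split queue. */
static void kick_deferred_split(void)
{
	/*
	 * schedule_delayed_work() instead of schedule_work(): give
	 * short-lived THPs a chance to be freed before any splitting work
	 * runs. If the work is already pending this does nothing, so it is
	 * cheap to call on every queueing.
	 */
	schedule_delayed_work(&deferred_split_dwork, DEFERRED_SPLIT_DELAY);
}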