Date: Wed, 28 Aug 2019 17:46:59 +0300
From: "Kirill A. Shutemov"
To: Michal Hocko
Cc: "Kirill A. Shutemov", Yang Shi, Vlastimil Babka, hannes@cmpxchg.org,
	rientjes@google.com, akpm@linux-foundation.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [v2 PATCH -mm] mm: account deferred split THPs into MemAvailable
Message-ID: <20190828144658.ar4fajfuffn6k2ki@black.fi.intel.com>
References: <20190827060139.GM7538@dhcp22.suse.cz>
 <20190827110210.lpe36umisqvvesoa@box>
 <20190827120923.GB7538@dhcp22.suse.cz>
 <20190827121739.bzbxjloq7bhmroeq@box>
 <20190827125911.boya23eowxhqmopa@box>
 <20190828075708.GF7386@dhcp22.suse.cz>
 <20190828140329.qpcrfzg2hmkccnoq@box>
 <20190828141253.GM28313@dhcp22.suse.cz>
In-Reply-To: <20190828141253.GM28313@dhcp22.suse.cz>

On Wed, Aug 28, 2019 at 02:12:53PM +0000, Michal Hocko wrote:
> On Wed 28-08-19 17:03:29, Kirill A. Shutemov wrote:
> > On Wed, Aug 28, 2019 at 09:57:08AM +0200, Michal Hocko wrote:
> > > On Tue 27-08-19 10:06:20, Yang Shi wrote:
> > > >
> > > > On 8/27/19 5:59 AM, Kirill A. Shutemov wrote:
> > > > > On Tue, Aug 27, 2019 at 03:17:39PM +0300, Kirill A. Shutemov wrote:
> > > > > > On Tue, Aug 27, 2019 at 02:09:23PM +0200, Michal Hocko wrote:
> > > > > > > On Tue 27-08-19 14:01:56, Vlastimil Babka wrote:
> > > > > > > > On 8/27/19 1:02 PM, Kirill A. Shutemov wrote:
> > > > > > > > > On Tue, Aug 27, 2019 at 08:01:39AM +0200, Michal Hocko wrote:
> > > > > > > > > > On Mon 26-08-19 16:15:38, Kirill A. Shutemov wrote:
> > > > > > > > > > > Completely unmapped pages will be freed with the current code. Deferred split
> > > > > > > > > > > only applies to partly mapped THPs: at least one 4k page of the THP is still
> > > > > > > > > > > mapped somewhere.
> > > > > > > > > > Hmm, I am probably misreading the code but at least current Linus' tree
> > > > > > > > > > reads page_remove_rmap -> page_remove_anon_compound_rmap ->
> > > > > > > > > > deferred_split_huge_page even for fully mapped THP.
> > > > > > > > > Well, you read it correctly, but it was not intended. I screwed it up at some
> > > > > > > > > point.
> > > > > > > > >
> > > > > > > > > See the patch below. It should make it work as intended.
> > > > > > > > >
> > > > > > > > > It's not a bug as such, but an inefficiency. We add the page to the queue
> > > > > > > > > where it's not needed.
> > > > > > > > But that adding to the queue doesn't affect whether the page will be freed
> > > > > > > > immediately if there are no more partial mappings, right? I don't see
> > > > > > > > deferred_split_huge_page() pinning the page.
> > > > > > > > So your patch wouldn't make THPs freed immediately in cases where they
> > > > > > > > weren't freed immediately before; it just fixes a minor
> > > > > > > > inefficiency with queue manipulation?
> > > > > > > Ohh, right. I can see that in free_transhuge_page now. So fully mapped
> > > > > > > THPs really do not matter and what I have considered an odd case is
> > > > > > > really happening more often.
> > > > > > >
> > > > > > > That being said, this will not help at all with what Yang Shi is seeing,
> > > > > > > and we need more proactive deferred splitting as I've mentioned
> > > > > > > earlier.
> > > > > > It was not intended to fix the issue. It's a fix for the current logic. I'm
> > > > > > playing with the work approach now.
> > > > > Below is what I've come up with. It appears to be functional.
> > > > >
> > > > > Any comments?
> > > >
> > > > Thanks, Kirill and Michal. Doing the split more proactively is definitely a
> > > > choice to eliminate hugely accumulated deferred split THPs. I did think about
> > > > this approach before I came up with the memcg aware approach.
> > > > But I thought this
> > > > approach has some problems:
> > > >
> > > > First of all, we can't prove whether this is a universal win for most
> > > > workloads or not. For some workloads (as I mentioned about our use case), we
> > > > do see a lot of THPs accumulated for a while, but they are very short-lived
> > > > in other workloads, e.g. a kernel build.
> > > >
> > > > Secondly, it may be unfair to workloads which don't generate too
> > > > many deferred split THPs, or whose THPs are short-lived. Actually, the CPU
> > > > time is abused by the excessive deferred split THP generators, isn't it?
> > >
> > > Yes, this is indeed true. Do we have any idea how much time that
> > > actually is?
> >
> > For the uncontended case, splitting 1G worth of pages (2MiB x 512) takes a
> > bit more than 50 ms in my setup. But that's the best-case scenario: pages not
> > shared across multiple processes, no contention on the ptl, page lock, etc.
>
> Any idea about a bad case?

Not really. How bad do you want it to get? How many processes share the
page? Access pattern? Locking situation?

Worst-case scenario: no progress on splitting due to pins or locking
conflicts (trylock failures).

> > > > With memcg awareness, the deferred split THPs are actually isolated and
> > > > capped per memcg. Long-lived deferred split THPs can't accumulate too
> > > > much due to the memcg limit. And the CPU time spent splitting them is
> > > > accounted to the memcgs which generate that many deferred split THPs:
> > > > whoever generates them pays for it. This sounds fairer and we could achieve
> > > > much better isolation.
> > >
> > > On the other hand, deferring the split and freeing of a non-trivial amount
> > > of memory is a problem I consider quite serious, because it affects not
> > > only the memcg workload which has to do the reclaim but also other
> > > consumers of memory, because large memory blocks could be used for
> > > higher-order allocations.
> >
> > Maybe instead of driving the split from the number of pages on the queue, we
> > can take a hint from compaction that it struggles to get high-order pages?
>
> This is still unbounded in time.

I'm not sure we should focus on time. We need to make sure that we don't
make overall system health worse. Who cares if we have pages on the deferred
split list as long as we don't have another user for the memory?

> > We can also try to use schedule_delayed_work() instead of plain
> > schedule_work() to give short-lived pages a chance to get freed before
> > a splitting attempt.
>
> No problem with that as long as it is well bounded in time.
> --
> Michal Hocko
> SUSE Labs

--
 Kirill A. Shutemov