Date: Fri, 9 Nov 2018 13:11:28 +0000
From: Mel Gorman <mgorman@techsingularity.net>
To: "Kirill A. Shutemov"
Cc: Anthony Yznaga, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
        aarcange@redhat.com, aneesh.kumar@linux.ibm.com,
        akpm@linux-foundation.org, jglisse@redhat.com,
        khandual@linux.vnet.ibm.com, kirill.shutemov@linux.intel.com,
        mhocko@kernel.org, minchan@kernel.org, peterz@infradead.org,
        rientjes@google.com, vbabka@suse.cz, willy@infradead.org,
        ying.huang@intel.com, nitingupta910@gmail.com
Subject: Re: [RFC PATCH] mm: thp: implement THP reservations for anonymous memory
Message-ID: <20181109131128.GE23260@techsingularity.net>
References: <1541746138-6706-1-git-send-email-anthony.yznaga@oracle.com>
 <20181109121318.3f3ou56ceegrqhcp@kshutemo-mobl1>
In-Reply-To: <20181109121318.3f3ou56ceegrqhcp@kshutemo-mobl1>

On Fri, Nov 09, 2018 at 03:13:18PM +0300, Kirill A. Shutemov wrote:
> On Thu, Nov 08, 2018 at 10:48:58PM -0800, Anthony Yznaga wrote:
> > The basic idea as outlined by Mel Gorman in [2] is:
> >
> > 1) On first fault in a sufficiently sized range, allocate a huge page
> >    sized and aligned block of base pages. Map the base page
> >    corresponding to the fault address and hold the rest of the pages in
> >    reserve.
> > 2) On subsequent faults in the range, map the pages from the reservation.
> > 3) When enough pages have been mapped, promote the mapped pages and
> >    remaining pages in the reservation to a huge page.
> > 4) When there is memory pressure, release the unused pages from their
> >    reservations.
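
To make the life cycle described above concrete, here is a minimal userspace
model of the reservation policy. It is purely illustrative: the names
(struct thp_reservation, reservation_fault(), reservation_shrink(),
PROMOTION_THRESHOLD) are hypothetical and are not taken from the patch.

/*
 * Minimal userspace model of the quoted reservation life cycle.
 * Hypothetical names throughout; not the patch's kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

#define BASE_PAGE_SIZE		4096
#define PAGES_PER_HUGE_PAGE	512	/* 2M huge page backed by 4K base pages */
#define PROMOTION_THRESHOLD	4	/* the thread discusses defaulting this to 1 */

struct thp_reservation {
	unsigned long start;			/* huge-page aligned start of range */
	bool mapped[PAGES_PER_HUGE_PAGE];	/* which base pages are mapped so far */
	unsigned int nr_mapped;
	bool promoted;
};

/* Steps 1 and 2: fault in one base page, drawing it from the reservation. */
static void reservation_fault(struct thp_reservation *res, unsigned long addr)
{
	unsigned int idx = (addr - res->start) / BASE_PAGE_SIZE;

	if (res->promoted || res->mapped[idx])
		return;
	res->mapped[idx] = true;
	res->nr_mapped++;

	/*
	 * Step 3: once enough pages are mapped, promote the whole range.
	 * In the kernel this is where the PTE page table would be retracted
	 * and replaced with a huge PMD entry, which is what requires the
	 * exclusive mmap_sem and a TLB flush discussed below.
	 */
	if (res->nr_mapped >= PROMOTION_THRESHOLD) {
		res->promoted = true;
		printf("promoted %#lx after %u faults\n", res->start, res->nr_mapped);
	}
}

/* Step 4: under memory pressure, give back the unused part of the reserve. */
static void reservation_shrink(struct thp_reservation *res)
{
	if (!res->promoted)
		printf("released %u unused reserved pages at %#lx\n",
		       PAGES_PER_HUGE_PAGE - res->nr_mapped, res->start);
}

int main(void)
{
	struct thp_reservation res = { .start = 0x200000UL };

	reservation_fault(&res, 0x200000UL);	/* first fault reserves and maps one page */
	reservation_fault(&res, 0x201000UL);	/* later faults map from the reservation */
	reservation_shrink(&res);		/* memory pressure arrives before promotion */
	return 0;
}
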
> I haven't yet read the patch in details, but I'm skeptical about the
> approach in general for few reasons:
>
> - PTE page table retracting to replace it with huge PMD entry requires
>   down_write(mmap_sem). It makes the approach not practical for many
>   multi-threaded workloads.
>
>   I don't see a way to avoid exclusive lock here. I will be glad to
>   be proved otherwise.
>

That problem is somewhat fundamental to the mmap_sem itself and
conceivably it could be alleviated by range-locking (if that gets
completed). The other thing to bear in mind is the timing. If the
promotion is in-place due to reservations, there isn't the allocation
overhead and the hold times *should* be short.

> - The promotion will also require TLB flush which might be prohibitively
>   slow on big machines.
>

Which may be offset either by a) setting the threshold to 1 in cases
where the promotion should always be immediate or b) by reduced memory
consumption potentially avoiding premature reclaim in others.

> - Short living processes will fail to benefit from THP with the policy,
>   even with plenty of free memory in the system: no time to promote to THP
>   or, with synchronous promotion, cost will overweight the benefit.
>

Short-lived processes are also not going to be dominated by the TLB
refill cost so I think that's somewhat unfair. Potential means of
mediating this include per-task promotion thresholds via either prctl
or a task-wide policy inherited across exec.
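
As a sketch of what such a per-task knob could look like (entirely
hypothetical: no such prctl exists today, and the request number below is
made up):

/*
 * Hypothetical only: neither the request number nor the semantics exist in
 * the kernel. The sketch just shows the sort of per-task threshold being
 * suggested; a real implementation would have to decide how the value is
 * inherited across fork() and exec().
 */
#include <stdio.h>
#include <sys/prctl.h>

#define PR_SET_THP_PROMOTION_THRESHOLD	0x54485001	/* made-up value */

int main(void)
{
	/*
	 * A latency-sensitive task asks for immediate promotion (threshold 1),
	 * i.e. behaviour close to allocating the THP at first fault. A
	 * memory-conscious task would pick a larger value instead.
	 */
	if (prctl(PR_SET_THP_PROMOTION_THRESHOLD, 1, 0, 0, 0) == -1)
		perror("prctl");	/* expected: EINVAL on a real kernel */

	return 0;
}

The same threshold could equally be a system-wide default with a per-task
override; the sketch only illustrates the per-task form.
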
> The goal to reduce memory overhead of THP is admirable, but we need to be
> careful not to kill THP benefit itself. The approach will reduce number of
> THP mapped in the system and/or shift their allocation to later stage of
> process lifetime.
>

While I agree with you, I also had suggested in review that the
threshold initially be set to 1 so it can be experimented with by people
who are more concerned about memory consumption than reduced TLB misses.
While the general idea is not free of problems, I believe they are
fixable rather than fundamental.

> Prove me wrong with performance data. :)
>

Agreed that this should be accompanied by performance data but I think I
laid out a reasonable approach here. If the default is a threshold of 1
and that is shown to be performance-neutral then incremental progress
can be made as opposed to an "all or nothing" approach.

-- 
Mel Gorman
SUSE Labs