Subject: Re: [PATCHv3, RESEND 2/8] khugepaged: Do not stop collapse if less than half PTEs are referenced
Shutemov" , akpm@linux-foundation.org, Andrea Arcangeli Cc: Zi Yan , Ralph Campbell , John Hubbard , William Kucharski , linux-mm@kvack.org, linux-kernel@vger.kernel.org References: <20200413125220.663-1-kirill.shutemov@linux.intel.com> <20200413125220.663-3-kirill.shutemov@linux.intel.com> From: Yang Shi Message-ID: <902cad73-c3ef-c274-7483-c948167639e9@linux.alibaba.com> Date: Wed, 15 Apr 2020 13:31:33 -0700 User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:52.0) Gecko/20100101 Thunderbird/52.7.0 MIME-Version: 1.0 In-Reply-To: <20200413125220.663-3-kirill.shutemov@linux.intel.com> Content-Type: text/plain; charset=utf-8; format=flowed Content-Transfer-Encoding: 7bit Content-Language: en-US Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On 4/13/20 5:52 AM, Kirill A. Shutemov wrote: > __collapse_huge_page_swapin() checks the number of referenced PTE to > decide if the memory range is hot enough to justify swapin. > > We have few problems with the approach: > > - It is way too late: we can do the check much earlier and safe time. > khugepaged_scan_pmd() already knows if we have any pages to swap in > and number of referenced page. > > - It stops collapse altogether if there's not enough referenced pages, > not only swappingin. > > Fix it by making the right check early. We also can avoid additional > page table scanning if khugepaged_scan_pmd() haven't found any swap > entries. > > Signed-off-by: Kirill A. Shutemov > Fixes: 0db501f7a34c ("mm, thp: convert from optimistic swapin collapsing to conservative") > --- > mm/khugepaged.c | 25 ++++++++++--------------- > 1 file changed, 10 insertions(+), 15 deletions(-) Acked-by: Yang Shi Just a nit below. > > diff --git a/mm/khugepaged.c b/mm/khugepaged.c > index 99bab7e4d05b..5968ec5ddd6b 100644 > --- a/mm/khugepaged.c > +++ b/mm/khugepaged.c > @@ -902,11 +902,6 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm, > .pgoff = linear_page_index(vma, address), > }; > > - /* we only decide to swapin, if there is enough young ptes */ > - if (referenced < HPAGE_PMD_NR/2) { > - trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0); > - return false; > - } > vmf.pte = pte_offset_map(pmd, address); > for (; vmf.address < address + HPAGE_PMD_NR*PAGE_SIZE; > vmf.pte++, vmf.address += PAGE_SIZE) { > @@ -946,7 +941,7 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm, > static void collapse_huge_page(struct mm_struct *mm, > unsigned long address, > struct page **hpage, > - int node, int referenced) > + int node, int referenced, int unmapped) > { > pmd_t *pmd, _pmd; > pte_t *pte; > @@ -1003,7 +998,8 @@ static void collapse_huge_page(struct mm_struct *mm, > * If it fails, we release mmap_sem and jump out_nolock. > * Continuing to collapse causes inconsistency. 
>  	 */
> -	if (!__collapse_huge_page_swapin(mm, vma, address, pmd, referenced)) {
> +	if (unmapped && !__collapse_huge_page_swapin(mm, vma, address,
> +						     pmd, referenced)) {
>  		mem_cgroup_cancel_charge(new_page, memcg, true);
>  		up_read(&mm->mmap_sem);
>  		goto out_nolock;
> @@ -1214,22 +1210,21 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
>  		    mmu_notifier_test_young(vma->vm_mm, address))
>  			referenced++;
>  	}
> -	if (writable) {
> -		if (referenced) {
> +	if (!writable) {
> +		result = SCAN_PAGE_RO;
> +	} else if (!referenced || (unmapped && referenced < HPAGE_PMD_NR/2)) {
> +		result = SCAN_LACK_REFERENCED_PAGE;
> +	} else {
>  			result = SCAN_SUCCEED;
>  			ret = 1;

Shall we fix the indentation of the above two statements?

> -		} else {
> -			result = SCAN_LACK_REFERENCED_PAGE;
> -		}
> -	} else {
> -		result = SCAN_PAGE_RO;
>  	}
>  out_unmap:
>  	pte_unmap_unlock(pte, ptl);
>  	if (ret) {
>  		node = khugepaged_find_target_node();
>  		/* collapse_huge_page will return with the mmap_sem released */
> -		collapse_huge_page(mm, address, hpage, node, referenced);
> +		collapse_huge_page(mm, address, hpage, node,
> +				   referenced, unmapped);
>  	}
>  out:
>  	trace_mm_khugepaged_scan_pmd(mm, page, writable, referenced,
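
For anyone skimming the thread, a minimal userspace sketch of the check
this patch converges on may help. This is illustrative only: the
HPAGE_PMD_NR value and the scan_result enum below are simplified
stand-ins for the kernel definitions, and scan_decision() is a made-up
helper, not a kernel function.

/* Toy model of the post-patch decision in khugepaged_scan_pmd(). */
#include <stdio.h>

#define HPAGE_PMD_NR 512	/* PTEs per PMD-sized huge page with 4K pages */

enum scan_result { SCAN_SUCCEED, SCAN_PAGE_RO, SCAN_LACK_REFERENCED_PAGE };

/*
 * The range must be writable and have at least one referenced PTE; the
 * "half the PTEs are young" threshold now applies only when swapin
 * would actually be needed (unmapped > 0).
 */
static enum scan_result scan_decision(int writable, int referenced,
				      int unmapped)
{
	if (!writable)
		return SCAN_PAGE_RO;
	if (!referenced || (unmapped && referenced < HPAGE_PMD_NR / 2))
		return SCAN_LACK_REFERENCED_PAGE;
	return SCAN_SUCCEED;
}

int main(void)
{
	/* Fully mapped range: collapses even with few young PTEs. */
	printf("%d\n", scan_decision(1, 10, 0));  /* 0 = SCAN_SUCCEED */
	/* Range needing swapin with too few young PTEs: rejected during
	 * the scan, before any swapin work is started. */
	printf("%d\n", scan_decision(1, 10, 50)); /* 2 = SCAN_LACK_REFERENCED_PAGE */
	return 0;
}

And with unmapped == 0, collapse_huge_page() now skips
__collapse_huge_page_swapin() entirely, so the extra page table walk
only happens when there are swap entries to fault in.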