Shutemov" To: akpm@linux-foundation.org, Andrea Arcangeli Cc: Zi Yan , Yang Shi , Ralph Campbell , John Hubbard , William Kucharski , linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" Subject: [PATCHv3, RESEND 2/8] khugepaged: Do not stop collapse if less than half PTEs are referenced Date: Mon, 13 Apr 2020 15:52:14 +0300 Message-Id: <20200413125220.663-3-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200413125220.663-1-kirill.shutemov@linux.intel.com> References: <20200413125220.663-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org __collapse_huge_page_swapin() checks the number of referenced PTE to decide if the memory range is hot enough to justify swapin. We have few problems with the approach: - It is way too late: we can do the check much earlier and safe time. khugepaged_scan_pmd() already knows if we have any pages to swap in and number of referenced page. - It stops collapse altogether if there's not enough referenced pages, not only swappingin. Fix it by making the right check early. We also can avoid additional page table scanning if khugepaged_scan_pmd() haven't found any swap entries. Signed-off-by: Kirill A. Shutemov Fixes: 0db501f7a34c ("mm, thp: convert from optimistic swapin collapsing to conservative") --- mm/khugepaged.c | 25 ++++++++++--------------- 1 file changed, 10 insertions(+), 15 deletions(-) diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 99bab7e4d05b..5968ec5ddd6b 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -902,11 +902,6 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm, .pgoff = linear_page_index(vma, address), }; - /* we only decide to swapin, if there is enough young ptes */ - if (referenced < HPAGE_PMD_NR/2) { - trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0); - return false; - } vmf.pte = pte_offset_map(pmd, address); for (; vmf.address < address + HPAGE_PMD_NR*PAGE_SIZE; vmf.pte++, vmf.address += PAGE_SIZE) { @@ -946,7 +941,7 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm, static void collapse_huge_page(struct mm_struct *mm, unsigned long address, struct page **hpage, - int node, int referenced) + int node, int referenced, int unmapped) { pmd_t *pmd, _pmd; pte_t *pte; @@ -1003,7 +998,8 @@ static void collapse_huge_page(struct mm_struct *mm, * If it fails, we release mmap_sem and jump out_nolock. * Continuing to collapse causes inconsistency. 
 	 */
-	if (!__collapse_huge_page_swapin(mm, vma, address, pmd, referenced)) {
+	if (unmapped && !__collapse_huge_page_swapin(mm, vma, address,
+						     pmd, referenced)) {
 		mem_cgroup_cancel_charge(new_page, memcg, true);
 		up_read(&mm->mmap_sem);
 		goto out_nolock;
@@ -1214,22 +1210,21 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 		    mmu_notifier_test_young(vma->vm_mm, address))
 			referenced++;
 	}
-	if (writable) {
-		if (referenced) {
+	if (!writable) {
+		result = SCAN_PAGE_RO;
+	} else if (!referenced || (unmapped && referenced < HPAGE_PMD_NR/2)) {
+		result = SCAN_LACK_REFERENCED_PAGE;
+	} else {
 			result = SCAN_SUCCEED;
 			ret = 1;
-		} else {
-			result = SCAN_LACK_REFERENCED_PAGE;
-		}
-	} else {
-		result = SCAN_PAGE_RO;
 	}
 out_unmap:
 	pte_unmap_unlock(pte, ptl);
 	if (ret) {
 		node = khugepaged_find_target_node();
 		/* collapse_huge_page will return with the mmap_sem released */
-		collapse_huge_page(mm, address, hpage, node, referenced);
+		collapse_huge_page(mm, address, hpage, node,
+				   referenced, unmapped);
 	}
 out:
 	trace_mm_khugepaged_scan_pmd(mm, page, writable, referenced,
-- 
2.26.0
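
[Editorial note] The standalone sketch below is only meant to illustrate the decision that the patch moves into khugepaged_scan_pmd(): a read-only range is rejected, a range that needs swap-in (unmapped != 0) must additionally have at least half of its PTEs referenced, and otherwise a single referenced PTE is enough to attempt the collapse. It is a userspace approximation, not kernel code: the HPAGE_PMD_NR value, the enum definition, and the scan_decision() helper are stand-ins invented here for illustration.

#include <stdio.h>

/* Stand-ins for kernel definitions; values are illustrative only. */
#define HPAGE_PMD_NR 512	/* PTEs per PMD-sized huge page on x86-64 */

enum scan_result {
	SCAN_SUCCEED,
	SCAN_PAGE_RO,
	SCAN_LACK_REFERENCED_PAGE,
};

/*
 * Approximates the check khugepaged_scan_pmd() performs after counting
 * referenced and unmapped (swapped-out) PTEs in the candidate range:
 *  - a non-writable range can never be collapsed;
 *  - if swap-in would be needed (unmapped != 0), require at least half
 *    of the PTEs to be referenced before paying the swap-in cost;
 *  - otherwise any referenced PTE is enough to try the collapse.
 */
static enum scan_result scan_decision(int writable, int referenced, int unmapped)
{
	if (!writable)
		return SCAN_PAGE_RO;
	if (!referenced || (unmapped && referenced < HPAGE_PMD_NR / 2))
		return SCAN_LACK_REFERENCED_PAGE;
	return SCAN_SUCCEED;
}

int main(void)
{
	/* No swap entries: one young PTE is enough to proceed. */
	printf("%d\n", scan_decision(1, 1, 0) == SCAN_SUCCEED);
	/* Swap-in needed but the range is barely referenced: skip it. */
	printf("%d\n", scan_decision(1, 10, 4) == SCAN_LACK_REFERENCED_PAGE);
	/* Swap-in needed and the range is hot: go ahead. */
	printf("%d\n", scan_decision(1, HPAGE_PMD_NR / 2, 4) == SCAN_SUCCEED);
	return 0;
}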