Subject: Re: [PATCHv2 5/8] khugepaged: Allow to callapse a page shared across fork
From: John Hubbard
To: "Kirill A. Shutemov", Andrea Arcangeli
CC: Zi Yan, Yang Shi, "Kirill A. Shutemov"
Date: Mon, 6 Apr 2020 14:30:07 -0700
Message-ID: <5a57635b-ed75-8f09-6f0c-5623f557fc55@nvidia.com>
In-Reply-To: <20200403112928.19742-6-kirill.shutemov@linux.intel.com>
References: <20200403112928.19742-1-kirill.shutemov@linux.intel.com>
 <20200403112928.19742-6-kirill.shutemov@linux.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 4/3/20 4:29 AM, Kirill A. Shutemov wrote:
> The page can be included into collapse as long as it doesn't have extra
> pins (from GUP or otherwise).

Hi Kirill,

s/callapse/collapse/ in the Subject line.

The commit message should mention that you're also removing a
VM_BUG_ON_PAGE().

>
> Signed-off-by: Kirill A. Shutemov
> ---
>  mm/khugepaged.c | 25 ++++++++++++++-----------
>  1 file changed, 14 insertions(+), 11 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 57ff287caf6b..1e7e6543ebca 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -581,11 +581,18 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
>  		}
>
>  		/*
> -		 * cannot use mapcount: can't collapse if there's a gup pin.
> -		 * The page must only be referenced by the scanned process
> -		 * and page swap cache.
> +		 * Check if the page has any GUP (or other external) pins.
> +		 *
> +		 * The page table that maps the page has been already unlinked
> +		 * from the page table tree and this process cannot get
> +		 * additinal pin on the page.

I'd recommend this wording instead, for the last two lines:

 		 * from the page table tree. Therefore, this page will not
 		 * normally receive any additional pins.

> +		 *
> +		 * New pins can come later if the page is shared across fork,
> +		 * but not for the this process. It is fine. The other process
> +		 * cannot write to the page, only trigger CoW.
>  		 */
> -		if (page_count(page) != 1 + PageSwapCache(page)) {
> +		if (total_mapcount(page) + PageSwapCache(page) !=
> +				page_count(page)) {

I think it's time to put that logic ("does this page have any extra
references?") into a small function. It's already duplicated once below,
and the documentation is duplicated as well.

I took a quick peek at this patch because, after adding the
pin_user_pages*() APIs earlier to complement get_user_pages*(), I had a
moment of doubt: what if I'd done that in a way that required additional
logic here? Fortunately, that's not the case: all pin_user_pages() calls
on huge pages take a "primary/real" refcount, in addition to scribbling
into the compound_pincount_ptr() area. whew. :)

>  			unlock_page(page);
>  			result = SCAN_PAGE_COUNT;
>  			goto out;
> @@ -672,7 +679,6 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
>  		} else {
>  			src_page = pte_page(pteval);
>  			copy_user_highpage(page, src_page, address, vma);
> -			VM_BUG_ON_PAGE(page_mapcount(src_page) != 1, src_page);
>  			release_pte_page(src_page);
>  			/*
>  			 * ptl mostly unnecessary, but preempt has to
> @@ -1206,12 +1212,9 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
>  			goto out_unmap;
>  		}
>
> -		/*
> -		 * cannot use mapcount: can't collapse if there's a gup pin.
> -		 * The page must only be referenced by the scanned process
> -		 * and page swap cache.
> -		 */
> -		if (page_count(page) != 1 + PageSwapCache(page)) {
> +		/* Check if the page has any GUP (or other external) pins */
> +		if (total_mapcount(page) + PageSwapCache(page) !=
> +				page_count(page)) {
>  			result = SCAN_PAGE_COUNT;
>  			goto out_unmap;
>  		}
>

thanks,
--
John Hubbard
NVIDIA
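
A minimal sketch of the small helper suggested in the review above, for
illustration only: the name is hypothetical (it does not come from this
patch or this thread), and it simply factors out the comparison used at
both call sites in mm/khugepaged.c. Because pin_user_pages() also elevates
the head page's refcount (as noted above), FOLL_PIN pins are detected by
this check just like get_user_pages() references are.

	/*
	 * Hypothetical helper, sketched from the check in the patch above.
	 * Returns true when the page is referenced only by its mappings
	 * (total_mapcount()) plus, possibly, the swap cache -- i.e. there
	 * are no GUP (or other external) pins outstanding.
	 */
	static inline bool page_refcount_suitable_for_collapse(struct page *page)
	{
		return page_count(page) ==
			total_mapcount(page) + PageSwapCache(page);
	}

The first call site, __collapse_huge_page_isolate(), could then read as
below, and khugepaged_scan_pmd() would use the same test with its own
"goto out_unmap" error path:

	if (!page_refcount_suitable_for_collapse(page)) {
		unlock_page(page);
		result = SCAN_PAGE_COUNT;
		goto out;
	}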