Subject: Re: [PATCH 1/2] mm/migrate: optimize migrate_vma_setup() for holes
From: Ralph Campbell
CC: "Jerome Glisse", John Hubbard, "Christoph Hellwig", Jason Gunthorpe, Shuah Khan, Andrew Morton
Date: Fri, 10 Jul 2020 09:19:56 -0700
Message-ID: <72557537-3d64-7082-11f7-d70b41f7d0e6@nvidia.com>
References: <20200709165711.26584-1-rcampbell@nvidia.com> <20200709165711.26584-2-rcampbell@nvidia.com> <20200710063509.GE7902@in.ibm.com>
In-Reply-To: <20200710063509.GE7902@in.ibm.com>

On 7/9/20 11:35 PM, Bharata B Rao wrote:
> On Thu, Jul 09, 2020 at 09:57:10AM -0700, Ralph Campbell wrote:
>> When migrating system memory to device private memory, if the source
>> address range is a valid VMA range and there is no memory or a zero page,
>> the source PFN array is marked as valid but with no PFN. This lets the
>> device driver allocate private memory and clear it, then insert the new
>> device private struct page into the CPU's page tables when
>> migrate_vma_pages() is called. migrate_vma_pages() only inserts the
>> new page if the VMA is an anonymous range. There is no point in telling
>> the device driver to allocate device private memory and then not migrate
>> the page.
>> Instead, mark the source PFN array entries as not migrating to
>> avoid this overhead.
>>
>> Signed-off-by: Ralph Campbell
>> ---
>>  mm/migrate.c | 6 +++++-
>>  1 file changed, 5 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index b0125c082549..8aa434691577 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -2204,9 +2204,13 @@ static int migrate_vma_collect_hole(unsigned long start,
>>  {
>>  	struct migrate_vma *migrate = walk->private;
>>  	unsigned long addr;
>> +	unsigned long flags;
>> +
>> +	/* Only allow populating anonymous memory. */
>> +	flags = vma_is_anonymous(walk->vma) ? MIGRATE_PFN_MIGRATE : 0;
>>
>>  	for (addr = start; addr < end; addr += PAGE_SIZE) {
>> -		migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE;
>> +		migrate->src[migrate->npages] = flags;
>
> I see a few other such cases where we directly populate MIGRATE_PFN_MIGRATE
> w/o a pfn in migrate_vma_collect_pmd() and wonder why the vma_is_anonymous()
> check can't help there as well?
>
> 1. pte_none() check in migrate_vma_collect_pmd()
> 2. is_zero_pfn() check in migrate_vma_collect_pmd()
>
> Regards,
> Bharata.

For case 1, this seems like a useful addition. For case 2, the zero page is
only inserted if the VMA is marked read-only and anonymous, so I don't think
the check is needed.

I'll post a v2 with the change. Thanks for the suggestions!