From: Zi Yan
To: Mike Kravetz
CC: Dave Hansen, Michal Hocko, Kirill A. Shutemov, Andrew Morton,
    Vlastimil Babka, Mel Gorman, John Hubbard, Mark Hairgrove,
    Nitin Gupta, David Nellans
Shutemov" , Andrew Morton , Vlastimil Babka , Mel Gorman , John Hubbard , Mark Hairgrove , Nitin Gupta , David Nellans Subject: Re: [RFC PATCH 00/31] Generating physically contiguous memory after page allocation Date: Tue, 19 Feb 2019 18:33:35 -0800 X-Mailer: MailMate (1.12.4r5594) Message-ID: In-Reply-To: References: <20190215220856.29749-1-zi.yan@sent.com> MIME-Version: 1.0 X-Originating-IP: [172.20.13.39] X-ClientProxiedBy: HQMAIL103.nvidia.com (172.20.187.11) To HQMAIL101.nvidia.com (172.20.187.10) Content-Type: text/plain; charset="utf-8"; format=flowed Content-Transfer-Encoding: quoted-printable DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1550630022; bh=l8yJ4E9Gd0N6lJ+NedKrddnw4p9+bSj8oBtj6wXakR4=; h=X-PGP-Universal:From:To:CC:Subject:Date:X-Mailer:Message-ID: In-Reply-To:References:MIME-Version:X-Originating-IP: X-ClientProxiedBy:Content-Type:Content-Transfer-Encoding; b=YeTT7dsqrxdOlG7U3ElpvLQar1W0yp4b0LPVU3EaXdmcu56b8WJocffDvHBE85mhg ADcEfFHLjTk7d69jU3S/tFDWmyfrfNWlMilYKwHv+CEINY0Jgn8ThNt2r7qV0UwmfM nxqtIJFI66ZPOUFr5S2n6eAogaLSgTztRmecHD2dHEo73OiAhzq8dGLSe6KRiT2sWs stlIpRJzpu15unEUOKkcXoByd/dfTKJv7I3dHzelhJXHoxZ6a7mxHT1zwlJxz6g98u TFewHzbeS0JSTLZ+OypZ1NVVEjqdPvYiHWGM8V0q2GGLnPNN9k9mw2NVTz1NwsjFwg I56qbsJ0xCSkA== Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On 19 Feb 2019, at 17:42, Mike Kravetz wrote: > On 2/15/19 2:08 PM, Zi Yan wrote: > > Thanks for working on this issue! > > I have not yet had a chance to take a look at the code. However, I do=20 > have > some general questions/comments on the approach. Thanks for replying. The code is very intrusive and has a lot of hacks,=20 so it is OK for us to discuss the general idea first. :) >> Patch structure >> ---- >> >> The patchset I developed to generate physically contiguous=20 >> memory/arbitrary >> sized pages merely moves pages around. There are three components in=20 >> this >> patchset: >> >> 1) a new page migration mechanism, called exchange pages, that=20 >> exchanges the >> content of two in-use pages instead of performing two back-to-back=20 >> page >> migration. It saves on overheads and avoids page reclaim and memory=20 >> compaction >> in the page allocation path, although it is not strictly required if=20 >> enough >> free memory is available in the system. >> >> 2) a new mechanism that utilizes both page migration and exchange=20 >> pages to >> produce physically contiguous memory/arbitrary sized pages without=20 >> allocating >> any new pages, unlike what khugepaged does. It works on per-VMA=20 >> basis, creating >> physically contiguous memory out of each VMA, which is virtually=20 >> contiguous. >> A simple range tree is used to ensure no two VMAs are overlapping=20 >> with each >> other in the physical address space. > > This appears to be a new approach to generating contiguous areas. =20 > Previous > attempts had relied on finding a contiguous area that can then be used=20 > for > various purposes including user mappings. Here, you take an existing=20 > mapping > and make it contiguous. [RFC PATCH 04/31] mm: add mem_defrag=20 > functionality > talks about creating a (VPN, PFN) anchor pair for each vma and then=20 > using > this pair as the base for creating a contiguous area. > > I'm curious, how 'fixed' is the anchor? As you know, there could be a > non-movable page in the PFN range. As a result, you will not be able=20 > to > create a contiguous area starting at that PFN. 
> This appears to be a new approach to generating contiguous areas. Previous
> attempts had relied on finding a contiguous area that can then be used for
> various purposes including user mappings. Here, you take an existing mapping
> and make it contiguous. [RFC PATCH 04/31] mm: add mem_defrag functionality
> talks about creating a (VPN, PFN) anchor pair for each vma and then using
> this pair as the base for creating a contiguous area.
>
> I'm curious, how 'fixed' is the anchor? As you know, there could be a
> non-movable page in the PFN range. As a result, you will not be able to
> create a contiguous area starting at that PFN. In such a case, do we try
> another PFN? I know this could result in much page shuffling. I'm just
> trying to figure out how we satisfy a user who really wants a contiguous
> area. Is there some method to keep trying?

Good question. The anchor is determined on a per-VMA basis and can be changed
easily, but in this patchset I used a very simple strategy: make all VMAs
non-overlapping in the physical address space to get maximum overall
contiguity, and do not change anchors even if non-movable pages are
encountered while generating physically contiguous pages.

Basically, the first VMA (VMA1) in the virtual address space has its anchor
at (VMA1_start_VPN, ZONE_start_PFN), the second VMA (VMA2) has its anchor at
(VMA2_start_VPN, ZONE_start_PFN + VMA1_size), and so on. This keeps all VMAs
non-overlapping in the physical address space during contiguous memory
generation.

When a non-movable page is encountered, the anchor is not changed, because no
matter whether we assign a new anchor or not, the run of contiguous pages
stops at the non-movable page. Picking a new anchor would take extra effort
to avoid overlapping the new anchor with existing contiguous pages, and any
overlap would nullify the existing contiguous pages.

To satisfy a user who wants a contiguous area of N pages, the minimal
distance between any two non-movable pages in the system memory has to be
larger than N pages; otherwise, nothing would work. If there is such an area
(PFN1, PFN1+N) in the physical address space, you can set the anchor to
(VPN_USER, PFN1) and use exchange_pages() to generate a contiguous area of N
pages. Alternatively, alloc_contig_pages(PFN1, PFN1+N, ...) could also work,
but only at page allocation time. It also requires that the system has N free
pages while alloc_contig_pages() is migrating the pages in (PFN1, PFN1+N)
away, or you need to swap pages out to make the space.

Let me know if this makes sense to you.

--
Best Regards,
Yan Zi
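Appendix: a small userspace sketch of the anchor arithmetic described above,
for anyone who wants to play with the numbers. All names and values in it are
made up for illustration; this is not code from the patchset.

/*
 * Toy model of the per-VMA anchor arithmetic: lay the VMAs out back to
 * back in the physical address space, starting at the beginning of the
 * zone, so that no two VMAs overlap.
 */
#include <stdio.h>

struct toy_vma {
	unsigned long start_vpn;   /* first virtual page number of the VMA */
	unsigned long nr_pages;    /* VMA size in pages */
	unsigned long anchor_vpn;  /* anchor pair: (VPN, PFN) */
	unsigned long anchor_pfn;
};

static void assign_anchors(struct toy_vma *vmas, int nr,
			   unsigned long zone_start_pfn)
{
	unsigned long next_pfn = zone_start_pfn;
	int i;

	for (i = 0; i < nr; i++) {
		vmas[i].anchor_vpn = vmas[i].start_vpn;
		vmas[i].anchor_pfn = next_pfn;
		next_pfn += vmas[i].nr_pages;  /* next VMA starts right after */
	}
}

/* The PFN a given page of the VMA should end up at, by migration or by
 * exchanging with whatever currently sits there. */
static unsigned long target_pfn(const struct toy_vma *vma, unsigned long vpn)
{
	return vma->anchor_pfn + (vpn - vma->anchor_vpn);
}

int main(void)
{
	struct toy_vma vmas[] = {
		{ .start_vpn = 0x1000, .nr_pages = 512 },   /* VMA1 */
		{ .start_vpn = 0x8000, .nr_pages = 1024 },  /* VMA2 */
	};
	unsigned long zone_start_pfn = 0x40000;
	int i;

	assign_anchors(vmas, 2, zone_start_pfn);

	for (i = 0; i < 2; i++)
		printf("VMA%d: anchor (VPN 0x%lx, PFN 0x%lx), last page -> PFN 0x%lx\n",
		       i + 1, vmas[i].anchor_vpn, vmas[i].anchor_pfn,
		       target_pfn(&vmas[i],
				  vmas[i].start_vpn + vmas[i].nr_pages - 1));
	return 0;
}

With anchors assigned this way, the target PFN of every page is fixed up
front, which is why hitting a non-movable page simply truncates the
contiguous run instead of triggering a search for a new anchor.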