Subject: Re: Pinning ZONE_MOVABLE pages
To: Pavel Tatashin, linux-mm, Andrew Morton, Vlastimil Babka, LKML,
 Michal Hocko, David Hildenbrand, Oscar Salvador, Dan Williams,
 Sasha Levin, Tyler Hicks, Joonsoo Kim
From: John Hubbard
Message-ID: <89be454b-4464-0c50-c910-917373f29ba5@nvidia.com>
Date: Mon, 23 Nov 2020 22:49:35 -0800
X-Mailing-List: linux-kernel@vger.kernel.org

On 11/20/20 12:27 PM, Pavel Tatashin wrote:
> Recently, I encountered a hang that happens during a memory hot-remove
> operation. It turns out that the hang is caused by pinned user pages in
> ZONE_MOVABLE.
>
> The kernel expects that all pages in ZONE_MOVABLE can be migrated, but
> this is not the case if a user application, for example through the
> dpdk libraries, has pinned them via vfio dma map. The kernel keeps
> trying to hot-remove them, but the refcount never drops to zero, so we
> loop until the hardware watchdog kicks in.
>
> We cannot do dma unmaps before hot-remove, because hot-remove is a slow
> operation, and we have thousands of network flows handled by dpdk that
> we simply cannot suspend for the duration of the hot-remove operation.
>
> The solution is for dpdk to allocate pages from a zone below
> ZONE_MOVABLE, i.e.
> ZONE_NORMAL/ZONE_HIGHMEM, but this is not possible: there is no
> existing user interface that allows applications to select which zone
> the memory should come from.
>
> I've spoken with Stephen Hemminger, and he said that DPDK is moving in
> the direction of using transparent huge pages instead of HugeTLBs,
> which means that we need to allow at least anonymous pages, and
> anonymous transparent huge pages, to come from non-movable zones on
> demand.
>
> Here is what I am proposing:
>
> 1. Add a new flag that is passed through pin_user_pages_* down to the
> fault handlers, and allow the fault handler to allocate from a
> non-movable zone.

I like where the discussion so far (in the other threads) has taken
this. And the current plan also implies, I think, that you can probably
avoid any new flags at all: just check that both FOLL_LONGTERM and
FOLL_PIN are set, and if they are, then make your attempt to migrate
away from ZONE_MOVABLE (see the rough sketch near the end of this
reply).

> A sample function stack through which this info needs to be passed is
> this:
>
> pin_user_pages_remote(gup_flags)
>   __get_user_pages_remote(gup_flags)
>     __gup_longterm_locked(gup_flags)
>       __get_user_pages_locked(gup_flags)
>         __get_user_pages(gup_flags)
>           faultin_page(gup_flags)
>             convert gup_flags into fault_flags
>             handle_mm_fault(fault_flags)

I'm pleased that the gup_flags have pretty much been plumbed through all
the main places that they were missing, so there shouldn't be too much
required at this point.

> From handle_mm_fault(), the stack diverges into various faults;
> examples include:
>
> Transparent Huge Page
> handle_mm_fault(fault_flags)
>   __handle_mm_fault(fault_flags)
>     create struct vm_fault vmf, use fault_flags to set the correct gfp_mask
>     create_huge_pmd(vmf);
>       do_huge_pmd_anonymous_page(vmf);
>         mm_get_huge_zero_page(vma->vm_mm); -> the flag is lost here, so
>         the flag from vmf.gfp_mask should be passed down as well.
>
> There are several other similar paths for transparent huge pages, and
> there is also a named (file-backed) path where allocation is done by
> filesystems; the flag should be honored there as well, but it does not
> have to be added at the same time.
>
> Regular Pages
> handle_mm_fault(fault_flags)
>   __handle_mm_fault(fault_flags)
>     create struct vm_fault vmf, use fault_flags to set the correct gfp_mask
>     handle_pte_fault(vmf)
>       do_anonymous_page(vmf);
>         page = alloc_zeroed_user_highpage_movable(vma, vmf->address); ->
>         change this call according to gfp_mask.
>
> The above only takes care of the case where the user application faults
> on the page at pinning time, but there are also cases where the pages
> already exist.
>
> 2. Add an internal move_pages_zone(), similar to the move_pages()
> syscall, but instead of migrating to a different NUMA node, migrate
> pages from ZONE_MOVABLE to another zone. Call move_pages_zone() on
> demand prior to pinning pages, from vfio_pin_map_dma() for instance.
>
> 3. Perhaps it also makes sense to add a madvise() flag to allocate
> pages from a non-movable zone. When a user application knows that it
> will do DMA mapping and pin pages for a long time, the memory that it
> allocates should never be migrated or hot-removed, so make sure that it
> comes from the appropriate place. The benefit of adding a madvise()
> flag is that we won't have to deal with slow page migration at pin
> time, but the disadvantage is that we would need to change the user
> interface.
>
> Before I start working on the above approaches, I would like to get an
> opinion from the community on an appropriate path forward for this
> problem: whether what I described sounds reasonable, or whether there
> are other ideas on how to address the problem that I am seeing.
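To make the "no new flag" point above a bit more concrete, here is a
rough, untested sketch of the kind of check I mean. It is only
illustrative: maybe_migrate_movable_pages() and migrate_movable_pages()
are made-up names rather than existing kernel functions, and the real
check would live somewhere in the long-term pinning path in gup.

/*
 * Untested sketch only -- not existing kernel code.  The idea is that
 * no new gup flag is needed: a long-term pin is already identified by
 * FOLL_PIN | FOLL_LONGTERM, and that combination is what should trigger
 * migration out of ZONE_MOVABLE before the pin is finalized.
 */
static long maybe_migrate_movable_pages(struct page **pages, long nr_pages,
					unsigned int gup_flags)
{
	long i;

	/* Only long-term pins need to stay out of ZONE_MOVABLE. */
	if ((gup_flags & (FOLL_PIN | FOLL_LONGTERM)) !=
	    (FOLL_PIN | FOLL_LONGTERM))
		return 0;

	for (i = 0; i < nr_pages; i++) {
		if (zone_idx(page_zone(pages[i])) == ZONE_MOVABLE) {
			/*
			 * migrate_movable_pages() is a placeholder name
			 * for whatever helper ends up doing the actual
			 * migration to a non-movable zone.
			 */
			return migrate_movable_pages(pages, nr_pages);
		}
	}
	return 0;
}

The only point of the sketch is that the decision can key off the
gup_flags that pin_user_pages*() already receives, rather than plumbing
a brand-new flag all the way down into the fault handlers.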
I'm also in favor of avoiding (3) for now, and maybe forever, depending
on how it goes.

Good luck... :)

thanks,
--
John Hubbard
NVIDIA