From: Vlastimil Babka
To: Pavel Tatashin, linux-mm, Andrew Morton, LKML, Michal Hocko,
 David Hildenbrand, Oscar Salvador, Dan Williams, Sasha Levin,
 Tyler Hicks, Joonsoo Kim, sthemmin@microsoft.com, John Hubbard
Subject: Re: Pinning ZONE_MOVABLE pages
Date: Mon, 23 Nov 2020 16:04:07 +0100
X-Mailing-List: linux-kernel@vger.kernel.org

+CC John Hubbard

On 11/20/20 9:27 PM, Pavel Tatashin wrote:
> Recently, I encountered a hang that happens during a memory hot-remove
> operation. It turns out that the hang is caused by pinned user pages
> in ZONE_MOVABLE.
>
> The kernel expects that all pages in ZONE_MOVABLE can be migrated, but
> this is not the case if user applications, for example through dpdk
> libraries, have pinned them via vfio dma map. The kernel keeps trying
> to hot-remove them, but the refcount never gets to zero, so we are
> looping until the hardware watchdog kicks in.
>
> We cannot do dma unmaps before hot-remove, because hot-remove is a
> slow operation, and we have thousands of network flows handled by dpdk
> that we just cannot suspend for the duration of the hot-remove
> operation.
>
> The solution is for dpdk to allocate pages from a zone below
> ZONE_MOVABLE, i.e. ZONE_NORMAL/ZONE_HIGHMEM, but this is not possible.
> There is no user interface that allows applications to select what
> zone the memory should come from.
>
> I've spoken with Stephen Hemminger, and he said that DPDK is moving in
> the direction of using transparent huge pages instead of HugeTLBs,
> which means that we need to allow at least anonymous pages, and
> anonymous transparent huge pages, to come from non-movable zones on
> demand.
>
> Here is what I am proposing:
> 1. Add a new flag that is passed through pin_user_pages_* down to the
> fault handlers, and allow the fault handler to allocate from a
> non-movable zone.
>
> A sample function stack through which this info needs to be passed is:
>
> pin_user_pages_remote(gup_flags)
>   __get_user_pages_remote(gup_flags)
>     __gup_longterm_locked(gup_flags)
>       __get_user_pages_locked(gup_flags)
>         __get_user_pages(gup_flags)
>           faultin_page(gup_flags)
>             convert gup_flags into fault_flags
>             handle_mm_fault(fault_flags)
>
> From handle_mm_fault(), the stack diverges into various faults;
> examples include:
>
> Transparent huge page:
> handle_mm_fault(fault_flags)
>   __handle_mm_fault(fault_flags)
>     create struct vm_fault vmf, use fault_flags to set the correct gfp_mask
>     create_huge_pmd(vmf);
>       do_huge_pmd_anonymous_page(vmf);
>         mm_get_huge_zero_page(vma->vm_mm); -> the flag is lost here,
>         so the flag from vmf.gfp_mask should be passed down as well.
>
> There are several other similar paths in the transparent huge page
> code, and there is also the named (file-backed) path where allocation
> is handled by the filesystems; the flag should be honored there as
> well, but it does not have to be added at the same time.
>
> Regular pages:
> handle_mm_fault(fault_flags)
>   __handle_mm_fault(fault_flags)
>     create struct vm_fault vmf, use fault_flags to set the correct gfp_mask
>     handle_pte_fault(vmf)
>       do_anonymous_page(vmf);
>         page = alloc_zeroed_user_highpage_movable(vma, vmf->address);
>         -> change this call according to gfp_mask.
>
> The above only takes care of the case where the user application
> faults on the page during pinning time, but there are also cases where
> the pages already exist.

Makes sense, as this means no userspace change.

> 2. Add an internal move_pages_zone() similar to the move_pages()
> syscall, but instead of migrating to a different NUMA node, migrate
> pages from ZONE_MOVABLE to another zone.
> Call move_pages_zone() on demand prior to pinning pages, from
> vfio_pin_map_dma() for instance.

As others already said, migrating away before the longterm pin should
be the solution. IIRC it was one of the goals of the long-term pinning
API proposed a long time ago by Peter Zijlstra, I think? The
implementation that was merged relatively recently doesn't do that
(yet?) for all movable pages, just CMA, but it could.
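
To make that concrete, below is a rough, heavily simplified sketch of
how the existing CMA special case in __gup_longterm_locked() could be
generalized to cover all ZONE_MOVABLE pages. The function and callback
names are made up for illustration; dropping the GUP references before
migration, retrying the GUP afterwards, THP/hugetlb handling and the
isolation accounting that the real CMA path does are all left out.

/*
 * Illustrative sketch only, not mainline code (mm/gup.c context):
 * migrate any page that currently sits in ZONE_MOVABLE to a
 * non-movable zone before the long-term pin is taken, analogous to
 * what the CMA path already does.
 */
static struct page *alloc_non_movable_dst(struct page *page,
					  unsigned long private)
{
	/*
	 * GFP_USER has neither __GFP_HIGHMEM nor __GFP_MOVABLE set, so
	 * the destination page cannot be placed in ZONE_MOVABLE.
	 */
	return alloc_page(GFP_USER | __GFP_NOWARN);
}

static long check_and_migrate_movable_pages(struct page **pages,
					    unsigned long nr_pages)
{
	LIST_HEAD(movable_list);
	unsigned long i;

	for (i = 0; i < nr_pages; i++) {
		struct page *head = compound_head(pages[i]);

		if (zone_idx(page_zone(head)) != ZONE_MOVABLE)
			continue;
		/* isolate_lru_page() returns 0 on success */
		if (!isolate_lru_page(head))
			list_add_tail(&head->lru, &movable_list);
	}

	if (list_empty(&movable_list))
		return 0;

	/*
	 * The real code would drop the references taken by GUP before
	 * migrating and retry the whole GUP afterwards; the migrate
	 * reason is reused from the CMA path for lack of a better one.
	 */
	return migrate_pages(&movable_list, alloc_non_movable_dst, NULL, 0,
			     MIGRATE_SYNC, MR_CONTIG_RANGE);
}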
> 3. Perhaps it also makes sense to add an madvise() flag to allocate
> pages from a non-movable zone. When a user application knows that it
> will do DMA mapping, and pin pages for a long time, the memory that it
> allocates should never be migrated or hot-removed, so make sure that
> it comes from the appropriate place.
> The benefit of adding an madvise() flag is that we won't have to deal
> with slow page migration during pin time, but the disadvantage is that
> we would need to change the user interface.

It's best if we avoid involving userspace until it's shown that it's
insufficient.

> Before I start working on the above approaches, I would like to get an
> opinion from the community on an appropriate path forward for this
> problem: whether what I described sounds reasonable, or whether there
> are other ideas on how to address the problem that I am seeing.
>
> Thank you,
> Pasha
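
For completeness, and only to make point 3 above concrete, the
application side of such an madvise() flag would presumably look
roughly like the sketch below. The MADV_NOMOVABLE name and value are
invented for illustration; no such flag exists today, and as said
above I'd rather we exhaust the kernel-internal options first.

/* Hypothetical usage sketch; MADV_NOMOVABLE is not a real flag. */
#include <stddef.h>
#include <sys/mman.h>

#define MADV_NOMOVABLE	100	/* made-up value, for illustration only */

static void *alloc_long_term_dma_buffer(size_t len)
{
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return NULL;

	/*
	 * Hint that this range will be pinned for DMA for a long time,
	 * so it should be faulted in from a non-movable zone rather
	 * than ZONE_MOVABLE.
	 */
	if (madvise(buf, len, MADV_NOMOVABLE)) {
		munmap(buf, len);
		return NULL;
	}

	return buf;	/* later registered through the vfio dma map ioctl */
}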