From: Pavel Tatashin
Date: Mon, 23 Nov 2020 11:06:21 -0500
Subject: Re: Pinning ZONE_MOVABLE pages
In-Reply-To: <20201123090129.GD27488@dhcp22.suse.cz>
To: Michal Hocko
Cc: linux-mm, Andrew Morton, Vlastimil Babka, LKML, David Hildenbrand,
 Oscar Salvador, Dan Williams, Sasha Levin, Tyler Hicks, Joonsoo Kim,
 sthemmin@microsoft.com

On Mon, Nov 23, 2020 at 4:01 AM Michal Hocko wrote:
>
> On Fri 20-11-20 15:27:46, Pavel Tatashin wrote:
> > Recently, I encountered a hang during a memory hot-remove operation.
> > It turns out that the hang is caused by pinned user pages in
> > ZONE_MOVABLE.
> >
> > The kernel expects that all pages in ZONE_MOVABLE can be migrated,
> > but this is not the case if user applications, such as dpdk
> > libraries, have pinned them via vfio dma map.
>
> Long-term, or effectively time-unbound, pinning on zone movable is
> fundamentally broken. The sole reason for ZONE_MOVABLE's existence is
> to guarantee migratability. If the consumer of this memory cannot
> guarantee that, then it shouldn't use __GFP_MOVABLE in the first
> place.

Exactly, this is what I am trying to solve, and I started this thread
to figure out the best approach to this problem.

> > The kernel keeps trying to hot-remove them, but the refcount never
> > reaches zero, so we loop until the hardware watchdog kicks in.
>
> Yeah, the existing offlining behavior doesn't stop trying because the
> current implementation of the migration cannot tell the difference
> between short-term and long-term failures. Maybe the recent refcount
> for long-term pinning can be used to help out there.
>
> Anyway, I am wondering what you mean by the watchdog firing. The
> operation should trigger none of the soft, hard, or hung detectors.

You are right, hot-remove is a killable operation. In our case,
however, systemd stops petting the watchdog during kexec reboot to
ensure that the reboot finishes. Because we hot-remove memory during
shutdown, and the kernel is unable to complete the hot-remove within
60s, we get a watchdog reset.

> > We cannot do dma unmaps before hot-remove, because hot-remove is a
> > slow operation, and we have thousands of network flows handled by
> > dpdk that we just cannot suspend for the duration of the hot-remove
> > operation.
> >
> > The solution is for dpdk to allocate pages from a zone below
> > ZONE_MOVABLE, i.e. ZONE_NORMAL/ZONE_HIGHMEM, but this is not
> > possible: there is no user interface that allows applications to
> > select which zone their memory should come from.
>
> Our existing interface is __GFP_MOVABLE. It is the responsibility of
> the driver to know whether the resulting memory is migratable. Users
> shouldn't even have to think about that.

Sure, so let's migrate, and fault memory in from drivers, when
long-term pinning; that is points 1 and 2 in my proposal.

> > I've spoken with Stephen Hemminger, and he said that DPDK is moving
> > in the direction of using transparent huge pages instead of
> > HugeTLBs, which means that we need to allow at least anonymous
> > pages, and anonymous transparent huge pages, to come from
> > non-movable zones on demand.
>
> You can migrate before pinning.

Yes.

> > Here is what I am proposing:
> > 1. Add a new flag that is passed through pin_user_pages_* down to
> > fault handlers, and allow the fault handler to allocate from a
> > non-movable zone.
>
> gup already tries to deal with long-term pins on CMA regions and
> migrate to a non-CMA region. Have a look at __gup_longterm_locked.
> Migrating out of the movable zone sounds like a reasonable solution
> to me.

Yes, CMA is doing something similar, but it migrates pages out of CMA
and into the movable zone before pinning, to avoid fragmenting CMA.
What we need to do is migrate before pinning to a non-movable zone,
for all pages.
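To make that concrete, here is roughly what I have in mind, modeled on
the existing check_and_migrate_cma_pages() in mm/gup.c. This is an
illustrative sketch only: the function name is made up, and hugetlb,
compound-page handling, and the retry of the pin are omitted.

/*
 * Sketch, would live next to check_and_migrate_cma_pages() in
 * mm/gup.c: after gup has faulted the pages in, isolate anything that
 * landed in ZONE_MOVABLE and migrate it to a pinnable zone before the
 * long-term pin is taken.
 */
static long check_and_migrate_movable_pages(unsigned long nr_pages,
					    struct page **pages)
{
	struct migration_target_control mtc = {
		.nid = NUMA_NO_NODE,
		/* no __GFP_MOVABLE: the migration target is pinnable */
		.gfp_mask = GFP_USER | __GFP_NOWARN,
	};
	LIST_HEAD(movable_page_list);
	unsigned long i;

	for (i = 0; i < nr_pages; i++) {
		struct page *head = compound_head(pages[i]);

		if (page_zonenum(head) != ZONE_MOVABLE)
			continue;
		/* isolate_lru_page() returns 0 on success */
		if (!isolate_lru_page(head))
			list_add_tail(&head->lru, &movable_page_list);
	}

	if (list_empty(&movable_page_list))
		return 0;

	/* drop the references gup took; the caller re-pins afterwards */
	for (i = 0; i < nr_pages; i++)
		put_page(pages[i]);

	return migrate_pages(&movable_page_list, alloc_migration_target,
			     NULL, (unsigned long)&mtc, MIGRATE_SYNC,
			     MR_CONTIG_RANGE);
}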
> > 2. Add an internal move_pages_zone(), similar to the move_pages()
> > syscall, but instead of migrating to a different NUMA node, migrate
> > pages from ZONE_MOVABLE to another zone. Call move_pages_zone() on
> > demand prior to pinning pages, from vfio_pin_map_dma() for
> > instance.
>
> Why is the existing migration API insufficient?

Here I am talking about the internal implementation, not a user API.
We do not have a function that migrates pages in a user address space
from one zone to another; we only have a function, exposed as a
syscall, that migrates pages from one node to another.
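Nothing like this exists in the tree today, so the following is purely
hypothetical; the name and signature are made up for illustration,
loosely mirroring do_move_pages_to_node() in mm/migrate.c:

/*
 * Hypothetical internal API, not in the tree: migrate any pages in
 * [start, start + nr_pages) of the given address space that currently
 * sit in ZONE_MOVABLE into target_zone, so that a subsequent
 * long-term pin cannot block memory hot-remove.
 */
int move_pages_zone(struct mm_struct *mm, unsigned long start,
		    unsigned long nr_pages, enum zone_type target_zone);

/*
 * A caller such as vfio_pin_map_dma() would then do, conceptually:
 *
 *	ret = move_pages_zone(current->mm, vaddr, npage, ZONE_NORMAL);
 *	if (!ret)
 *		ret = pin_user_pages_remote(...);
 */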
> > 3. Perhaps it also makes sense to add an madvise() flag to
> > allocate pages from a non-movable zone. When a user application
> > knows that it will do DMA mapping and pin pages for a long time,
> > the memory that it allocates should never be migrated or
> > hot-removed, so make sure that it comes from the appropriate place.
> > The benefit of adding an madvise() flag is that we won't have to
> > deal with slow page migration at pin time, but the disadvantage is
> > that we would need to change the user interface.
>
> No, ZONE_MOVABLE, like the other zone types, is an internal
> implementation detail of the MM. I do not think we want to expose
> that to userspace and carve it into stone.

What I mean here is allowing users to guarantee that a page's PA will
stay the same: sort of a stronger mlock. mlock only guarantees that
the page is not swapped, but something like MADV_PINNED would
guarantee that the page is neither swapped nor migrated, so if a user
determines the PA of that page, the PA stays the same throughout the
life of the page. This does not expose the internal implementation in
any way; the guarantee could be honored in various ways, e.g. by
pinning, or by allocating from ZONE_NORMAL. The fact that we would
honor it by allocating memory from ZONE_NORMAL is an implementation
detail that would not be exposed to the user.

This is from DPDK's description:
https://software.intel.com/content/www/us/en/develop/articles/memory-in-dpdk-part-1-general-concepts.html

"Whenever a memory area is made available for DPDK to use, DPDK
figures out its physical address by asking the kernel at that time.
Since DPDK uses pinned memory, generally in the form of huge pages,
the physical address of the underlying memory area is not expected to
change, so the hardware can rely on those physical addresses to be
valid at all times, even if the memory itself is not used for some
time. DPDK then uses these physical addresses when preparing I/O
transactions to be done by the hardware, and configures the hardware
in such a way that the hardware is allowed to initiate DMA
transactions itself. This allows DPDK to avoid needless overhead and
to perform I/O entirely from user space."

I just think it is inefficient to first allocate memory from
ZONE_MOVABLE and later migrate it to ZONE_NORMAL. That said, I agree,
we probably should not be adding a new flag, at least not as part of
this work.
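For reference, the pinning path we keep referring to looks roughly
like this from userspace today. A minimal sketch, assuming
container_fd is a VFIO container that has already gone through group
attachment and VFIO_SET_IOMMU; it shows that nothing in this path lets
the application ask for a non-movable zone before the pages get
long-term pinned:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

static int pin_for_dma(int container_fd, size_t len, uint64_t iova)
{
	struct vfio_iommu_type1_dma_map map;
	void *buf;

	/* Plain anonymous memory; the kernel is free to place it in
	 * ZONE_MOVABLE, and the app has no way to say otherwise. */
	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return -1;

	memset(&map, 0, sizeof(map));
	map.argsz = sizeof(map);
	map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
	map.vaddr = (uintptr_t)buf;
	map.iova = iova;
	map.size = len;

	/* This long-term pins the backing pages for DMA; if they sit
	 * in ZONE_MOVABLE, hot-remove of that range will keep looping. */
	return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
}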