From: Pavel Tatashin
Date: Mon, 23 Nov 2020 10:31:16 -0500
Subject: Re: Pinning ZONE_MOVABLE pages
To: David Rientjes
Cc: linux-mm, Andrew Morton, Vlastimil Babka, LKML, Michal Hocko, David Hildenbrand, Oscar Salvador, Dan Williams, Sasha Levin, Tyler Hicks, Joonsoo Kim, sthemmin@microsoft.com
X-Mailing-List: linux-kernel@vger.kernel.org

> > I've spoken with Stephen Hemminger, and he said that DPDK is moving in
> > the direction of using transparent huge pages instead of HugeTLBs,
> > which means that we need to allow at least anonymous, and anonymous
> > transparent huge pages to come from non-movable zones on demand.
>
> I'd like to know more about this use case. ZONE_MOVABLE is typically a
> great way to optimize for thp availability because, absent memory pinning,
> this memory can always be defragmented. So the idea is that DPDK will now
> allocate all of its thp from ZONE_NORMAL or only a small subset? Seems
> like an invitation for oom kill if the sizing of ZONE_NORMAL is
> insufficient.

The idea is to allocate only those THP and anon pages that are
long-term pinned from ZONE_NORMAL; the rest can still be allocated
from ZONE_MOVABLE.

> > Here is what I am proposing:
> >
> > 1.
> > Add a new flag that is passed through pin_user_pages_* down to
> > the fault handlers, and allow the fault handler to allocate from a
> > non-movable zone.
> >
> > A sample call stack through which this flag needs to be passed is:
> >
> > pin_user_pages_remote(gup_flags)
> >  __get_user_pages_remote(gup_flags)
> >   __gup_longterm_locked(gup_flags)
> >    __get_user_pages_locked(gup_flags)
> >     __get_user_pages(gup_flags)
> >      faultin_page(gup_flags)
> >       Convert gup_flags into fault_flags
> >       handle_mm_fault(fault_flags)
> >
> > From handle_mm_fault(), the stack diverges into various faults;
> > examples include:
> >
> > Transparent huge page:
> > handle_mm_fault(fault_flags)
> >  __handle_mm_fault(fault_flags)
> >   Create struct vm_fault vmf, using fault_flags to set the correct gfp_mask
> >   create_huge_pmd(vmf)
> >    do_huge_pmd_anonymous_page(vmf)
> >     mm_get_huge_zero_page(vma->vm_mm) -> the flag is lost here, so the
> >     flag from vmf.gfp_mask should be passed down as well.
> >
> > There are several other similar paths for transparent huge pages; there
> > is also a named (file-backed) path where allocation is done by the
> > filesystem, and the flag should be honored there as well, but it does
> > not have to be added at the same time.
> >
> > Regular pages:
> > handle_mm_fault(fault_flags)
> >  __handle_mm_fault(fault_flags)
> >   Create struct vm_fault vmf, using fault_flags to set the correct gfp_mask
> >   handle_pte_fault(vmf)
> >    do_anonymous_page(vmf)
> >     page = alloc_zeroed_user_highpage_movable(vma, vmf->address) ->
> >     change this call according to gfp_mask.
>
> This would likely be useful for AMD SEV as well, which requires guest
> pages to be pinned because the encryption algorithm depends on the host
> physical address. This ensures that identical plaintext in two pages
> doesn't result in the same ciphertext.