From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton, "Matthew Wilcox (Oracle)",
Shutemov" , Yin Fengwei , David Hildenbrand , Yu Zhao , Catalin Marinas , Will Deacon , Geert Uytterhoeven , Christian Borntraeger , Sven Schnelle , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , "H. Peter Anvin" Cc: Ryan Roberts , linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org, linux-s390@vger.kernel.org Subject: [PATCH v1 03/10] mm: Introduce try_vma_alloc_movable_folio() Date: Mon, 26 Jun 2023 18:14:23 +0100 Message-Id: <20230626171430.3167004-4-ryan.roberts@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230626171430.3167004-1-ryan.roberts@arm.com> References: <20230626171430.3167004-1-ryan.roberts@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Opportunistically attempt to allocate high-order folios in highmem, optionally zeroed. Retry with lower orders all the way to order-0, until success. Although, of note, order-1 allocations are skipped since a large folio must be at least order-2 to work with the THP machinery. The user must check what they got with folio_order(). This will be used to oportunistically allocate large folios for anonymous memory with a sensible fallback under memory pressure. For attempts to allocate non-0 orders, we set __GFP_NORETRY to prevent high latency due to reclaim, instead preferring to just try for a lower order. The same approach is used by the readahead code when allocating large folios. Signed-off-by: Ryan Roberts --- mm/memory.c | 33 +++++++++++++++++++++++++++++++++ 1 file changed, 33 insertions(+) diff --git a/mm/memory.c b/mm/memory.c index 367bbbb29d91..53896d46e686 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3001,6 +3001,39 @@ static vm_fault_t fault_dirty_shared_page(struct vm_fault *vmf) return 0; } +static inline struct folio *vma_alloc_movable_folio(struct vm_area_struct *vma, + unsigned long vaddr, int order, bool zeroed) +{ + gfp_t gfp = order > 0 ? __GFP_NORETRY | __GFP_NOWARN : 0; + + if (zeroed) + return vma_alloc_zeroed_movable_folio(vma, vaddr, gfp, order); + else + return vma_alloc_folio(GFP_HIGHUSER_MOVABLE | gfp, order, vma, + vaddr, false); +} + +/* + * Opportunistically attempt to allocate high-order folios, retrying with lower + * orders all the way to order-0, until success. order-1 allocations are skipped + * since a folio must be at least order-2 to work with the THP machinery. The + * user must check what they got with folio_order(). vaddr can be any virtual + * address that will be mapped by the allocated folio. + */ +static struct folio *try_vma_alloc_movable_folio(struct vm_area_struct *vma, + unsigned long vaddr, int order, bool zeroed) +{ + struct folio *folio; + + for (; order > 1; order--) { + folio = vma_alloc_movable_folio(vma, vaddr, order, zeroed); + if (folio) + return folio; + } + + return vma_alloc_movable_folio(vma, vaddr, 0, zeroed); +} + /* * Handle write page faults for pages that can be reused in the current vma * -- 2.25.1