From: Doug Berger
To: Andrew Morton
Cc: Jonathan Corbet, Rob Herring, Krzysztof Kozlowski, Frank Rowand,
    Mike Kravetz, Muchun Song, Mike Rapoport, Christoph Hellwig,
    Marek Szyprowski, Robin Murphy, Borislav Petkov, "Paul E. McKenney",
McKenney" , Neeraj Upadhyay , Randy Dunlap , Damien Le Moal , Doug Berger , Florian Fainelli , David Hildenbrand , Zi Yan , Oscar Salvador , Hari Bathini , Kees Cook , - , KOSAKI Motohiro , Mel Gorman , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, devicetree@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux.dev Subject: [PATCH 05/21] mm/hugetlb: allow migrated hugepage to dissolve when freed Date: Tue, 13 Sep 2022 12:54:52 -0700 Message-Id: <20220913195508.3511038-6-opendmb@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220913195508.3511038-1-opendmb@gmail.com> References: <20220913195508.3511038-1-opendmb@gmail.com> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_FROM, RCVD_IN_DNSWL_NONE,SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org There is no isolation mechanism for hugepages so a hugepage that is migrated is returned to its hugepage freelist. This creates problems for alloc_contig_range() because migrated hugepages can be allocated as migrate targets for subsequent hugepage migration attempts. Even if the migration succeeds the alloc_contig_range() attempt will fail because test_pages_isolated() will find the now free hugepages haven't been dissolved. A subsequent attempt by alloc_contig_range() is necessary for the isolate_migratepages_range() function to find the freed hugepage and dissolve it (assuming it has not been reallocated). A workqueue is introduced to perform the equivalent functionality of alloc_and_dissolve_huge_page() for a migrated hugepage when it is freed so that the pages can be released to the isolated page lists of the buddy allocator allowing the alloc_contig_range() attempt to succeed. The HPG_dissolve hugepage flag is introduced to allow tagging migratable hugepages that should be dissolved when freed. Signed-off-by: Doug Berger --- include/linux/hugetlb.h | 5 +++ mm/hugetlb.c | 72 ++++++++++++++++++++++++++++++++++++++--- mm/migrate.c | 1 + mm/page_alloc.c | 1 + 4 files changed, 75 insertions(+), 4 deletions(-) diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index 3ec981a0d8b3..0e6e21805e51 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -222,6 +222,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma, bool is_hugetlb_entry_migration(pte_t pte); void hugetlb_unshare_all_pmds(struct vm_area_struct *vma); +void sync_hugetlb_dissolve(void); #else /* !CONFIG_HUGETLB_PAGE */ @@ -430,6 +431,8 @@ static inline vm_fault_t hugetlb_fault(struct mm_struct *mm, static inline void hugetlb_unshare_all_pmds(struct vm_area_struct *vma) { } +static inline void sync_hugetlb_dissolve(void) { } + #endif /* !CONFIG_HUGETLB_PAGE */ /* * hugepages at page global directory. 
@@ -574,6 +577,7 @@ enum hugetlb_page_flags {
 	HPG_freed,
 	HPG_vmemmap_optimized,
 	HPG_raw_hwp_unreliable,
+	HPG_dissolve,
 	__NR_HPAGEFLAGS,
 };
 
@@ -621,6 +625,7 @@ HPAGEFLAG(Temporary, temporary)
 HPAGEFLAG(Freed, freed)
 HPAGEFLAG(VmemmapOptimized, vmemmap_optimized)
 HPAGEFLAG(RawHwpUnreliable, raw_hwp_unreliable)
+HPAGEFLAG(Dissolve, dissolve)
 
 #ifdef CONFIG_HUGETLB_PAGE
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f232a37df4b6..da80889e1436 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1582,6 +1582,10 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
 	}
 }
 
+static LLIST_HEAD(hpage_dissolvelist);
+static void dissolve_hpage_workfn(struct work_struct *work);
+static DECLARE_WORK(dissolve_hpage_work, dissolve_hpage_workfn);
+
 /*
  * As update_and_free_page() can be called under any context, so we cannot
  * use GFP_KERNEL to allocate vmemmap pages. However, we can defer the
@@ -1628,6 +1632,8 @@ static inline void flush_free_hpage_work(struct hstate *h)
 {
 	if (hugetlb_vmemmap_optimizable(h))
 		flush_work(&free_hpage_work);
+	if (!hstate_is_gigantic(h))
+		flush_work(&dissolve_hpage_work);
 }
 
 static void update_and_free_page(struct hstate *h, struct page *page,
@@ -1679,7 +1685,7 @@ void free_huge_page(struct page *page)
 	struct hstate *h = page_hstate(page);
 	int nid = page_to_nid(page);
 	struct hugepage_subpool *spool = hugetlb_page_subpool(page);
-	bool restore_reserve;
+	bool restore_reserve, dissolve;
 	unsigned long flags;
 
 	VM_BUG_ON_PAGE(page_count(page), page);
@@ -1691,6 +1697,8 @@ void free_huge_page(struct page *page)
 	page->mapping = NULL;
 	restore_reserve = HPageRestoreReserve(page);
 	ClearHPageRestoreReserve(page);
+	dissolve = HPageDissolve(page);
+	ClearHPageDissolve(page);
 
 	/*
 	 * If HPageRestoreReserve was set on page, page allocation consumed a
@@ -1729,6 +1737,11 @@ void free_huge_page(struct page *page)
 		remove_hugetlb_page(h, page, true);
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
 		update_and_free_page(h, page, true);
+	} else if (dissolve) {
+		spin_unlock_irqrestore(&hugetlb_lock, flags);
+		if (llist_add((struct llist_node *)&page->mapping,
+			      &hpage_dissolvelist))
+			schedule_work(&dissolve_hpage_work);
 	} else {
 		arch_clear_hugepage_flags(page);
 		enqueue_huge_page(h, page);
@@ -2771,6 +2784,49 @@ static void replace_hugepage(struct hstate *h, int nid, struct page *old_page,
 	enqueue_huge_page(h, new_page);
 }
 
+static void dissolve_hpage_workfn(struct work_struct *work)
+{
+	struct llist_node *node;
+
+	node = llist_del_all(&hpage_dissolvelist);
+
+	while (node) {
+		struct page *oldpage, *newpage;
+		struct hstate *h;
+		int nid;
+
+		oldpage = container_of((struct address_space **)node,
+				       struct page, mapping);
+		node = node->next;
+		oldpage->mapping = NULL;
+
+		h = page_hstate(oldpage);
+		nid = page_to_nid(oldpage);
+
+		newpage = alloc_replacement_page(h, nid);
+
+		spin_lock_irq(&hugetlb_lock);
+		/* finish freeing oldpage */
+		arch_clear_hugepage_flags(oldpage);
+		enqueue_huge_page(h, oldpage);
+		if (IS_ERR(newpage)) {
+			/* cannot dissolve so just leave free */
+			spin_unlock_irq(&hugetlb_lock);
+			goto next;
+		}
+
+		replace_hugepage(h, nid, oldpage, newpage);
+
+		/*
+		 * Pages have been replaced, we can safely free the old one.
+		 */
+		spin_unlock_irq(&hugetlb_lock);
+		__update_and_free_page(h, oldpage);
+next:
+		cond_resched();
+	}
+}
+
 /*
  * alloc_and_dissolve_huge_page - Allocate a new page and dissolve the old one
  * @h: struct hstate old page belongs to
@@ -2803,6 +2859,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
 		 */
 		spin_unlock_irq(&hugetlb_lock);
 		ret = isolate_hugetlb(old_page, list);
+		SetHPageDissolve(old_page);
 		spin_lock_irq(&hugetlb_lock);
 		goto free_new;
 	} else if (!HPageFreed(old_page)) {
@@ -2864,14 +2921,21 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
 	if (hstate_is_gigantic(h))
 		return -ENOMEM;
 
-	if (page_count(head) && !isolate_hugetlb(head, list))
+	if (page_count(head) && !isolate_hugetlb(head, list)) {
+		SetHPageDissolve(head);
 		ret = 0;
-	else if (!page_count(head))
+	} else if (!page_count(head)) {
 		ret = alloc_and_dissolve_huge_page(h, head, list);
-
+	}
 	return ret;
 }
 
+void sync_hugetlb_dissolve(void)
+{
+	flush_work(&free_hpage_work);
+	flush_work(&dissolve_hpage_work);
+}
+
 struct page *alloc_huge_page(struct vm_area_struct *vma,
 				    unsigned long addr, int avoid_reserve)
 {
diff --git a/mm/migrate.c b/mm/migrate.c
index 6a1597c92261..b6c6123e614c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -141,6 +141,7 @@ void putback_movable_pages(struct list_head *l)
 
 	list_for_each_entry_safe(page, page2, l, lru) {
 		if (unlikely(PageHuge(page))) {
+			ClearHPageDissolve(page);
 			putback_active_hugepage(page);
 			continue;
 		}
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e5486d47406e..6bf76bbc0308 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -9235,6 +9235,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 	if (ret && ret != -EBUSY)
 		goto done;
 	ret = 0;
+	sync_hugetlb_dissolve();
 
 	/*
 	 * Pages from [start, end) are within a pageblock_nr_pages
-- 
2.25.1