From: haoxin <xhao@linux.alibaba.com>
Date: Wed, 8 Feb 2023 01:01:42 +0800
Subject: Re: [PATCH -v4 3/9] migrate_pages: restrict number of pages to migrate in batch
To: Huang Ying, Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Baolin Wang, Zi Yan, Yang Shi, Oscar Salvador, Matthew Wilcox, Bharata B Rao, Alistair Popple, Minchan Kim, Mike Kravetz, Hyeonggon Yoo <42.hyeyoo@gmail.com>
Message-ID: <01f838b1-c9c1-9e0d-5fc6-3583a1d070ec@linux.alibaba.com>
In-Reply-To: <20230206063313.635011-4-ying.huang@intel.com>
References: <20230206063313.635011-1-ying.huang@intel.com> <20230206063313.635011-4-ying.huang@intel.com>

On 2023/2/6 at 2:33 PM, Huang Ying wrote:
> This is a preparation patch to batch the folio unmapping and moving
> of non-hugetlb folios.
>
> Once folio unmapping is batched, all folios to be migrated are
> unmapped before their contents and flags are copied. If the folios
> passed to migrate_pages() cover too many pages, the affected
> processes would be stopped for too long, causing excessive latency.
> For example, the migrate_pages() syscall calls migrate_pages() with
> all the folios of a process. To avoid this issue, this patch
> restricts the number of pages migrated in one batch to no more than
> HPAGE_PMD_NR, so the impact is at the same level as that of THP
> migration.
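The batching cap is easy to picture with a small user-space sketch. To be clear, this is only an illustration, not kernel code: the toy struct folio list, the migrate_batch() helper, and the BATCH_CAP constant below are stand-ins for the real list_head machinery, migrate_pages_batch(), and NR_MAX_BATCHED_MIGRATION introduced by this patch.

#include <stdio.h>

#define BATCH_CAP 512	/* stand-in for NR_MAX_BATCHED_MIGRATION (!THP build) */

struct folio {
	int nr_pages;		/* 1 for a base page, 512 for a PMD-sized THP */
	struct folio *next;
};

/* Stand-in for migrate_pages_batch(): only reports the batch size. */
static void migrate_batch(struct folio *head, struct folio *end)
{
	struct folio *f;
	int pages = 0;

	for (f = head; f != end; f = f->next)
		pages += f->nr_pages;
	printf("migrate batch of %d pages\n", pages);
}

static void migrate_all(struct folio *from)
{
	while (from) {
		struct folio *f = from;
		int nr_pages = 0;

		/* Gather folios until adding one more would exceed the cap. */
		while (f && nr_pages + f->nr_pages <= BATCH_CAP) {
			nr_pages += f->nr_pages;
			f = f->next;
		}
		/*
		 * In the kernel a folio never exceeds the cap (it is at most
		 * HPAGE_PMD_NR pages); guard anyway so an oversized folio in
		 * this toy list would still form a batch of its own.
		 */
		if (f == from)
			f = f->next;
		migrate_batch(from, f);	/* each batch is at most BATCH_CAP pages */
		from = f;
	}
}

int main(void)
{
	/* 1 + 512 + 256 + 300 pages: migrated as batches of 1/512/256/300. */
	struct folio folios[4] = {
		{ .nr_pages = 1,   .next = &folios[1] },
		{ .nr_pages = 512, .next = &folios[2] },
		{ .nr_pages = 256, .next = &folios[3] },
		{ .nr_pages = 300, .next = NULL },
	};

	migrate_all(&folios[0]);
	return 0;
}

With CONFIG_TRANSPARENT_HUGEPAGE the cap is HPAGE_PMD_NR, so a single PMD-sized THP still fits in one batch and the worst-case stall stays at the level THP migration already imposes.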
>
> Signed-off-by: "Huang, Ying"
> Reviewed-by: Baolin Wang
> Cc: Zi Yan
> Cc: Yang Shi
> Cc: Oscar Salvador
> Cc: Matthew Wilcox
> Cc: Bharata B Rao
> Cc: Alistair Popple
> Cc: haoxin
> Cc: Minchan Kim
> Cc: Mike Kravetz
> Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> ---
>  mm/migrate.c | 174 +++++++++++++++++++++++++++++++--------------------
>  1 file changed, 106 insertions(+), 68 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index be7f37523463..9a667039c34c 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1396,6 +1396,11 @@ static inline int try_split_folio(struct folio *folio, struct list_head *split_f
>  	return rc;
>  }
>  
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +#define NR_MAX_BATCHED_MIGRATION	HPAGE_PMD_NR
> +#else
> +#define NR_MAX_BATCHED_MIGRATION	512
> +#endif
>  #define NR_MAX_MIGRATE_PAGES_RETRY	10
>  
>  struct migrate_pages_stats {
> @@ -1497,40 +1502,15 @@ static int migrate_hugetlbs(struct list_head *from, new_page_t get_new_page,
>  	return nr_failed;
>  }
>  
> -/*
> - * migrate_pages - migrate the folios specified in a list, to the free folios
> - *		   supplied as the target for the page migration
> - *
> - * @from:		The list of folios to be migrated.
> - * @get_new_page:	The function used to allocate free folios to be used
> - *			as the target of the folio migration.
> - * @put_new_page:	The function used to free target folios if migration
> - *			fails, or NULL if no special handling is necessary.
> - * @private:		Private data to be passed on to get_new_page()
> - * @mode:		The migration mode that specifies the constraints for
> - *			folio migration, if any.
> - * @reason:		The reason for folio migration.
> - * @ret_succeeded:	Set to the number of folios migrated successfully if
> - *			the caller passes a non-NULL pointer.
> - *
> - * The function returns after NR_MAX_MIGRATE_PAGES_RETRY attempts or if no folios
> - * are movable any more because the list has become empty or no retryable folios
> - * exist any more. It is caller's responsibility to call putback_movable_pages()
> - * only if ret != 0.
> - *
> - * Returns the number of {normal folio, large folio, hugetlb} that were not
> - * migrated, or an error code. The number of large folio splits will be
> - * considered as the number of non-migrated large folio, no matter how many
> - * split folios of the large folio are migrated successfully.
> - */
> -int migrate_pages(struct list_head *from, new_page_t get_new_page,
> +static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
>  		free_page_t put_new_page, unsigned long private,
> -		enum migrate_mode mode, int reason, unsigned int *ret_succeeded)
> +		enum migrate_mode mode, int reason, struct list_head *ret_folios,
> +		struct migrate_pages_stats *stats)
>  {
>  	int retry = 1;
>  	int large_retry = 1;
>  	int thp_retry = 1;
> -	int nr_failed;
> +	int nr_failed = 0;
>  	int nr_retry_pages = 0;
>  	int nr_large_failed = 0;
>  	int pass = 0;
> @@ -1538,20 +1518,9 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  	bool is_thp = false;
>  	struct folio *folio, *folio2;
>  	int rc, nr_pages;
> -	LIST_HEAD(ret_folios);
>  	LIST_HEAD(split_folios);
>  	bool nosplit = (reason == MR_NUMA_MISPLACED);
>  	bool no_split_folio_counting = false;
> -	struct migrate_pages_stats stats;
> -
> -	trace_mm_migrate_pages_start(mode, reason);
> -
> -	memset(&stats, 0, sizeof(stats));
> -	rc = migrate_hugetlbs(from, get_new_page, put_new_page, private, mode, reason,
> -			      &stats, &ret_folios);
> -	if (rc < 0)
> -		goto out;
> -	nr_failed = rc;
>  
>  split_folio_migration:
>  	for (pass = 0;
> @@ -1563,12 +1532,6 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  		nr_retry_pages = 0;
>  
>  		list_for_each_entry_safe(folio, folio2, from, lru) {
> -			/* Retried hugetlb folios will be kept in list */
> -			if (folio_test_hugetlb(folio)) {
> -				list_move_tail(&folio->lru, &ret_folios);
> -				continue;
> -			}
> -
>  			/*
>  			 * Large folio statistics is based on the source large
>  			 * folio. Capture required information that might get
> @@ -1582,15 +1545,14 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  
>  			rc = unmap_and_move(get_new_page, put_new_page,
>  					    private, folio, pass > 2, mode,
> -					    reason, &ret_folios);
> +					    reason, ret_folios);
>  			/*
>  			 * The rules are:
>  			 *	Success: folio will be freed
>  			 *	-EAGAIN: stay on the from list
>  			 *	-ENOMEM: stay on the from list
>  			 *	-ENOSYS: stay on the from list
> -			 *	Other errno: put on ret_folios list then splice to
> -			 *		     from list
> +			 *	Other errno: put on ret_folios list
>  			 */
>  			switch(rc) {
>  			/*
> @@ -1607,17 +1569,17 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  				/* Large folio migration is unsupported */
>  				if (is_large) {
>  					nr_large_failed++;
> -					stats.nr_thp_failed += is_thp;
> +					stats->nr_thp_failed += is_thp;
>  					if (!try_split_folio(folio, &split_folios)) {
> -						stats.nr_thp_split += is_thp;
> +						stats->nr_thp_split += is_thp;
>  						break;
>  					}
>  				} else if (!no_split_folio_counting) {
>  					nr_failed++;
>  				}
>  
> -				stats.nr_failed_pages += nr_pages;
> -				list_move_tail(&folio->lru, &ret_folios);
> +				stats->nr_failed_pages += nr_pages;
> +				list_move_tail(&folio->lru, ret_folios);
>  				break;
>  			case -ENOMEM:
>  				/*
> @@ -1626,13 +1588,13 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  				 */
>  				if (is_large) {
>  					nr_large_failed++;
> -					stats.nr_thp_failed += is_thp;
> +					stats->nr_thp_failed += is_thp;
>  					/* Large folio NUMA faulting doesn't split to retry. */
>  					if (!nosplit) {
>  						int ret = try_split_folio(folio, &split_folios);
>  
>  						if (!ret) {
> -							stats.nr_thp_split += is_thp;
> +							stats->nr_thp_split += is_thp;
>  							break;
>  						} else if (reason == MR_LONGTERM_PIN &&
>  							   ret == -EAGAIN) {
> @@ -1650,17 +1612,17 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  					nr_failed++;
>  				}
>  
> -				stats.nr_failed_pages += nr_pages + nr_retry_pages;
> +				stats->nr_failed_pages += nr_pages + nr_retry_pages;
>  				/*
>  				 * There might be some split folios of fail-to-migrate large
> -				 * folios left in split_folios list. Move them back to migration
> +				 * folios left in split_folios list. Move them to ret_folios
>  				 * list so that they could be put back to the right list by
>  				 * the caller otherwise the folio refcnt will be leaked.
>  				 */
> -				list_splice_init(&split_folios, from);
> +				list_splice_init(&split_folios, ret_folios);
>  				/* nr_failed isn't updated for not used */
>  				nr_large_failed += large_retry;
> -				stats.nr_thp_failed += thp_retry;
> +				stats->nr_thp_failed += thp_retry;
>  				goto out;
>  			case -EAGAIN:
>  				if (is_large) {
> @@ -1672,8 +1634,8 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  				nr_retry_pages += nr_pages;
>  				break;
>  			case MIGRATEPAGE_SUCCESS:
> -				stats.nr_succeeded += nr_pages;
> -				stats.nr_thp_succeeded += is_thp;
> +				stats->nr_succeeded += nr_pages;
> +				stats->nr_thp_succeeded += is_thp;
>  				break;
>  			default:
>  				/*
> @@ -1684,20 +1646,20 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  				 */
>  				if (is_large) {
>  					nr_large_failed++;
> -					stats.nr_thp_failed += is_thp;
> +					stats->nr_thp_failed += is_thp;
>  				} else if (!no_split_folio_counting) {
>  					nr_failed++;
>  				}
>  
> -				stats.nr_failed_pages += nr_pages;
> +				stats->nr_failed_pages += nr_pages;
>  				break;
>  			}
>  		}
>  	}
>  	nr_failed += retry;
>  	nr_large_failed += large_retry;
> -	stats.nr_thp_failed += thp_retry;
> -	stats.nr_failed_pages += nr_retry_pages;
> +	stats->nr_thp_failed += thp_retry;
> +	stats->nr_failed_pages += nr_retry_pages;
>  	/*
>  	 * Try to migrate split folios of fail-to-migrate large folios, no
>  	 * nr_failed counting in this round, since all split folios of a
> @@ -1708,7 +1670,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  	 * Move non-migrated folios (after NR_MAX_MIGRATE_PAGES_RETRY
>  	 * retries) to ret_folios to avoid migrating them again.
>  	 */
> -	list_splice_init(from, &ret_folios);
> +	list_splice_init(from, ret_folios);
>  	list_splice_init(&split_folios, from);
>  	no_split_folio_counting = true;
>  	retry = 1;
> @@ -1716,6 +1678,82 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  	}
>  
>  	rc = nr_failed + nr_large_failed;
> +out:
> +	return rc;
> +}
> +
> +/*
> + * migrate_pages - migrate the folios specified in a list, to the free folios
> + *		   supplied as the target for the page migration
> + *
> + * @from:		The list of folios to be migrated.
> + * @get_new_page:	The function used to allocate free folios to be used
> + *			as the target of the folio migration.
> + * @put_new_page:	The function used to free target folios if migration
> + *			fails, or NULL if no special handling is necessary.
> + * @private:		Private data to be passed on to get_new_page()
> + * @mode:		The migration mode that specifies the constraints for
> + *			folio migration, if any.
> + * @reason:		The reason for folio migration.
> + * @ret_succeeded:	Set to the number of folios migrated successfully if
> + *			the caller passes a non-NULL pointer.
> + *
> + * The function returns after NR_MAX_MIGRATE_PAGES_RETRY attempts or if no folios
> + * are movable any more because the list has become empty or no retryable folios
> + * exist any more. It is caller's responsibility to call putback_movable_pages()
> + * only if ret != 0.
> + *
> + * Returns the number of {normal folio, large folio, hugetlb} that were not
> + * migrated, or an error code. The number of large folio splits will be
> + * considered as the number of non-migrated large folio, no matter how many
> + * split folios of the large folio are migrated successfully.
> + */
> +int migrate_pages(struct list_head *from, new_page_t get_new_page,
> +		free_page_t put_new_page, unsigned long private,
> +		enum migrate_mode mode, int reason, unsigned int *ret_succeeded)
> +{
> +	int rc, rc_gather;
> +	int nr_pages;
> +	struct folio *folio, *folio2;
> +	LIST_HEAD(folios);
> +	LIST_HEAD(ret_folios);
> +	struct migrate_pages_stats stats;
> +
> +	trace_mm_migrate_pages_start(mode, reason);
> +
> +	memset(&stats, 0, sizeof(stats));
> +
> +	rc_gather = migrate_hugetlbs(from, get_new_page, put_new_page, private,
> +				     mode, reason, &stats, &ret_folios);
> +	if (rc_gather < 0)
> +		goto out;
> +again:
> +	nr_pages = 0;
> +	list_for_each_entry_safe(folio, folio2, from, lru) {
> +		/* Retried hugetlb folios will be kept in list */
> +		if (folio_test_hugetlb(folio)) {
> +			list_move_tail(&folio->lru, &ret_folios);
> +			continue;
> +		}
> +
> +		nr_pages += folio_nr_pages(folio);
> +		if (nr_pages > NR_MAX_BATCHED_MIGRATION)
> +			break;
> +	}
> +	if (nr_pages > NR_MAX_BATCHED_MIGRATION)
> +		list_cut_before(&folios, from, &folio->lru);
> +	else
> +		list_splice_init(from, &folios);
> +	rc = migrate_pages_batch(&folios, get_new_page, put_new_page, private,
> +				 mode, reason, &ret_folios, &stats);
> +	list_splice_tail_init(&folios, &ret_folios);
> +	if (rc < 0) {
> +		rc_gather = rc;
> +		goto out;
> +	}
> +	rc_gather += rc;
> +	if (!list_empty(from))
> +		goto again;
>  out:
>  	/*
>  	 * Put the permanent failure folio back to migration list, they
> @@ -1728,7 +1766,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  	 * are migrated successfully.
>  	 */
>  	if (list_empty(from))
> -		rc = 0;
> +		rc_gather = 0;
>  
>  	count_vm_events(PGMIGRATE_SUCCESS, stats.nr_succeeded);
>  	count_vm_events(PGMIGRATE_FAIL, stats.nr_failed_pages);
> @@ -1742,7 +1780,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  	if (ret_succeeded)
>  		*ret_succeeded = stats.nr_succeeded;
>  
> -	return rc;
> +	return rc_gather;
>  }
>  
>  struct page *alloc_migration_target(struct page *page, unsigned long private)

Reviewed-by: Xin Hao <xhao@linux.alibaba.com>
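One side note for readers following the kerneldoc kept above: the documented calling convention ("call putback_movable_pages() only if ret != 0") looks roughly like this at a call site. This is only a sketch modeled on existing callers such as do_migrate_range() in mm/memory_hotplug.c, not part of this patch; the isolation step is elided and the migration_target_control values are assumptions for illustration.

	LIST_HEAD(source);
	struct migration_target_control mtc = {
		.nid = NUMA_NO_NODE,			/* let the allocator pick a node */
		.gfp_mask = GFP_USER | __GFP_MOVABLE,	/* assumed for the sketch */
	};
	int ret;

	/* ... folios are isolated onto @source first ... */

	ret = migrate_pages(&source, alloc_migration_target, NULL,
			    (unsigned long)&mtc, MIGRATE_SYNC,
			    MR_MEMORY_HOTPLUG, NULL);
	if (ret)
		/* non-migrated folios must be put back by the caller */
		putback_movable_pages(&source);

With this patch, such a caller is unchanged: the batching and the NR_MAX_BATCHED_MIGRATION cap stay internal to migrate_pages().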