From: "Huang, Ying"
To: Zi Yan
Cc: Andrew Morton, Xin Hao, Yang Shi, Baolin Wang, "Oscar Salvador",
 Alistair Popple
Subject: Re: [PATCH 1/2] migrate_pages_batch: simplify retrying and failure counting of large folios
References: <20230509022014.380493-1-ying.huang@intel.com>
 <21078C33-2679-4C13-B46D-65858A4DC516@nvidia.com>
Date: Wed, 10 May 2023 10:36:24 +0800
In-Reply-To: <21078C33-2679-4C13-B46D-65858A4DC516@nvidia.com> (Zi Yan's message of "Tue, 09 May 2023 10:25:44 -0400")
Message-ID: <875y90ap5j.fsf@yhuang6-desk2.ccr.corp.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)
X-Mailing-List: linux-kernel@vger.kernel.org

Zi Yan writes:

> On 8 May 2023, at 22:20, Huang Ying wrote:
>
>> After recent changes to the retrying and failure counting in
>> migrate_pages_batch(), it was found that it's unnecessary to count
>> retries and failures for normal, large, and THP folios separately,
>> because the retry and failure counts of large folios are never used
>> directly.  So, this patch simplifies the retry and failure counting
>> of large folios by counting the retries and failures of normal and
>> large folios together.  This also reduces the line count.
>>
>> This is just a code cleanup; no functional changes are expected.
>>
>> Signed-off-by: "Huang, Ying"
>> Cc: Xin Hao
>> Cc: Zi Yan
>> Cc: Yang Shi
>> Cc: Baolin Wang
>> Cc: Oscar Salvador
>> Cc: Alistair Popple
>> ---
>>  mm/migrate.c | 103 +++++++++++++++++----------------------------------
>>  1 file changed, 35 insertions(+), 68 deletions(-)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 01cac26a3127..10709aed76d3 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -1614,11 +1614,9 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
>>  		int nr_pass)
>>  {
>>  	int retry = 1;
>> -	int large_retry = 1;
>>  	int thp_retry = 1;
>>  	int nr_failed = 0;
>>  	int nr_retry_pages = 0;
>> -	int nr_large_failed = 0;
>>  	int pass = 0;
>>  	bool is_large = false;
>>  	bool is_thp = false;
>> @@ -1631,9 +1629,8 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
>>  	VM_WARN_ON_ONCE(mode != MIGRATE_ASYNC &&
>>  			!list_empty(from) && !list_is_singular(from));
>>
>> -	for (pass = 0; pass < nr_pass && (retry || large_retry); pass++) {
>> +	for (pass = 0; pass < nr_pass && retry; pass++) {
>>  		retry = 0;
>> -		large_retry = 0;
>>  		thp_retry = 0;
>>  		nr_retry_pages = 0;
>>
>> @@ -1660,7 +1657,7 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
>>  			 * list is processed.
>>  			 */
>>  			if (!thp_migration_supported() && is_thp) {
>> -				nr_large_failed++;
>> +				nr_failed++;
>>  				stats->nr_thp_failed++;
>>  				if (!try_split_folio(folio, split_folios)) {
>>  					stats->nr_thp_split++;
>> @@ -1688,38 +1685,33 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
>>  			 * When memory is low, don't bother to try to migrate
>>  			 * other folios, move unmapped folios, then exit.
>>  			 */
>> -			if (is_large) {
>> -				nr_large_failed++;
>> -				stats->nr_thp_failed += is_thp;
>> -				/* Large folio NUMA faulting doesn't split to retry. */
>> -				if (!nosplit) {
>> -					int ret = try_split_folio(folio, split_folios);
>> -
>> -					if (!ret) {
>> -						stats->nr_thp_split += is_thp;
>> -						break;
>> -					} else if (reason == MR_LONGTERM_PIN &&
>> -						   ret == -EAGAIN) {
>> -						/*
>> -						 * Try again to split large folio to
>> -						 * mitigate the failure of longterm pinning.
>> -						 */
>> -						large_retry++;
>> -						thp_retry += is_thp;
>> -						nr_retry_pages += nr_pages;
>> -						/* Undo duplicated failure counting. */
>> -						nr_large_failed--;
>> -						stats->nr_thp_failed -= is_thp;
>> -						break;
>> -					}
>> +			nr_failed++;
>> +			stats->nr_thp_failed += is_thp;
>> +			/* Large folio NUMA faulting doesn't split to retry. */
>> +			if (is_large && !nosplit) {
>> +				int ret = try_split_folio(folio, split_folios);
>> +
>> +				if (!ret) {
>> +					stats->nr_thp_split += is_thp;
>> +					break;
>> +				} else if (reason == MR_LONGTERM_PIN &&
>> +					   ret == -EAGAIN) {
>> +					/*
>> +					 * Try again to split large folio to
>> +					 * mitigate the failure of longterm pinning.
>> +					 */
>> +					retry++;
>> +					thp_retry += is_thp;
>> +					nr_retry_pages += nr_pages;
>> +					/* Undo duplicated failure counting. */
>> +					nr_failed--;
>> +					stats->nr_thp_failed -= is_thp;
>> +					break;
>>  				}
>> -			} else {
>> -				nr_failed++;
>>  			}
>>
>>  			stats->nr_failed_pages += nr_pages + nr_retry_pages;
>>  			/* nr_failed isn't updated for not used */
>> -			nr_large_failed += large_retry;
>>  			stats->nr_thp_failed += thp_retry;
>>  			rc_saved = rc;
>>  			if (list_empty(&unmap_folios))
>> @@ -1727,12 +1719,8 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
>>  			else
>>  				goto move;
>>  		case -EAGAIN:
>> -			if (is_large) {
>> -				large_retry++;
>> -				thp_retry += is_thp;
>> -			} else {
>> -				retry++;
>> -			}
>> +			retry++;
>> +			thp_retry += is_thp;
>>  			nr_retry_pages += nr_pages;
>>  			break;
>>  		case MIGRATEPAGE_SUCCESS:
>> @@ -1750,20 +1738,14 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
>>  			 * removed from migration folio list and not
>>  			 * retried in the next outer loop.
>>  			 */
>> -			if (is_large) {
>> -				nr_large_failed++;
>> -				stats->nr_thp_failed += is_thp;
>> -			} else {
>> -				nr_failed++;
>> -			}
>> -
>> +			nr_failed++;
>> +			stats->nr_thp_failed += is_thp;
>>  			stats->nr_failed_pages += nr_pages;
>>  			break;
>>  		}
>>  	}
>>  }
>>  	nr_failed += retry;
>> -	nr_large_failed += large_retry;
>>  	stats->nr_thp_failed += thp_retry;
>>  	stats->nr_failed_pages += nr_retry_pages;
>> move:
>> @@ -1771,17 +1753,15 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
>>  	try_to_unmap_flush();
>>
>>  	retry = 1;
>> -	for (pass = 0; pass < nr_pass && (retry || large_retry); pass++) {
>> +	for (pass = 0; pass < nr_pass && retry; pass++) {
>>  		retry = 0;
>> -		large_retry = 0;
>>  		thp_retry = 0;
>>  		nr_retry_pages = 0;
>>
>>  		dst = list_first_entry(&dst_folios, struct folio, lru);
>>  		dst2 = list_next_entry(dst, lru);
>>  		list_for_each_entry_safe(folio, folio2, &unmap_folios, lru) {
>> -			is_large = folio_test_large(folio);
>> -			is_thp = is_large && folio_test_pmd_mappable(folio);
>> +			is_thp = folio_test_large(folio) && folio_test_pmd_mappable(folio);
>
> Should this be part of patch 2? Or maybe just merge two patches?

OK.  Will merge the two patches.

>>  			nr_pages = folio_nr_pages(folio);
>>
>>  			cond_resched();
>> @@ -1797,12 +1777,8 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
>>  			 */
>>  			switch(rc) {
>>  			case -EAGAIN:
>> -				if (is_large) {
>> -					large_retry++;
>> -					thp_retry += is_thp;
>> -				} else {
>> -					retry++;
>> -				}
>> +				retry++;
>> +				thp_retry += is_thp;
>>  				nr_retry_pages += nr_pages;
>>  				break;
>>  			case MIGRATEPAGE_SUCCESS:
>> @@ -1810,13 +1786,8 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
>>  				stats->nr_thp_succeeded += is_thp;
>>  				break;
>>  			default:
>> -				if (is_large) {
>> -					nr_large_failed++;
>> -					stats->nr_thp_failed += is_thp;
>> -				} else {
>> -					nr_failed++;
>> -				}
>> -
>> +				nr_failed++;
>> +				stats->nr_thp_failed += is_thp;
>>  				stats->nr_failed_pages += nr_pages;
>>  				break;
>>  			}
>> @@ -1825,14 +1796,10 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
>>  		}
>>  	}
>>  	nr_failed += retry;
>> -	nr_large_failed += large_retry;
>>  	stats->nr_thp_failed += thp_retry;
>>  	stats->nr_failed_pages += nr_retry_pages;
>>
>> -	if (rc_saved)
>> -		rc = rc_saved;
>> -	else
>> -		rc = nr_failed + nr_large_failed;
>> +	rc = rc_saved ? : nr_failed;
>>  out:
>>  	/* Cleanup remaining folios */
>>  	dst = list_first_entry(&dst_folios, struct folio, lru);
>> --
>> 2.39.2
>
> Otherwise LGTM. Reviewed-by: Zi Yan

Thanks!

Best Regards,
Huang, Ying
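
[A minimal userspace sketch of the unified counting pattern the patch
converges on, for readers who want to experiment with it outside the
kernel.  This is not kernel code: the folio array, the outcome model,
and all names below are illustrative assumptions, and the GNU "?:"
shorthand from the patch (rc = rc_saved ? : nr_failed;) is spelled out
in its portable form.]

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define NR_PASS 3	/* mirrors the bounded retry passes in migrate_pages_batch() */

enum outcome { SUCCESS, AGAIN, FAIL };	/* hypothetical per-attempt results */

struct folio_stub {
	bool is_thp;		/* PMD-mappable large folio? */
	enum outcome result;	/* simulated migration result */
	bool done;		/* stands in for removal from the migration list */
};

int main(void)
{
	struct folio_stub folios[] = {
		{ .is_thp = false, .result = SUCCESS },
		{ .is_thp = true,  .result = AGAIN   },	/* never stops asking to retry */
		{ .is_thp = false, .result = FAIL    },
	};
	int retry = 1, thp_retry = 1;
	int nr_failed = 0, nr_thp_failed = 0;
	int rc_saved = 0, rc;

	for (int pass = 0; pass < NR_PASS && retry; pass++) {
		retry = 0;
		thp_retry = 0;
		for (size_t i = 0; i < sizeof(folios) / sizeof(folios[0]); i++) {
			if (folios[i].done)
				continue;
			switch (folios[i].result) {
			case AGAIN:
				/* One retry counter for all folio sizes now ... */
				retry++;
				/* ... with THP stats kept as a 0-or-1 overlay. */
				thp_retry += folios[i].is_thp;
				break;
			case SUCCESS:
				folios[i].done = true;
				break;
			default:
				nr_failed++;
				nr_thp_failed += folios[i].is_thp;
				folios[i].done = true;
				break;
			}
		}
	}
	/* Folios still asking to retry after the last pass count as failed. */
	nr_failed += retry;
	nr_thp_failed += thp_retry;

	/* Portable spelling of the patch's "rc = rc_saved ? : nr_failed;". */
	rc = rc_saved ? rc_saved : nr_failed;

	printf("rc=%d nr_failed=%d nr_thp_failed=%d\n", rc, nr_failed, nr_thp_failed);
	return 0;
}

With the fixed outcomes above this prints "rc=2 nr_failed=2
nr_thp_failed=1": the THP folio that keeps returning a retry request is
folded into nr_failed by the final "nr_failed += retry", which is
exactly the accounting that lets the patch drop the separate
large_retry/nr_large_failed counters.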