From: "Huang, Ying"
To: Alistair Popple
Cc: Andrew Morton, Zi Yan, Yang Shi, Baolin Wang, Oscar Salvador,
	Matthew Wilcox, Bharata B Rao, haoxin
Subject: Re: [PATCH 1/8] migrate_pages: organize stats with struct migrate_pages_stats
Date: Thu, 05 Jan 2023 13:53:11 +0800
References: <20221227002859.27740-1-ying.huang@intel.com>
	<20221227002859.27740-2-ying.huang@intel.com>
	<87y1qhu0to.fsf@nvidia.com>
In-Reply-To: <87y1qhu0to.fsf@nvidia.com> (Alistair Popple's message of
	"Thu, 05 Jan 2023 14:02:12 +1100")
Message-ID: <87lemheddk.fsf@yhuang6-desk2.ccr.corp.intel.com>

Alistair Popple writes:

> Huang Ying writes:
>
>> Define struct migrate_pages_stats to organize the various statistics
>> in migrate_pages().  This makes it easier to collect and consume the
>> statistics in multiple functions.  This will be needed in the
>> following patches in the series.
>>
>> Signed-off-by: "Huang, Ying"
>> Cc: Zi Yan
>> Cc: Yang Shi
>> Cc: Baolin Wang
>> Cc: Oscar Salvador
>> Cc: Matthew Wilcox
>> Cc: Bharata B Rao
>> Cc: Alistair Popple
>> Cc: haoxin
>> ---
>>  mm/migrate.c | 58 +++++++++++++++++++++++++++++-----------------------
>>  1 file changed, 32 insertions(+), 26 deletions(-)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index a4d3fc65085f..ec9263a33d38 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -1396,6 +1396,14 @@ static inline int try_split_folio(struct folio *folio, struct list_head *split_f
>>      return rc;
>>  }
>>
>> +struct migrate_pages_stats {
>> +    int nr_succeeded;
>> +    int nr_failed_pages;
>> +    int nr_thp_succeeded;
>> +    int nr_thp_failed;
>> +    int nr_thp_split;
>
> I think some brief comments in the code for what each stat is tracking
> and their relationship to each other would be helpful (i.e. does
> nr_succeeded include thp subpages, etc.). Or at least a reference to
> where this is documented (i.e. page_migration.rst), as I recall there
> has been some confusion in the past that has led to bugs.

OK, will do that in the next version.

> Otherwise the patch looks good so:
>
> Reviewed-by: Alistair Popple

Thanks!

Best Regards,
Huang, Ying

>> +};
>> +
>>  /*
>>   * migrate_pages - migrate the folios specified in a list, to the free folios
>>   * supplied as the target for the page migration
>> @@ -1430,13 +1438,8 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>      int large_retry = 1;
>>      int thp_retry = 1;
>>      int nr_failed = 0;
>> -    int nr_failed_pages = 0;
>>      int nr_retry_pages = 0;
>> -    int nr_succeeded = 0;
>> -    int nr_thp_succeeded = 0;
>>      int nr_large_failed = 0;
>> -    int nr_thp_failed = 0;
>> -    int nr_thp_split = 0;
>>      int pass = 0;
>>      bool is_large = false;
>>      bool is_thp = false;
>> @@ -1446,9 +1449,11 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>      LIST_HEAD(split_folios);
>>      bool nosplit = (reason == MR_NUMA_MISPLACED);
>>      bool no_split_folio_counting = false;
>> +    struct migrate_pages_stats stats;
>>
>>      trace_mm_migrate_pages_start(mode, reason);
>>
>> +    memset(&stats, 0, sizeof(stats));
>>  split_folio_migration:
>>      for (pass = 0; pass < 10 && (retry || large_retry); pass++) {
>>          retry = 0;
>> @@ -1502,9 +1507,9 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>                  /* Large folio migration is unsupported */
>>                  if (is_large) {
>>                      nr_large_failed++;
>> -                    nr_thp_failed += is_thp;
>> +                    stats.nr_thp_failed += is_thp;
>>                      if (!try_split_folio(folio, &split_folios)) {
>> -                        nr_thp_split += is_thp;
>> +                        stats.nr_thp_split += is_thp;
>>                          break;
>>                      }
>>                  /* Hugetlb migration is unsupported */
>> @@ -1512,7 +1517,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>                      nr_failed++;
>>                  }
>>
>> -                nr_failed_pages += nr_pages;
>> +                stats.nr_failed_pages += nr_pages;
>>                  list_move_tail(&folio->lru, &ret_folios);
>>                  break;
>>              case -ENOMEM:
>> @@ -1522,13 +1527,13 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>                   */
>>                  if (is_large) {
>>                      nr_large_failed++;
>> -                    nr_thp_failed += is_thp;
>> +                    stats.nr_thp_failed += is_thp;
>>                      /* Large folio NUMA faulting doesn't split to retry. */
>>                      if (!nosplit) {
>>                          int ret = try_split_folio(folio, &split_folios);
>>
>>                          if (!ret) {
>> -                            nr_thp_split += is_thp;
>> +                            stats.nr_thp_split += is_thp;
>>                              break;
>>                          } else if (reason == MR_LONGTERM_PIN &&
>>                                     ret == -EAGAIN) {
>> @@ -1546,7 +1551,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>                      nr_failed++;
>>                  }
>>
>> -                nr_failed_pages += nr_pages + nr_retry_pages;
>> +                stats.nr_failed_pages += nr_pages + nr_retry_pages;
>>                  /*
>>                   * There might be some split folios of fail-to-migrate large
>>                   * folios left in split_folios list. Move them back to migration
>> @@ -1556,7 +1561,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>                  list_splice_init(&split_folios, from);
>>                  /* nr_failed isn't updated for not used */
>>                  nr_large_failed += large_retry;
>> -                nr_thp_failed += thp_retry;
>> +                stats.nr_thp_failed += thp_retry;
>>                  goto out;
>>              case -EAGAIN:
>>                  if (is_large) {
>> @@ -1568,8 +1573,8 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>                  nr_retry_pages += nr_pages;
>>                  break;
>>              case MIGRATEPAGE_SUCCESS:
>> -                nr_succeeded += nr_pages;
>> -                nr_thp_succeeded += is_thp;
>> +                stats.nr_succeeded += nr_pages;
>> +                stats.nr_thp_succeeded += is_thp;
>>                  break;
>>              default:
>>                  /*
>> @@ -1580,20 +1585,20 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>                   */
>>                  if (is_large) {
>>                      nr_large_failed++;
>> -                    nr_thp_failed += is_thp;
>> +                    stats.nr_thp_failed += is_thp;
>>                  } else if (!no_split_folio_counting) {
>>                      nr_failed++;
>>                  }
>>
>> -                nr_failed_pages += nr_pages;
>> +                stats.nr_failed_pages += nr_pages;
>>                  break;
>>              }
>>          }
>>      }
>>      nr_failed += retry;
>>      nr_large_failed += large_retry;
>> -    nr_thp_failed += thp_retry;
>> -    nr_failed_pages += nr_retry_pages;
>> +    stats.nr_thp_failed += thp_retry;
>> +    stats.nr_failed_pages += nr_retry_pages;
>>      /*
>>       * Try to migrate split folios of fail-to-migrate large folios, no
>>       * nr_failed counting in this round, since all split folios of a
>> @@ -1626,16 +1631,17 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>      if (list_empty(from))
>>          rc = 0;
>>
>> -    count_vm_events(PGMIGRATE_SUCCESS, nr_succeeded);
>> -    count_vm_events(PGMIGRATE_FAIL, nr_failed_pages);
>> -    count_vm_events(THP_MIGRATION_SUCCESS, nr_thp_succeeded);
>> -    count_vm_events(THP_MIGRATION_FAIL, nr_thp_failed);
>> -    count_vm_events(THP_MIGRATION_SPLIT, nr_thp_split);
>> -    trace_mm_migrate_pages(nr_succeeded, nr_failed_pages, nr_thp_succeeded,
>> -                           nr_thp_failed, nr_thp_split, mode, reason);
>> +    count_vm_events(PGMIGRATE_SUCCESS, stats.nr_succeeded);
>> +    count_vm_events(PGMIGRATE_FAIL, stats.nr_failed_pages);
>> +    count_vm_events(THP_MIGRATION_SUCCESS, stats.nr_thp_succeeded);
>> +    count_vm_events(THP_MIGRATION_FAIL, stats.nr_thp_failed);
>> +    count_vm_events(THP_MIGRATION_SPLIT, stats.nr_thp_split);
>> +    trace_mm_migrate_pages(stats.nr_succeeded, stats.nr_failed_pages,
>> +                           stats.nr_thp_succeeded, stats.nr_thp_failed,
>> +                           stats.nr_thp_split, mode, reason);
>>
>>      if (ret_succeeded)
>> -        *ret_succeeded = nr_succeeded;
>> +        *ret_succeeded = stats.nr_succeeded;
>>
>>      return rc;
>>  }
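
As a rough sketch of the per-field comments discussed above (illustrative
only, not the wording of any later revision of the patch; the semantics are
inferred from how migrate_pages() updates the counters in the diff), the
annotated struct could look something like this:

/* Illustrative annotation only; field meanings inferred from the diff above. */
struct migrate_pages_stats {
    int nr_succeeded;      /* Folios (including THP subpages) migrated
                            * successfully, in units of base pages.
                            */
    int nr_failed_pages;   /* Folios (including THP subpages) that failed to
                            * be migrated, in units of base pages.
                            */
    int nr_thp_succeeded;  /* THP migrated successfully, counted per THP */
    int nr_thp_failed;     /* THP that failed to be migrated, counted per THP */
    int nr_thp_split;      /* THP split before migrating, counted per THP */
};

Whether the comments should also spell out how retried pages are folded into
nr_failed_pages, or simply point at page_migration.rst as Alistair suggests,
is left to the next version of the patch.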