From: Huang Ying
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying, Zi Yan,
 Yang Shi, Baolin Wang, Oscar Salvador, Matthew Wilcox, Bharata B Rao,
 Alistair Popple, haoxin
Subject: [PATCH 2/8] migrate_pages: separate hugetlb folios migration
Date: Tue, 27 Dec 2022 08:28:53 +0800
Message-Id: <20221227002859.27740-3-ying.huang@intel.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20221227002859.27740-1-ying.huang@intel.com>
References: <20221227002859.27740-1-ying.huang@intel.com>

This is a preparation patch to batch the folio unmapping and moving for
the non-hugetlb folios.
Based on that, we can batch the TLB shootdown during the folio migration
and make it possible to use a hardware accelerator for the folio copying.

In this patch, the migration of hugetlb folios is separated from that of
non-hugetlb folios in migrate_pages(), to make it easy to change the
non-hugetlb folios migration implementation.

Signed-off-by: "Huang, Ying"
Cc: Zi Yan
Cc: Yang Shi
Cc: Baolin Wang
Cc: Oscar Salvador
Cc: Matthew Wilcox
Cc: Bharata B Rao
Cc: Alistair Popple
Cc: haoxin
---
 mm/migrate.c | 114 ++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 99 insertions(+), 15 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index ec9263a33d38..bdbe73fe2eb7 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1404,6 +1404,87 @@ struct migrate_pages_stats {
 	int nr_thp_split;
 };
 
+static int migrate_hugetlbs(struct list_head *from, new_page_t get_new_page,
+			    free_page_t put_new_page, unsigned long private,
+			    enum migrate_mode mode, int reason,
+			    struct migrate_pages_stats *stats,
+			    struct list_head *ret_folios)
+{
+	int retry = 1;
+	int nr_failed = 0;
+	int nr_retry_pages = 0;
+	int pass = 0;
+	struct folio *folio, *folio2;
+	int rc = 0, nr_pages;
+
+	for (pass = 0; pass < 10 && retry; pass++) {
+		retry = 0;
+		nr_retry_pages = 0;
+
+		list_for_each_entry_safe(folio, folio2, from, lru) {
+			if (!folio_test_hugetlb(folio))
+				continue;
+
+			nr_pages = folio_nr_pages(folio);
+
+			cond_resched();
+
+			rc = unmap_and_move_huge_page(get_new_page,
+						      put_new_page, private,
+						      &folio->page, pass > 2, mode,
+						      reason, ret_folios);
+			/*
+			 * The rules are:
+			 *	Success: hugetlb folio will be put back
+			 *	-EAGAIN: stay on the from list
+			 *	-ENOMEM: stay on the from list
+			 *	-ENOSYS: stay on the from list
+			 *	Other errno: put on ret_folios list
+			 */
+			switch(rc) {
+			case -ENOSYS:
+				/* Hugetlb migration is unsupported */
+				nr_failed++;
+				stats->nr_failed_pages += nr_pages;
+				list_move_tail(&folio->lru, ret_folios);
+				break;
+			case -ENOMEM:
+				/*
+				 * When memory is low, don't bother to try to migrate
+				 * other folios, just exit.
+				 */
+				nr_failed++;
+				stats->nr_failed_pages += nr_pages;
+				goto out;
+			case -EAGAIN:
+				retry++;
+				nr_retry_pages += nr_pages;
+				break;
+			case MIGRATEPAGE_SUCCESS:
+				stats->nr_succeeded += nr_pages;
+				break;
+			default:
+				/*
+				 * Permanent failure (-EBUSY, etc.):
+				 * unlike -EAGAIN case, the failed folio is
+				 * removed from migration folio list and not
+				 * retried in the next outer loop.
+				 */
+				nr_failed++;
+				stats->nr_failed_pages += nr_pages;
+				break;
+			}
+		}
+	}
+out:
+	nr_failed += retry;
+	stats->nr_failed_pages += nr_retry_pages;
+	if (rc != -ENOMEM)
+		rc = nr_failed;
+
+	return rc;
+}
+
 /*
  * migrate_pages - migrate the folios specified in a list, to the free folios
  * supplied as the target for the page migration
@@ -1437,7 +1518,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	int retry = 1;
 	int large_retry = 1;
 	int thp_retry = 1;
-	int nr_failed = 0;
+	int nr_failed;
 	int nr_retry_pages = 0;
 	int nr_large_failed = 0;
 	int pass = 0;
@@ -1454,6 +1535,12 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	trace_mm_migrate_pages_start(mode, reason);
 	memset(&stats, 0, sizeof(stats));
 
+	rc = migrate_hugetlbs(from, get_new_page, put_new_page, private, mode, reason,
+			      &stats, &ret_folios);
+	if (rc < 0)
+		goto out;
+	nr_failed = rc;
+
 split_folio_migration:
 	for (pass = 0; pass < 10 && (retry || large_retry); pass++) {
 		retry = 0;
@@ -1462,30 +1549,28 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 		nr_retry_pages = 0;
 
 		list_for_each_entry_safe(folio, folio2, from, lru) {
+			if (folio_test_hugetlb(folio)) {
+				list_move_tail(&folio->lru, &ret_folios);
+				continue;
+			}
+
 			/*
 			 * Large folio statistics is based on the source large
 			 * folio. Capture required information that might get
 			 * lost during migration.
 			 */
-			is_large = folio_test_large(folio) && !folio_test_hugetlb(folio);
+			is_large = folio_test_large(folio);
 			is_thp = is_large && folio_test_pmd_mappable(folio);
 			nr_pages = folio_nr_pages(folio);
+
 			cond_resched();
 
-			if (folio_test_hugetlb(folio))
-				rc = unmap_and_move_huge_page(get_new_page,
-						put_new_page, private,
-						&folio->page, pass > 2, mode,
-						reason,
-						&ret_folios);
-			else
-				rc = unmap_and_move(get_new_page, put_new_page,
-						private, folio, pass > 2, mode,
-						reason, &ret_folios);
+			rc = unmap_and_move(get_new_page, put_new_page,
+					    private, folio, pass > 2, mode,
+					    reason, &ret_folios);
 			/*
 			 * The rules are:
-			 *	Success: non hugetlb folio will be freed, hugetlb
-			 *	folio will be put back
+			 *	Success: folio will be freed
 			 *	-EAGAIN: stay on the from list
 			 *	-ENOMEM: stay on the from list
 			 *	-ENOSYS: stay on the from list
@@ -1512,7 +1597,6 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 					stats.nr_thp_split += is_thp;
 					break;
 				}
-				/* Hugetlb migration is unsupported */
 			} else if (!no_split_folio_counting) {
 				nr_failed++;
 			}
-- 
2.35.1
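
P.S. For readers who want the shape of the change without the diff noise,
below is a minimal, self-contained userspace sketch (illustration only, not
kernel code; every name in it is invented for the example) of the two-pass
pattern this patch introduces: "hugetlb" entries get their own bounded retry
loop up front, mirroring migrate_hugetlbs(), so the main loop afterwards can
skip them unconditionally.

/*
 * Illustration only, NOT part of the patch.  A userspace toy showing the
 * two-pass structure migrate_pages() gains here.  All names are invented.
 */
#include <stdbool.h>
#include <stdio.h>

struct item {
	const char *name;
	bool is_hugetlb;	/* stands in for folio_test_hugetlb() */
	int eagain_left;	/* fake transient failures, like -EAGAIN */
	bool migrated;
};

/* Pass 1: migrate only the hugetlb items, retrying up to 10 passes. */
static int migrate_hugetlb_items(struct item *items, int n)
{
	int nr_failed = 0;
	bool retry = true;

	for (int pass = 0; pass < 10 && retry; pass++) {
		retry = false;
		for (int i = 0; i < n; i++) {
			struct item *it = &items[i];

			if (!it->is_hugetlb || it->migrated)
				continue;
			if (it->eagain_left > 0) {	/* like -EAGAIN */
				it->eagain_left--;
				retry = true;
				continue;
			}
			it->migrated = true;	/* like MIGRATEPAGE_SUCCESS */
			printf("pass %d: migrated hugetlb %s\n", pass, it->name);
		}
	}
	for (int i = 0; i < n; i++)	/* whatever is left failed for good */
		if (items[i].is_hugetlb && !items[i].migrated)
			nr_failed++;
	return nr_failed;
}

int main(void)
{
	struct item items[] = {
		{ "A", true,  2, false },
		{ "B", false, 0, false },
		{ "C", true,  0, false },
	};
	int n = (int)(sizeof(items) / sizeof(items[0]));
	int nr_failed = migrate_hugetlb_items(items, n);

	/* Pass 2: the main loop only ever sees non-hugetlb items now. */
	for (int i = 0; i < n; i++) {
		if (items[i].is_hugetlb)
			continue;	/* handled (or failed) in pass 1 */
		items[i].migrated = true;
		printf("migrated %s\n", items[i].name);
	}
	printf("hugetlb failures: %d\n", nr_failed);
	return 0;
}

The point of the split is visible in pass 2: once the hugetlb pass has run,
the per-item handling no longer branches on the folio type, which is what
makes it easy to change the non-hugetlb migration implementation (e.g. to
batch unmapping and moving) in the rest of the series.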