From: Huang Ying <ying.huang@intel.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Huang, Ying",
    Baolin Wang, Zi Yan, Yang Shi, Oscar Salvador
Subject: [PATCH -V3 0/8] migrate_pages(): fix several bugs in error path
Date: Wed, 17 Aug 2022 16:14:00 +0800
Message-Id: <20220817081408.513338-1-ying.huang@intel.com>
X-Mailer: git-send-email 2.30.2

From: "Huang, Ying" <ying.huang@intel.com>

While reviewing the code of migrate_pages() and building a test program
for it, I identified several bugs in the error path; they are fixed in
this series.

Most patches are tested via the following steps:

- Apply error-inject.patch to the Linux kernel
- Compile test-migrate.c (with -lnuma)
- Run test-migrate.sh

error-inject.patch, test-migrate.c, and test-migrate.sh are included
below.
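Roughly, one test run amounts to the following (a condensed sketch of what
test-migrate.sh below automates; it assumes a kernel with error-inject.patch
applied is booted, and the build command and the 0x26 mask are only examples):

  # build the test program against libnuma
  gcc -o test-migrate test-migrate.c -lnuma

  # pick which failures to inject into migrate_pages()
  # (see the EI_MP_* defines in error-inject.patch)
  echo 0x26 > /sys/module/migrate/parameters/ei_migrate_pages

  # migrate a few pages and compare the migration counters before/after
  grep -E 'pgmigrate|thp_migration' /proc/vmstat
  ./test-migrate
  grep -E 'pgmigrate|thp_migration' /proc/vmstat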
It turns out that error injection is an important tool for fixing bugs in
the error path.

Changes:

v3:

- Rebased on mm-unstable (20220816)
- Added Baolin's patch to avoid retrying 10 times when migrating THP
  subpages fails

v2:

- Rebased on v5.19-rc5
- Addressed some comments from Baolin, Thanks!
- Added reviewed-by tags

Best Regards,
Huang, Ying

------------------------- error-inject.patch -------------------------

From 295ea21204f3f025a041fe39c68a2eaec8313c68 Mon Sep 17 00:00:00 2001
From: Huang Ying <ying.huang@intel.com>
Date: Tue, 21 Jun 2022 11:08:30 +0800
Subject: [PATCH] migrate_pages: error inject

---
 mm/migrate.c | 58 +++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 55 insertions(+), 3 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 399904015d23..87d47064ec6c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -337,6 +337,42 @@ void pmd_migration_entry_wait(struct mm_struct *mm, pmd_t *pmd)
 }
 #endif
 
+#define EI_MP_ENOSYS		0x0001
+#define EI_MP_THP_ENOMEM	0x0002
+#define EI_MP_NP_ENOMEM		0x0004
+#define EI_MP_EAGAIN		0x0008
+#define EI_MP_EOTHER		0x0010
+#define EI_MP_NOSPLIT		0x0020
+#define EI_MP_SPLIT_FAIL	0x0040
+#define EI_MP_EAGAIN_PERM	0x0080
+#define EI_MP_EBUSY		0x0100
+
+static unsigned int ei_migrate_pages;
+
+module_param(ei_migrate_pages, uint, 0644);
+
+static bool ei_thp_migration_supported(void)
+{
+	if (ei_migrate_pages & EI_MP_ENOSYS)
+		return false;
+	else
+		return thp_migration_supported();
+}
+
+static int ei_trylock_page(struct page *page)
+{
+	if (ei_migrate_pages & EI_MP_EAGAIN)
+		return 0;
+	return trylock_page(page);
+}
+
+static int ei_split_huge_page_to_list(struct page *page, struct list_head *list)
+{
+	if (ei_migrate_pages & EI_MP_SPLIT_FAIL)
+		return -EBUSY;
+	return split_huge_page_to_list(page, list);
+}
+
 static int expected_page_refs(struct address_space *mapping, struct page *page)
 {
 	int expected_count = 1;
@@ -368,6 +404,9 @@ int folio_migrate_mapping(struct address_space *mapping,
 	if (folio_ref_count(folio) != expected_count)
 		return -EAGAIN;
 
+	if (ei_migrate_pages & EI_MP_EAGAIN_PERM)
+		return -EAGAIN;
+
 	/* No turning back from here */
 	newfolio->index = folio->index;
 	newfolio->mapping = folio->mapping;
@@ -929,7 +968,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 	struct anon_vma *anon_vma = NULL;
 	bool is_lru = !__PageMovable(page);
 
-	if (!trylock_page(page)) {
+	if (!ei_trylock_page(page)) {
 		if (!force || mode == MIGRATE_ASYNC)
 			goto out;
 
@@ -952,6 +991,11 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 		lock_page(page);
 	}
 
+	if (ei_migrate_pages & EI_MP_EBUSY) {
+		rc = -EBUSY;
+		goto out_unlock;
+	}
+
 	if (PageWriteback(page)) {
 		/*
 		 * Only in the case of a full synchronous migration is it
@@ -1086,7 +1130,7 @@ static int unmap_and_move(new_page_t get_new_page,
 	int rc = MIGRATEPAGE_SUCCESS;
 	struct page *newpage = NULL;
 
-	if (!thp_migration_supported() && PageTransHuge(page))
+	if (!ei_thp_migration_supported() && PageTransHuge(page))
 		return -ENOSYS;
 
 	if (page_count(page) == 1) {
@@ -1102,6 +1146,11 @@ static int unmap_and_move(new_page_t get_new_page,
 		goto out;
 	}
 
+	if ((ei_migrate_pages & EI_MP_THP_ENOMEM) && PageTransHuge(page))
+		return -ENOMEM;
+	if ((ei_migrate_pages & EI_MP_NP_ENOMEM) && !PageTransHuge(page))
+		return -ENOMEM;
+
 	newpage = get_new_page(page, private);
 	if (!newpage)
 		return -ENOMEM;
@@ -1305,7 +1354,7 @@ static inline int try_split_thp(struct page *page, struct list_head *split_pages
 	int rc;
 
 	lock_page(page);
-	rc = split_huge_page_to_list(page, split_pages);
+	rc = ei_split_huge_page_to_list(page, split_pages);
 	unlock_page(page);
 
 	return rc;
@@ -1358,6 +1407,9 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	bool nosplit = (reason == MR_NUMA_MISPLACED);
 	bool no_subpage_counting = false;
 
+	if (ei_migrate_pages & EI_MP_NOSPLIT)
+		nosplit = true;
+
 	trace_mm_migrate_pages_start(mode, reason);
 
 thp_subpage_migration:
-- 
2.30.2
------------------------- test-migrate.c -------------------------------------

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <stdbool.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/mman.h>
#include <sys/uio.h>
#include <numa.h>
#include <numaif.h>

#ifndef MADV_FREE
#define MADV_FREE	8	/* free pages only if memory pressure */
#endif

#define ONE_MB		(1024 * 1024)
#define MAP_SIZE	(16 * ONE_MB)
#define THP_SIZE	(2 * ONE_MB)
#define THP_MASK	(THP_SIZE - 1)

#define ERR_EXIT_ON(cond, msg)					\
	do {							\
		int __cond_in_macro = (cond);			\
		if (__cond_in_macro)				\
			error_exit(__cond_in_macro, (msg));	\
	} while (0)

void error_msg(int ret, int nr, int *status, const char *msg)
{
	int i;

	fprintf(stderr, "Error: %s, ret : %d, error: %s\n",
		msg, ret, strerror(errno));

	if (!nr)
		return;
	fprintf(stderr, "status: ");
	for (i = 0; i < nr; i++)
		fprintf(stderr, "%d ", status[i]);
	fprintf(stderr, "\n");
}

void error_exit(int ret, const char *msg)
{
	error_msg(ret, 0, NULL, msg);
	exit(1);
}

void *addr_thp;
void *addr;
char *pn;
char *pn1;
char *pn2;
char *pn3;
void *pages[4];
int status[4];

/* Map MAP_SIZE of anonymous memory; for thp, ask for THP backing. */
void create_map(bool thp)
{
	int ret;
	void *p;

	p = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	ERR_EXIT_ON(p == MAP_FAILED, "mmap");
	if (thp) {
		ret = madvise(p, MAP_SIZE, MADV_HUGEPAGE);
		ERR_EXIT_ON(ret, "advise hugepage");
		addr_thp = p;
	} else {
		addr = p;
	}
}

/*
 * Pick two THP-aligned pages in the THP mapping and two in the normal
 * mapping as migration targets.
 */
void prepare()
{
	int ret;
	struct iovec iov;

	if (addr) {
		munmap(addr_thp, MAP_SIZE);
		munmap(addr, MAP_SIZE);
	}

	create_map(true);
	create_map(false);

	pn = (char *)(((unsigned long)addr_thp + THP_SIZE) & ~THP_MASK);
	pn1 = pn + THP_SIZE;
	pages[0] = pn;
	pages[1] = pn1;
	*pn = 1;

	pn2 = (char *)(((unsigned long)addr + THP_SIZE) & ~THP_MASK);
	pn3 = pn2 + THP_SIZE;
	pages[2] = pn2;
	pages[3] = pn3;

	status[0] = status[1] = status[2] = status[3] = 1024;
}

/* Touch the pages, then try to move all 4 pages to NUMA node 1. */
void test_migrate()
{
	int ret;
	int nodes[4] = { 1, 1, 1, 1 };
	pid_t pid = getpid();

	prepare();
	*pn1 = 1;
	*pn2 = 1;
	*pn3 = 1;
	ret = move_pages(pid, 4, pages, nodes, status, MPOL_MF_MOVE_ALL);
	error_msg(ret, 4, status, "move 4 pages");
}

int main(int argc, char *argv[])
{
	numa_run_on_node(0);

	test_migrate();

	return 0;
}

--------------------- test-migrate.sh ----------------------------

#!/bin/bash

PARAM=/sys/module/migrate/parameters/ei_migrate_pages

# dump the migration-related counters
get_vmstat()
{
	echo ================= $* ================
	cat /proc/vmstat | grep -e '\(pgmigrate\|thp_migration\)'
}

# set the injection mask, run the test, show counters before/after
simple_test()
{
	echo $1 > $PARAM
	shift
	get_vmstat before $*
	./test-migrate
	get_vmstat after $*
}

#define EI_MP_ENOSYS		0x0001
#define EI_MP_THP_ENOMEM	0x0002
#define EI_MP_NP_ENOMEM		0x0004
#define EI_MP_EAGAIN		0x0008
#define EI_MP_EOTHER		0x0010
#define EI_MP_NOSPLIT		0x0020
#define EI_MP_SPLIT_FAIL	0x0040
#define EI_MP_EAGAIN_PERM	0x0080
#define EI_MP_EBUSY		0x0100

simple_test 0x26 ENOMEM

simple_test 0x81 retry THP subpages

simple_test 0xc1 ENOSYS

simple_test 0x101 ENOSYS
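For reference, the masks used by the simple_test calls above decode to the
following combinations of the EI_MP_* bits defined in error-inject.patch:

  # 0x26  = EI_MP_THP_ENOMEM | EI_MP_NP_ENOMEM | EI_MP_NOSPLIT
  # 0x81  = EI_MP_ENOSYS | EI_MP_EAGAIN_PERM
  # 0xc1  = EI_MP_ENOSYS | EI_MP_SPLIT_FAIL | EI_MP_EAGAIN_PERM
  # 0x101 = EI_MP_ENOSYS | EI_MP_EBUSY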