From: Byungchul Park
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, ying.huang@intel.com,
	namit@vmware.com, vernhao@tencent.com, mgorman@techsingularity.net,
	hughd@google.com, willy@infradead.org, david@redhat.com,
	peterz@infradead.org, luto@kernel.org, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com,
	rjgolo@gmail.com
Subject: [PATCH v7 5/8] mm: Separate move/undo doing on folio list from migrate_pages_batch()
Date: Mon, 22 Jan 2024 10:00:37 +0900
Message-Id: <20240122010040.74346-6-byungchul@sk.com>
In-Reply-To: <20240122010040.74346-1-byungchul@sk.com>
References: <20240122010040.74346-1-byungchul@sk.com>

Functionally, no change. This is preparation for the migrc mechanism,
which needs to use separate folio lists for its own handling during
migration. Refactor migrate_pages_batch() by separating the move and
undo parts that operate on a folio list into their own functions.
Signed-off-by: Byungchul Park
---
 mm/migrate.c | 134 +++++++++++++++++++++++++++++++--------------------
 1 file changed, 83 insertions(+), 51 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 397f2a6e34cb..bbe1ecef4956 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1611,6 +1611,81 @@ static int migrate_hugetlbs(struct list_head *from, new_folio_t get_new_folio,
 	return nr_failed;
 }
 
+static void migrate_folios_move(struct list_head *src_folios,
+		struct list_head *dst_folios,
+		free_folio_t put_new_folio, unsigned long private,
+		enum migrate_mode mode, int reason,
+		struct list_head *ret_folios,
+		struct migrate_pages_stats *stats,
+		int *retry, int *thp_retry, int *nr_failed,
+		int *nr_retry_pages)
+{
+	struct folio *folio, *folio2, *dst, *dst2;
+	bool is_thp;
+	int nr_pages;
+	int rc;
+
+	dst = list_first_entry(dst_folios, struct folio, lru);
+	dst2 = list_next_entry(dst, lru);
+	list_for_each_entry_safe(folio, folio2, src_folios, lru) {
+		is_thp = folio_test_large(folio) && folio_test_pmd_mappable(folio);
+		nr_pages = folio_nr_pages(folio);
+
+		cond_resched();
+
+		rc = migrate_folio_move(put_new_folio, private,
+				folio, dst, mode,
+				reason, ret_folios);
+		/*
+		 * The rules are:
+		 *	Success: folio will be freed
+		 *	-EAGAIN: stay on the unmap_folios list
+		 *	Other errno: put on ret_folios list
+		 */
+		switch(rc) {
+		case -EAGAIN:
+			*retry += 1;
+			*thp_retry += is_thp;
+			*nr_retry_pages += nr_pages;
+			break;
+		case MIGRATEPAGE_SUCCESS:
+			stats->nr_succeeded += nr_pages;
+			stats->nr_thp_succeeded += is_thp;
+			break;
+		default:
+			*nr_failed += 1;
+			stats->nr_thp_failed += is_thp;
+			stats->nr_failed_pages += nr_pages;
+			break;
+		}
+		dst = dst2;
+		dst2 = list_next_entry(dst, lru);
+	}
+}
+
+static void migrate_folios_undo(struct list_head *src_folios,
+		struct list_head *dst_folios,
+		free_folio_t put_new_folio, unsigned long private,
+		struct list_head *ret_folios)
+{
+	struct folio *folio, *folio2, *dst, *dst2;
+
+	dst = list_first_entry(dst_folios, struct folio, lru);
+	dst2 = list_next_entry(dst, lru);
+	list_for_each_entry_safe(folio, folio2, src_folios, lru) {
+		int old_page_state = 0;
+		struct anon_vma *anon_vma = NULL;
+
+		__migrate_folio_extract(dst, &old_page_state, &anon_vma);
+		migrate_folio_undo_src(folio, old_page_state & PAGE_WAS_MAPPED,
+				anon_vma, true, ret_folios);
+		list_del(&dst->lru);
+		migrate_folio_undo_dst(dst, true, put_new_folio, private);
+		dst = dst2;
+		dst2 = list_next_entry(dst, lru);
+	}
+}
+
 /*
  * migrate_pages_batch() first unmaps folios in the from list as many as
  * possible, then move the unmapped folios.
@@ -1633,7 +1708,7 @@ static int migrate_pages_batch(struct list_head *from,
 	int pass = 0;
 	bool is_thp = false;
 	bool is_large = false;
-	struct folio *folio, *folio2, *dst = NULL, *dst2;
+	struct folio *folio, *folio2, *dst = NULL;
 	int rc, rc_saved = 0, nr_pages;
 	LIST_HEAD(unmap_folios);
 	LIST_HEAD(dst_folios);
@@ -1769,42 +1844,11 @@ static int migrate_pages_batch(struct list_head *from,
 		thp_retry = 0;
 		nr_retry_pages = 0;
 
-		dst = list_first_entry(&dst_folios, struct folio, lru);
-		dst2 = list_next_entry(dst, lru);
-		list_for_each_entry_safe(folio, folio2, &unmap_folios, lru) {
-			is_thp = folio_test_large(folio) && folio_test_pmd_mappable(folio);
-			nr_pages = folio_nr_pages(folio);
-
-			cond_resched();
-
-			rc = migrate_folio_move(put_new_folio, private,
-					folio, dst, mode,
-					reason, ret_folios);
-			/*
-			 * The rules are:
-			 *	Success: folio will be freed
-			 *	-EAGAIN: stay on the unmap_folios list
-			 *	Other errno: put on ret_folios list
-			 */
-			switch(rc) {
-			case -EAGAIN:
-				retry++;
-				thp_retry += is_thp;
-				nr_retry_pages += nr_pages;
-				break;
-			case MIGRATEPAGE_SUCCESS:
-				stats->nr_succeeded += nr_pages;
-				stats->nr_thp_succeeded += is_thp;
-				break;
-			default:
-				nr_failed++;
-				stats->nr_thp_failed += is_thp;
-				stats->nr_failed_pages += nr_pages;
-				break;
-			}
-			dst = dst2;
-			dst2 = list_next_entry(dst, lru);
-		}
+		/* Move the unmapped folios */
+		migrate_folios_move(&unmap_folios, &dst_folios,
+				put_new_folio, private, mode, reason,
+				ret_folios, stats, &retry, &thp_retry,
+				&nr_failed, &nr_retry_pages);
 	}
 	nr_failed += retry;
 	stats->nr_thp_failed += thp_retry;
@@ -1813,20 +1857,8 @@ static int migrate_pages_batch(struct list_head *from,
 	rc = rc_saved ? : nr_failed;
 out:
 	/* Cleanup remaining folios */
-	dst = list_first_entry(&dst_folios, struct folio, lru);
-	dst2 = list_next_entry(dst, lru);
-	list_for_each_entry_safe(folio, folio2, &unmap_folios, lru) {
-		int old_page_state = 0;
-		struct anon_vma *anon_vma = NULL;
-
-		__migrate_folio_extract(dst, &old_page_state, &anon_vma);
-		migrate_folio_undo_src(folio, old_page_state & PAGE_WAS_MAPPED,
-				anon_vma, true, ret_folios);
-		list_del(&dst->lru);
-		migrate_folio_undo_dst(dst, true, put_new_folio, private);
-		dst = dst2;
-		dst2 = list_next_entry(dst, lru);
-	}
+	migrate_folios_undo(&unmap_folios, &dst_folios,
+			put_new_folio, private, ret_folios);
 	return rc;
 }
 
-- 
2.17.1