From: Byungchul Park <byungchul@sk.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel_team@skhynix.com, akpm@linux-foundation.org,
    ying.huang@intel.com, namit@vmware.com, vernhao@tencent.com,
    mgorman@techsingularity.net, hughd@google.com, willy@infradead.org,
    david@redhat.com, peterz@infradead.org, luto@kernel.org,
    tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
    dave.hansen@linux.intel.com, rjgolo@gmail.com
Subject: [PATCH v8 0/8] Reduce TLB flushes by 94% by improving folio migration
Date: Mon, 19 Feb 2024 15:03:59 +0900
Message-Id: <20240219060407.25254-1-byungchul@sk.com>
Hi everyone,

While I'm working with a tiered memory system, e.g. CXL memory, I have
been facing migration overhead, especially TLB shootdown on promotion
or demotion between different tiers. Most TLB shootdowns on migration
through hinting faults can already be avoided thanks to Huang Ying's
work, commit 4d4b6d66db ("mm,unmap: avoid flushing TLB in batch if PTE
is inaccessible"). See the following link:

   https://lore.kernel.org/lkml/20231115025755.GA29979@system.software.com/

However, that covers only migrations triggered through hinting faults.
It would be much better to have a general mechanism that reduces the
number of TLB flushes and TLB misses and that can ultimately be applied
to any type of migration; for now, though, I have tried it only for
tiering.

I'm suggesting a mechanism called MIGRC, which stands for 'Migration
Read Copy'. It reduces TLB flushes by keeping both the source and
destination folios participating in a migration around until all the
required TLB flushes have been performed, but only if those folios are
not mapped by any PTE entry with write permission. To achieve that (a
rough sketch of this flow follows the evaluation summary below):

   1. For folios mapped only by non-writable TLB entries, skip the TLB
      flush at migration time by keeping both the source and
      destination folios, to be handled later at a better time.

   2. When any non-writable TLB entry changes to writable, e.g. through
      the fault handler, give up the migrc mechanism and perform the
      required TLB flush right away.

I observed a big improvement in the number of TLB flushes and TLB
misses in the following evaluation using XSBench:

   1. itlb flushes were reduced by 93.9%.
   2. dtlb thread flushes were reduced by 43.5%.
   3. stlb flushes were reduced by 24.9%.
   4. dtlb store misses were reduced by 34.2%.
   5. itlb load misses were reduced by 45.5%.
   6. The runtime was reduced by 3.5%.

I believe it would help even more with real workloads.
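To make the two rules above concrete, here is a rough pseudocode sketch
of the intended flow. It is only an illustration, not the actual code
in this series; helper names like folio_mapped_writable(),
flush_tlb_and_free_src(), migrc_keep_folios(), folio_kept_by_migrc()
and migrc_give_up() are hypothetical:

   /* Migration side: decide whether the TLB flush can be deferred. */
   static void migrc_migrate_folio(struct folio *src, struct folio *dst)
   {
           if (folio_mapped_writable(src)) {
                   /* A writable PTE maps the folio: flush as usual. */
                   flush_tlb_and_free_src(src);
           } else {
                   /*
                    * Only non-writable PTEs map the folio: skip the
                    * flush for now and keep both src and dst around so
                    * they can be handled later at a better time.
                    */
                   migrc_keep_folios(src, dst);
           }
   }

   /* Fault side: a non-writable entry is becoming writable. */
   static void migrc_on_write_fault(struct folio *folio)
   {
           /* Give up migrc and perform the pending TLB flush now. */
           if (folio_kept_by_migrc(folio))
                   migrc_give_up(folio);
   }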
---

The measurement result:

   Architecture - x86_64
   QEMU - kvm enabled, host cpu
   Numa - 2 nodes (16 CPUs 1GB, no CPUs 8GB)
   Linux Kernel - v6.7, numa balancing tiering on, demotion enabled
   Benchmark - XSBench -p 100000000 (-p option makes the runtime longer)

   run 'perf stat' using events:
      1) itlb.itlb_flush
      2) tlb_flush.dtlb_thread
      3) tlb_flush.stlb_any
      4) dTLB-load-misses
      5) dTLB-store-misses
      6) iTLB-load-misses

   run 'cat /proc/vmstat' and pick:
      1) numa_pages_migrated
      2) pgmigrate_success
      3) nr_tlb_remote_flush
      4) nr_tlb_remote_flush_received
      5) nr_tlb_local_flush_all
      6) nr_tlb_local_flush_one

BEFORE - mainline v6.7
----------------------

   $ perf stat -a \
           -e itlb.itlb_flush \
           -e tlb_flush.dtlb_thread \
           -e tlb_flush.stlb_any \
           -e dTLB-load-misses \
           -e dTLB-store-misses \
           -e iTLB-load-misses \
           ./XSBench -p 100000000

   Performance counter stats for 'system wide':

      85647229        itlb.itlb_flush
      480981504       tlb_flush.dtlb_thread
      323937200       tlb_flush.stlb_any
      238381632579    dTLB-load-misses
      601514255       dTLB-store-misses
      2974157461      iTLB-load-misses

      2252.883892112 seconds time elapsed

   $ cat /proc/vmstat

      ...
      numa_pages_migrated 12790664
      pgmigrate_success 26835314
      nr_tlb_remote_flush 3031412
      nr_tlb_remote_flush_received 45234862
      nr_tlb_local_flush_all 216584
      nr_tlb_local_flush_one 740940
      ...

AFTER - mainline v6.7 + migrc
-----------------------------

   $ perf stat -a \
           -e itlb.itlb_flush \
           -e tlb_flush.dtlb_thread \
           -e tlb_flush.stlb_any \
           -e dTLB-load-misses \
           -e dTLB-store-misses \
           -e iTLB-load-misses \
           ./XSBench -p 100000000

   Performance counter stats for 'system wide':

      5240261         itlb.itlb_flush
      271581774       tlb_flush.dtlb_thread
      243149389       tlb_flush.stlb_any
      234502983364    dTLB-load-misses
      395673680       dTLB-store-misses
      1620215163      iTLB-load-misses

      2172.283436287 seconds time elapsed

   $ cat /proc/vmstat

      ...
      numa_pages_migrated 14897064
      pgmigrate_success 30825530
      nr_tlb_remote_flush 198290
      nr_tlb_remote_flush_received 2820156
      nr_tlb_local_flush_all 92048
      nr_tlb_local_flush_one 741401
      ...
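As a sanity check, the reduction percentages quoted in the summary can
be reproduced from the raw counters above. Here is a small standalone
userspace program (not part of the series, purely a convenience for
reviewers) that does the arithmetic:

   #include <stdio.h>

   /* Raw counters copied from the two perf stat runs above. */
   static const struct {
           const char *name;
           double before, after;
   } counters[] = {
           { "itlb.itlb_flush",       85647229.0,     5240261.0 },
           { "tlb_flush.dtlb_thread", 480981504.0,    271581774.0 },
           { "tlb_flush.stlb_any",    323937200.0,    243149389.0 },
           { "dTLB-load-misses",      238381632579.0, 234502983364.0 },
           { "dTLB-store-misses",     601514255.0,    395673680.0 },
           { "iTLB-load-misses",      2974157461.0,   1620215163.0 },
   };

   int main(void)
   {
           unsigned long i;

           for (i = 0; i < sizeof(counters) / sizeof(counters[0]); i++)
                   printf("%-22s reduced by %.1f%%\n", counters[i].name,
                          100.0 * (counters[i].before - counters[i].after) /
                          counters[i].before);
           return 0;
   }

For example, itlb.itlb_flush: (85647229 - 5240261) / 85647229 = 93.9%,
matching the first number in the summary.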
---

Changes from v7:

   1. Rewrite the cover letter to explain what the 'migrc' mechanism
      is. (feedback from Andrew Morton)
   2. Supplement the commit message of the patch 'mm: Add APIs to free
      a folio directly to the buddy bypassing pcp'. (feedback from
      Andrew Morton)

Changes from v6:

   1. Fix build errors when CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
      is disabled, by moving the migrc_flush_{start,end}() calls from
      arch code to try_to_unmap_flush() in mm/rmap.c.

Changes from v5:

   1. Fix build errors when CONFIG_MIGRATION is disabled or
      CONFIG_HWPOISON_INJECT is built as a module. (feedback from the
      kernel test robot and Raymond Jay Golo)
   2. Organize the migrc code under two kconfigs, CONFIG_MIGRATION and
      CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH.

Changes from v4:

   1. Rebase on v6.7.
   2. Fix build errors on arm64, which does nothing for TLB flush but
      has CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH enabled. (reported
      by the kernel test robot)
   3. Don't use any page flag. The system will give up the migrc
      mechanism more often as a result, but that's okay; the final
      improvement is good enough.
   4. Instead, optimize the full TLB flush (arch_tlbbatch_flush()) by
      excluding redundant CPUs from the TLB flush.

Changes from v3:

   1. Drop the kconfig CONFIG_MIGRC and remove the sysctl knob
      migrc_enable. (feedback from Nadav)
   2. Remove the optimization that skipped CPUs that had already
      performed the needed TLB flushes for any reason, because I
      couldn't measure a performance difference with and without it.
      (feedback from Nadav)
   3. Minimize arch-specific code. While at it, move all the migrc
      declarations and inline functions from include/linux/mm.h to
      mm/internal.h. (feedback from Dave Hansen, Nadav)
   4. Separate the part that pauses migrc when the system is under
      high memory pressure into another patch. (feedback from Nadav)
   5. Rename:
         a. arch_tlbbatch_clean() to arch_tlbbatch_clear(),
         b. tlb_ubc_nowr to tlb_ubc_ro,
         c. migrc_try_flush_free_folios() to migrc_flush_free_folios(),
         d. migrc_stop to migrc_pause.
      (feedback from Nadav)
   6. Use the ->lru list_head instead of introducing a new llist_head.
      (feedback from Nadav)
   7. Use non-atomic page-flag operations where it is safe. (feedback
      from Nadav)
   8. Use the stack instead of keeping a pointer to a 'struct
      migrc_req' in struct task_struct for local manipulation.
      (feedback from Nadav)
   9. Replace a lot of simple functions with inline functions placed
      in a header, mm/internal.h. (feedback from Nadav)
   10. Add sufficient additional comments. (feedback from Nadav)
   11. Remove a lot of wrapper functions. (feedback from Nadav)

Changes from RFC v2:

   1. Remove the additional field in struct page. To do that, union
      migrc's list with the lru field and add a page flag. I know a
      page flag is something we don't like to add, but there was no
      choice, because migrc has to distinguish folios under its
      control from others. Instead, migrc is restricted to 64-bit
      systems to mitigate the impact.
   2. Remove the meaningless internal object allocator that I had
      introduced to minimize the impact on the system; a ton of tests
      showed there was no difference.
   3. Stop migrc from working when the system is under high memory
      pressure, e.g. about to perform direct reclaim. In conditions
      where the swap mechanism is heavily used, the system suffered a
      regression without this control.
   4. Exclude folios with pte_dirty() == true from migrc's interest so
      that migrc can work more simply.
   5. Combine several tightly coupled patches into one.
   6. Add sufficient comments for better review.
   7. Manage migrc's requests per node (instead of globally).
   8. Add the TLB miss improvement to the commit messages.
   9. Test with more CPUs (4 -> 16) to see a bigger improvement.

Changes from RFC:

   1. Fix a bug triggered when a destination folio of a previous
      migration becomes a source folio of the next migration before
      the folio has been handled properly, so that the folio can take
      part in another migration. There was an inconsistency in the
      folio's state. Fixed it.
   2. Split the patch set into more pieces so that folks can review it
      better. (feedback from Nadav Amit)
   3. Fix a wrong usage of a barrier, e.g. smp_mb__after_atomic().
      (feedback from Nadav Amit)
   4. Try to add sufficient comments to explain the patch set better.
      (feedback from Nadav Amit)
Byungchul Park (8):
  x86/tlb: Add APIs manipulating tlb batch's arch data
  arm64: tlbflush: Add APIs manipulating tlb batch's arch data
  mm/rmap: Recognize read-only TLB entries during batched TLB flush
  x86/tlb, mm/rmap: Separate arch_tlbbatch_clear() out of
    arch_tlbbatch_flush()
  mm: Separate move/undo doing on folio list from migrate_pages_batch()
  mm: Add APIs to free a folio directly to the buddy bypassing pcp
  mm: Defer TLB flush by keeping both src and dst folios at migration
  mm: Pause migrc mechanism at high memory pressure

 arch/arm64/include/asm/tlbflush.h |  19 ++
 arch/x86/include/asm/tlbflush.h   |  18 ++
 arch/x86/mm/tlb.c                 |   2 -
 include/linux/mm.h                |  23 ++
 include/linux/mmzone.h            |   7 +
 include/linux/sched.h             |   9 +
 mm/internal.h                     |  78 ++++++
 mm/memory.c                       |   8 +
 mm/migrate.c                      | 411 ++++++++++++++++++++++++++----
 mm/page_alloc.c                   |  34 ++-
 mm/rmap.c                         |  40 ++-
 mm/swap.c                         |   7 +
 12 files changed, 597 insertions(+), 59 deletions(-)

base-commit: 0dd3ee31125508cd67f7e7172247f05b7fd1753a
-- 
2.17.1