From: Huang Ying
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
    Xin Hao, Zi Yan, Yang Shi, Baolin Wang, Oscar Salvador,
    Matthew Wilcox, Bharata B Rao, Alistair Popple, Minchan Kim,
    Mike Kravetz, Hyeonggon Yoo <42.hyeyoo@gmail.com>
Subject: [PATCH -v5 8/9] migrate_pages: batch flushing TLB
Date: Mon, 13 Feb 2023 20:34:43 +0800
Message-Id: <20230213123444.155149-9-ying.huang@intel.com>
In-Reply-To: <20230213123444.155149-1-ying.huang@intel.com>
References: <20230213123444.155149-1-ying.huang@intel.com>

TLB flushing can cost quite a few CPU cycles during folio migration in
some situations, for example, when migrating a folio of a process with
multiple active threads that run on multiple CPUs.

After batching the _unmap and _move stages in migrate_pages(), the TLB
flushing can be batched easily with the existing TLB flush batching
mechanism.  This patch implements that.
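In outline, the batched flushing works as follows (a simplified sketch
of the flow in the diff below; the retry loops, error paths, and the
large folio handling are omitted):

	/* unmap pass: clear the PTEs but defer the TLB flush */
	for each folio on the migration list:
		try_to_migrate(folio, TTU_BATCH_FLUSH);
		/* -> ptep_get_and_clear() + set_tlb_ubc_flush_pending() */

	/* one flush covers every folio unmapped above */
	try_to_unmap_flush();

	/* move pass: copy the folios and remove the migration PTEs */
	for each unmapped folio:
		move the folio to its newly allocated page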
We use the following test case to test the patch.  On a 2-socket Intel
server:

- Run the pmbench memory accessing benchmark.

- Run `migratepages` to migrate the pages of pmbench between node 0 and
  node 1 back and forth.

With the patch, the number of TLB flushing IPIs is reduced by 99.1%
during the test, and the number of pages migrated successfully per
second increases by 291.7%.

Haoxin helped to test the patchset on an ARM64 server with 128 cores and
2 NUMA nodes.  The test results show that the page migration performance
increases by up to 78%.

NOTE: TLB flushing is batched only for normal folios, not for THP
folios, because the TLB flushing overhead per THP folio is much lower
than that per normal folio (about 1/512 on the x86 platform, where a
2MB THP covers 512 4KB base pages).

Signed-off-by: "Huang, Ying"
Tested-by: Xin Hao
Reviewed-by: Zi Yan
Reviewed-by: Xin Hao
Cc: Yang Shi
Cc: Baolin Wang
Cc: Oscar Salvador
Cc: Matthew Wilcox
Cc: Bharata B Rao
Cc: Alistair Popple
Cc: Minchan Kim
Cc: Mike Kravetz
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/migrate.c |  5 ++++-
 mm/rmap.c    | 20 +++++++++++++++++---
 2 files changed, 21 insertions(+), 4 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 00713ccb6643..2fa420e4f68c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1248,7 +1248,7 @@ static int migrate_folio_unmap(new_page_t get_new_page, free_page_t put_new_page
 		/* Establish migration ptes */
 		VM_BUG_ON_FOLIO(folio_test_anon(src) && !folio_test_ksm(src) &&
 			       !anon_vma, src);
-		try_to_migrate(src, 0);
+		try_to_migrate(src, TTU_BATCH_FLUSH);
 		page_was_mapped = 1;
 	}
 
@@ -1806,6 +1806,9 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 	stats->nr_thp_failed += thp_retry;
 	stats->nr_failed_pages += nr_retry_pages;
 move:
+	/* Flush TLBs for all unmapped folios */
+	try_to_unmap_flush();
+
 	retry = 1;
 	for (pass = 0;
 	     pass < NR_MAX_MIGRATE_PAGES_RETRY && (retry || large_retry);
diff --git a/mm/rmap.c b/mm/rmap.c
index 8287f2cc327d..15ae24585fc4 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1952,7 +1952,21 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 		} else {
 			flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
 			/* Nuke the page table entry. */
-			pteval = ptep_clear_flush(vma, address, pvmw.pte);
+			if (should_defer_flush(mm, flags)) {
+				/*
+				 * We clear the PTE but do not flush so potentially
+				 * a remote CPU could still be writing to the folio.
+				 * If the entry was previously clean then the
+				 * architecture must guarantee that a clear->dirty
+				 * transition on a cached TLB entry is written through
+				 * and traps if the PTE is unmapped.
+				 */
+				pteval = ptep_get_and_clear(mm, address, pvmw.pte);
+
+				set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
+			} else {
+				pteval = ptep_clear_flush(vma, address, pvmw.pte);
+			}
 		}
 
 		/* Set the dirty flag on the folio now the pte is gone. */
@@ -2124,10 +2138,10 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags)
 
 	/*
 	 * Migration always ignores mlock and only supports TTU_RMAP_LOCKED and
-	 * TTU_SPLIT_HUGE_PMD and TTU_SYNC flags.
+	 * TTU_SPLIT_HUGE_PMD, TTU_SYNC, and TTU_BATCH_FLUSH flags.
 	 */
 	if (WARN_ON_ONCE(flags & ~(TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
-					TTU_SYNC)))
+					TTU_SYNC | TTU_BATCH_FLUSH)))
 		return;
 
 	if (folio_is_zone_device(folio) &&
-- 
2.35.1