From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton,
	"Matthew Wilcox (Oracle)", Hugh Dickins, Ryan Roberts,
	Yin Fengwei, Mike Kravetz, Muchun Song, Peter Xu
Subject: [PATCH RFC 00/39] mm/rmap: interface overhaul
Date: Mon, 4 Dec 2023 15:21:07 +0100
Message-ID: <20231204142146.91437-1-david@redhat.com>

Based on mm-stable from a couple of days ago.

This series proposes an overhaul of our rmap interface, getting rid of
the "bool compound" / RMAP_COMPOUND parameter with the goal of making
the interface less error-prone, more future-proof, and more natural to
extend to "batching". Also, this converts the interface to always
consume folio+subpage, which speeds up operations on large folios.

Further, this series adds PTE-batching variants for 4 rmap functions,
whereby only folio_add_anon_rmap_ptes() is actually used for batching
in this series, when PTE-remapping a PMD-mapped THP.

Ryan has series where we would make use of folio_remove_rmap_ptes() [1]
-- he currently carries his own batching variant -- and
folio_try_dup_anon_rmap_ptes()/folio_dup_file_rmap_ptes() [2]. There is
some overlap with both series (and some other work, like multi-size
THP [3]), so that will need some coordination, and likely a stepwise
inclusion.

I got that started [4], but it made sense to show the whole picture.
The patches of [4] are contained in here, with one additional patch
added ("mm/rmap: introduce and use hugetlb_try_share_anon_rmap()") and
some slight patch description changes.

In general, RMAP batching is an important optimization for PTE-mapped
THP, especially once we want to move towards a total mapcount or
further, as shown with my WIP patches on "mapped shared vs. mapped
exclusively" [5]. The rmap batching part of [5] is also contained here
in slightly reworked form [and I found a bug due to the "compound"
parameter handling in these patches that should be fixed here :) ].

This series performs a lot of folio conversion that could be separated
out if there is a good reason. Most of the added LOC in the diff are
only due to documentation.

As we're moving to a pte/pmd interface where we clearly express the
mapping granularity we are dealing with, we first get the remainder of
hugetlb out of the way, as it is special and expected to remain
special: it treats everything as a "single logical PTE" and currently
only allows entire mappings. Even if we'd ever support partial
mappings, I strongly assume the interface and implementation will still
differ heavily: hopefully we can avoid working on subpages/subpage
mapcounts completely and only add a "count" parameter for them to
enable batching.

New (extended) hugetlb interface that operates on an entire folio:
* hugetlb_add_new_anon_rmap() -> Already existed
* hugetlb_add_anon_rmap() -> Already existed
* hugetlb_try_dup_anon_rmap()
* hugetlb_try_share_anon_rmap()
* hugetlb_add_file_rmap()
* hugetlb_remove_rmap()

New "ordinary" interface for small folios / THP:
* folio_add_new_anon_rmap() -> Already existed
* folio_add_anon_rmap_[pte|ptes|pmd]()
* folio_try_dup_anon_rmap_[pte|ptes|pmd]()
* folio_try_share_anon_rmap_[pte|pmd]()
* folio_add_file_rmap_[pte|ptes|pmd]()
* folio_dup_file_rmap_[pte|ptes|pmd]()
* folio_remove_rmap_[pte|ptes|pmd]()

folio_add_new_anon_rmap() will always map at the biggest granularity
possible (currently, a single PMD to cover a PMD-sized THP). Could be
extended if ever required.

In the future, we might want "_pud" variants and eventually "_pmds"
variants for batching. Further, if hugepd is ever a thing outside
hugetlb code, we might want some variants for that. All stuff for the
distant future.
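Conceptually, the caller-side difference when PTE-remapping a
PMD-mapped THP looks roughly like the following. This is a simplified
sketch for this cover letter, assuming the usual folio/page/vma/addr
locals are at hand; the parameter lists of the new helpers are
illustrative shorthand based on the names above, not the final
signatures:

    /*
     * Old interface (simplified): one rmap call per subpage, with
     * compound handling conveyed via a separate bool/RMAP_COMPOUND
     * parameter on the other rmap functions.
     */
    for (i = 0; i < HPAGE_PMD_NR; i++)
            page_add_anon_rmap(page + i, vma, addr + i * PAGE_SIZE,
                               RMAP_NONE);

    /*
     * New interface (sketch, parameter order illustrative only): a
     * single PTE-batched call that covers all subpages of the folio.
     */
    folio_add_anon_rmap_ptes(folio, page, HPAGE_PMD_NR, vma, addr,
                             RMAP_NONE);

The _pte() and _pmd() variants then spell out the single-PTE and
PMD-mapping cases explicitly, so callers no longer pass a "compound"
parameter at all.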
I ran some simple microbenchmarks from [5] on an Intel(R) Xeon(R)
Silver 4210R: munmap(), fork(), COW, MADV_DONTNEED on each PTE ... and
PTE-remapping PMD-mapped THPs on 1 GiB of memory.

For small folios, there is barely a change (< 1% performance
improvement), whereby fork() still stands out with a 0.74% performance
improvement, but it might just be noise. Folio optimizations don't help
that much with small folios.

For PTE-mapped THP:
* PTE-remapping a PMD-mapped THP is more than 10% faster.
  -> RMAP batching
* fork() is more than 4% faster.
  -> folio conversion
* MADV_DONTNEED is 2% faster.
  -> folio conversion
* COW by writing only a single byte on a COW-shared PTE
  -> folio conversion
* munmap() is only slightly faster (< 1%).

[1] https://lkml.kernel.org/r/20230810103332.3062143-1-ryan.roberts@arm.com
[2] https://lkml.kernel.org/r/20231204105440.61448-1-ryan.roberts@arm.com
[3] https://lkml.kernel.org/r/20231204102027.57185-1-ryan.roberts@arm.com
[4] https://lkml.kernel.org/r/20231128145205.215026-1-david@redhat.com
[5] https://lkml.kernel.org/r/20231124132626.235350-1-david@redhat.com

Cc: Andrew Morton
Cc: "Matthew Wilcox (Oracle)"
Cc: Hugh Dickins
Cc: Ryan Roberts
Cc: Yin Fengwei
Cc: Mike Kravetz
Cc: Muchun Song
Cc: Peter Xu

David Hildenbrand (39):
  mm/rmap: rename hugepage_add* to hugetlb_add*
  mm/rmap: introduce and use hugetlb_remove_rmap()
  mm/rmap: introduce and use hugetlb_add_file_rmap()
  mm/rmap: introduce and use hugetlb_try_dup_anon_rmap()
  mm/rmap: introduce and use hugetlb_try_share_anon_rmap()
  mm/rmap: add hugetlb sanity checks
  mm/rmap: convert folio_add_file_rmap_range() into
    folio_add_file_rmap_[pte|ptes|pmd]()
  mm/memory: page_add_file_rmap() -> folio_add_file_rmap_[pte|pmd]()
  mm/huge_memory: page_add_file_rmap() -> folio_add_file_rmap_pmd()
  mm/migrate: page_add_file_rmap() -> folio_add_file_rmap_pte()
  mm/userfaultfd: page_add_file_rmap() -> folio_add_file_rmap_pte()
  mm/rmap: remove page_add_file_rmap()
  mm/rmap: factor out adding folio mappings into __folio_add_rmap()
  mm/rmap: introduce folio_add_anon_rmap_[pte|ptes|pmd]()
  mm/huge_memory: batch rmap operations in __split_huge_pmd_locked()
  mm/huge_memory: page_add_anon_rmap() -> folio_add_anon_rmap_pmd()
  mm/migrate: page_add_anon_rmap() -> folio_add_anon_rmap_pte()
  mm/ksm: page_add_anon_rmap() -> folio_add_anon_rmap_pte()
  mm/swapfile: page_add_anon_rmap() -> folio_add_anon_rmap_pte()
  mm/memory: page_add_anon_rmap() -> folio_add_anon_rmap_pte()
  mm/rmap: remove page_add_anon_rmap()
  mm/rmap: remove RMAP_COMPOUND
  mm/rmap: introduce folio_remove_rmap_[pte|ptes|pmd]()
  kernel/events/uprobes: page_remove_rmap() -> folio_remove_rmap_pte()
  mm/huge_memory: page_remove_rmap() -> folio_remove_rmap_pmd()
  mm/khugepaged: page_remove_rmap() -> folio_remove_rmap_pte()
  mm/ksm: page_remove_rmap() -> folio_remove_rmap_pte()
  mm/memory: page_remove_rmap() -> folio_remove_rmap_pte()
  mm/migrate_device: page_remove_rmap() -> folio_remove_rmap_pte()
  mm/rmap: page_remove_rmap() -> folio_remove_rmap_pte()
  Documentation: stop referring to page_remove_rmap()
  mm/rmap: remove page_remove_rmap()
  mm/rmap: convert page_dup_file_rmap() to
    folio_dup_file_rmap_[pte|ptes|pmd]()
  mm/rmap: introduce folio_try_dup_anon_rmap_[pte|ptes|pmd]()
  mm/huge_memory: page_try_dup_anon_rmap() ->
    folio_try_dup_anon_rmap_pmd()
  mm/memory: page_try_dup_anon_rmap() -> folio_try_dup_anon_rmap_pte()
  mm/rmap: remove page_try_dup_anon_rmap()
  mm: convert page_try_share_anon_rmap() to
    folio_try_share_anon_rmap_[pte|pmd]()
  mm/rmap: rename COMPOUND_MAPPED to ENTIRELY_MAPPED

 Documentation/mm/transhuge.rst       |   4 +-
 Documentation/mm/unevictable-lru.rst |   4 +-
 include/linux/mm.h                   |   6 +-
 include/linux/rmap.h                 | 380 +++++++++++++++++++-----
 kernel/events/uprobes.c              |   2 +-
 mm/gup.c                             |   2 +-
 mm/huge_memory.c                     |  85 +++---
 mm/hugetlb.c                         |  21 +-
 mm/internal.h                        |  12 +-
 mm/khugepaged.c                      |  17 +-
 mm/ksm.c                             |  15 +-
 mm/memory-failure.c                  |   4 +-
 mm/memory.c                          |  60 ++--
 mm/migrate.c                         |  12 +-
 mm/migrate_device.c                  |  41 +--
 mm/mmu_gather.c                      |   2 +-
 mm/rmap.c                            | 422 ++++++++++++++++-----------
 mm/swapfile.c                        |   2 +-
 mm/userfaultfd.c                     |   2 +-
 19 files changed, 709 insertions(+), 384 deletions(-)

-- 
2.41.0