From: "Huang, Ying"
To: Ryan Roberts
Cc: Andrew Morton, David Hildenbrand, Matthew Wilcox, Gao Xiang,
	Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang
Subject: Re: [PATCH v2 2/2] mm: swap: Swap-out small-sized THP without splitting
In-Reply-To: <20231017161302.2518826-3-ryan.roberts@arm.com> (Ryan Roberts's
	message of "Tue, 17 Oct 2023 17:13:02 +0100")
References: <20231017161302.2518826-1-ryan.roberts@arm.com>
	<20231017161302.2518826-3-ryan.roberts@arm.com>
Date: Wed, 18 Oct 2023 14:55:06 +0800
Message-ID: <87r0ls773p.fsf@yhuang6-desk2.ccr.corp.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13)
MIME-Version: 1.0
Content-Type: text/plain; charset=ascii

Ryan Roberts writes:

> The upcoming anonymous small-sized THP feature enables performance
> improvements by allocating large folios for anonymous memory. However
> I've observed that on an arm64 system running a parallel workload (e.g.
> kernel compilation) across many cores, under high memory pressure, the
> speed regresses. This is due to bottlenecking on the increased number
> of TLBIs added due to all the extra folio splitting.
>
> Therefore, solve this regression by adding support for swapping out
> small-sized THP without needing to split the folio, just like is
> already done for PMD-sized THP. This change only applies when
> CONFIG_THP_SWAP is enabled, and when the swap backing store is a
> non-rotating block device. These are the same constraints as for the
> existing PMD-sized THP swap-out support.
>
> Note that no attempt is made to swap-in THP here - this is still done
> page-by-page, like for PMD-sized THP.
>
> The main change here is to improve the swap entry allocator so that it
> can allocate any power-of-2 number of contiguous entries between
> [4, (1 << PMD_ORDER)] (THP cannot support order-1 folios). This is done
> by allocating a cluster for each distinct order and allocating
> sequentially from it until the cluster is full. This ensures that we
> don't need to search the map and we get no fragmentation due to
> alignment padding for different orders in the cluster. If there is no
> current cluster for a given order, we attempt to allocate a free
> cluster from the list. If there are no free clusters, we fail the
> allocation and the caller falls back to splitting the folio and
> allocates individual entries (as per existing PMD-sized THP fallback).
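
Just to double-check that I understand the allocation scheme described
above: per order, entries are handed out sequentially from a cluster
taken off the free list, roughly like the stand-alone toy model below
(plain C, ignoring locking and the per-CPU dimension; every name in it
is made up for illustration and it is not the patch code):

#include <stdio.h>
#include <limits.h>

#define CLUSTER_SIZE	512	/* entries per cluster (SWAPFILE_CLUSTER) */
#define NR_CLUSTERS	4	/* tiny toy "swap device" */
#define MAX_ORDER	9	/* PMD_ORDER in the description above */

static unsigned int next_free_cluster;		/* toy free-cluster list (a counter) */
static unsigned int order_next[MAX_ORDER + 1];	/* per-order cursor, UINT_MAX = none */

/* Allocate 2^order contiguous entries; return offset, or UINT_MAX on failure. */
static unsigned int toy_alloc_large(unsigned int order)
{
	unsigned int nr = 1u << order;
	unsigned int offset;

	if (order < 2 || nr > CLUSTER_SIZE)	/* order-0/1 not handled here */
		return UINT_MAX;

	offset = order_next[order];
	if (offset == UINT_MAX) {		/* no current cluster for this order */
		if (next_free_cluster == NR_CLUSTERS)
			return UINT_MAX;	/* caller would split the folio instead */
		offset = next_free_cluster++ * CLUSTER_SIZE;
	}

	/* Hand out entries sequentially; drop the cluster once it is full. */
	order_next[order] = offset + nr;
	if (order_next[order] % CLUSTER_SIZE == 0)
		order_next[order] = UINT_MAX;

	return offset;
}

int main(void)
{
	for (unsigned int i = 0; i <= MAX_ORDER; i++)
		order_next[i] = UINT_MAX;

	for (int i = 0; i < 4; i++)
		printf("order-4 allocation -> offset %u\n", toy_alloc_large(4));
	printf("order-9 allocation -> offset %u\n", toy_alloc_large(9));
	return 0;
}

If that is the intended behaviour, my comments below are mainly about
where this logic should live and how the reserved entries are accounted.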

> The per-order current clusters are maintained per-cpu using the
> existing percpu_cluster infrastructure. This is done to avoid
> interleaving pages from different tasks, which would prevent IO being
> batched. This is already done for the order-0 allocations so we follow
> the same pattern.
>
> As far as I can tell, this should not cause any extra fragmentation
> concerns, given how similar it is to the existing PMD-sized THP
> allocation mechanism. There could be up to (PMD_ORDER-2) * nr_cpus
> clusters in concurrent use though, which in a pathological case
> (cluster set aside for every order for every cpu and only one huge
> entry allocated from it) would tie up ~12MiB of unused swap entries for
> these high orders (assuming PMD_ORDER=9). In practice, the number of
> orders in use will be small and the amount of swap space reserved is
> very small compared to a typical swap file.
>
> Note that PMD_ORDER is not a compile-time constant on powerpc, so we
> have to allocate the large_next[] array at runtime.
>
> I've run the tests on Ampere Altra (arm64), set up with a 35G block ram
> device as the swap device and from inside a memcg limited to 40G
> memory. I've then run `usemem` from vm-scalability with 70 processes
> (each has its own core), each allocating and writing 1G of memory. I've
> repeated everything 5 times and taken the mean and stdev:
>
> Mean Performance Improvement vs 4K/baseline
>
> | alloc size |            baseline |       + this series |
> |            |  v6.6-rc4+anonfolio |                     |
> |:-----------|--------------------:|--------------------:|
> | 4K Page    |                0.0% |                1.1% |
> | 64K THP    |              -44.1% |                0.9% |
> | 2M THP     |               56.0% |               56.4% |
>
> So with this change, the regression for 64K swap performance goes away.
> Both 4K and 64K benchmarks are now bottlenecked on TLBI performance
> from try_to_unmap_flush_dirty(), on arm64 at least. When using fewer
> cpus in the test, I see up to 2x performance of 64K THP swapping
> compared to 4K.
>
> Signed-off-by: Ryan Roberts
> ---
>  include/linux/swap.h |  6 ++++
>  mm/swapfile.c        | 74 +++++++++++++++++++++++++++++++++++---------
>  mm/vmscan.c          | 10 +++---
>  3 files changed, 71 insertions(+), 19 deletions(-)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index a073366a227c..35cbbe6509a9 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -268,6 +268,12 @@ struct swap_cluster_info {
>  struct percpu_cluster {
>  	struct swap_cluster_info index; /* Current cluster index */
>  	unsigned int next; /* Likely next allocation offset */
> +	unsigned int large_next[]; /*
> +				    * next free offset within current
> +				    * allocation cluster for large folios,
> +				    * or UINT_MAX if no current cluster.
> +				    * Index is (order - 1).
> +				    */
>  };
>
>  struct swap_cluster_list {
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index b83ad77e04c0..625964e53c22 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -987,35 +987,70 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
>  	return n_ret;
>  }
>
> -static int swap_alloc_cluster(struct swap_info_struct *si, swp_entry_t *slot)
> +static int swap_alloc_large(struct swap_info_struct *si, swp_entry_t *slot,
> +			    unsigned int nr_pages)

This looks hacky.  IMO, we should put the allocation logic inside the
percpu_cluster framework.  If the percpu_cluster framework doesn't work
for you, just refactor it first.
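
To be concrete about the direction I mean, something along the
following lines (an interface sketch only, not a working patch;
scan_swap_map_try_ssd_cluster() and struct percpu_cluster are existing
mm/swapfile.c names, but the extra "order" dimension is hypothetical):

/*
 * Sketch of the interface I have in mind (not a working patch): let the
 * existing per-CPU cluster helper in mm/swapfile.c take the allocation
 * order, so order-0 and large-folio allocations share one code path and
 * the per-order state lives next to "next" in struct percpu_cluster.
 */
static bool scan_swap_map_try_ssd_cluster(struct swap_info_struct *si,
					  unsigned long *offset,
					  unsigned long *scan_base,
					  int order);	/* 0 == today's behaviour */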

>  {
> +	int order_idx;
>  	unsigned long idx;
>  	struct swap_cluster_info *ci;
> +	struct percpu_cluster *cluster;
>  	unsigned long offset;
>
>  	/*
>  	 * Should not even be attempting cluster allocations when huge
>  	 * page swap is disabled. Warn and fail the allocation.
>  	 */
> -	if (!IS_ENABLED(CONFIG_THP_SWAP)) {
> +	if (!IS_ENABLED(CONFIG_THP_SWAP) ||
> +	    nr_pages < 4 || nr_pages > SWAPFILE_CLUSTER ||
> +	    !is_power_of_2(nr_pages)) {
>  		VM_WARN_ON_ONCE(1);
>  		return 0;
>  	}
>
> -	if (cluster_list_empty(&si->free_clusters))
> +	/*
> +	 * Not using clusters so unable to allocate large entries.
> +	 */
> +	if (!si->cluster_info)
>  		return 0;
>
> -	idx = cluster_list_first(&si->free_clusters);
> -	offset = idx * SWAPFILE_CLUSTER;
> -	ci = lock_cluster(si, offset);
> -	alloc_cluster(si, idx);
> -	cluster_set_count(ci, SWAPFILE_CLUSTER);
> +	order_idx = ilog2(nr_pages) - 2;
> +	cluster = this_cpu_ptr(si->percpu_cluster);
> +	offset = cluster->large_next[order_idx];
> +
> +	if (offset == UINT_MAX) {
> +		if (cluster_list_empty(&si->free_clusters))
> +			return 0;
> +
> +		idx = cluster_list_first(&si->free_clusters);
> +		offset = idx * SWAPFILE_CLUSTER;
>
> -	memset(si->swap_map + offset, SWAP_HAS_CACHE, SWAPFILE_CLUSTER);
> +		ci = lock_cluster(si, offset);
> +		alloc_cluster(si, idx);
> +		cluster_set_count(ci, SWAPFILE_CLUSTER);
> +
> +		/*
> +		 * If scan_swap_map_slots() can't find a free cluster, it will
> +		 * check si->swap_map directly. To make sure this standby
> +		 * cluster isn't taken by scan_swap_map_slots(), mark the swap
> +		 * entries bad (occupied). (same approach as discard).
> +		 */
> +		memset(si->swap_map + offset + nr_pages, SWAP_MAP_BAD,
> +		       SWAPFILE_CLUSTER - nr_pages);

There's an issue with this solution.  If the free space of the swap
device runs low, it's possible that

- some clusters are put in the percpu_cluster of some CPUs, and the
  swap entries there are marked as used

- there are no free swap entries elsewhere

- nr_swap_pages isn't 0

So, we will still scan the LRU, but swap allocation fails, although
there's still free swap space.

I think that we should follow the method we used for the original
percpu_cluster.  That is, if all free swap entries are in
percpu_cluster, we will start to allocate from percpu_cluster.

> +	} else {
> +		idx = offset / SWAPFILE_CLUSTER;
> +		ci = lock_cluster(si, offset);
> +	}
> +
> +	memset(si->swap_map + offset, SWAP_HAS_CACHE, nr_pages);
>  	unlock_cluster(ci);
> -	swap_range_alloc(si, offset, SWAPFILE_CLUSTER);
> +	swap_range_alloc(si, offset, nr_pages);
>  	*slot = swp_entry(si->type, offset);
>
> +	offset += nr_pages;
> +	if (idx != offset / SWAPFILE_CLUSTER)
> +		offset = UINT_MAX;
> +	cluster->large_next[order_idx] = offset;
> +
>  	return 1;
>  }
>

[snip]

--
Best Regards,
Huang, Ying