From: "Huang, Ying"
To: David Hildenbrand
Cc: Ryan Roberts, Andrew Morton, Matthew Wilcox, Gao Xiang, Yu Zhao,
	Yang Shi, Michal Hocko, Kefeng Wang, Barry Song <21cnbao@gmail.com>,
	Chris Li, Lance Yang, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v6 4/6] mm: swap: Allow storage of all mTHP orders
In-Reply-To: (David Hildenbrand's message of "Fri, 5 Apr 2024 12:38:10 +0200")
References: <20240403114032.1162100-1-ryan.roberts@arm.com>
	<20240403114032.1162100-5-ryan.roberts@arm.com>
Date: Sun, 07 Apr 2024 14:02:16 +0800
Message-ID: <87edbhaexj.fsf@yhuang6-desk2.ccr.corp.intel.com>

David Hildenbrand writes:

> On 03.04.24 13:40, Ryan Roberts wrote:
>> Multi-size THP enables performance improvements by allocating large,
>> pte-mapped folios for anonymous memory. However, I've observed that on
>> an arm64 system running a parallel workload (e.g. kernel compilation)
>> across many cores, under high memory pressure, the speed regresses. This
>> is due to bottlenecking on the increased number of TLBIs added due to
>> all the extra folio splitting when the large folios are swapped out.
>>
>> Therefore, solve this regression by adding support for swapping out mTHP
>> without needing to split the folio, just like is already done for
>> PMD-sized THP. This change only applies when CONFIG_THP_SWAP is enabled,
>> and when the swap backing store is a non-rotating block device. These
>> are the same constraints as for the existing PMD-sized THP swap-out
>> support.
>>
>> Note that no attempt is made to swap-in (m)THP here - this is still done
>> page-by-page, like for PMD-sized THP. But swapping-out mTHP is a
>> prerequisite for swapping-in mTHP.
>>
>> The main change here is to improve the swap entry allocator so that it
>> can allocate any power-of-2 number of contiguous entries between
>> [1, (1 << PMD_ORDER)]. This is done by allocating a cluster for each
>> distinct order and allocating sequentially from it until the cluster is
>> full. This ensures that we don't need to search the map and we get no
>> fragmentation due to alignment padding for different orders in the
>> cluster. If there is no current cluster for a given order, we attempt to
>> allocate a free cluster from the list. If there are no free clusters, we
>> fail the allocation and the caller can fall back to splitting the folio
>> and allocating individual entries (as per existing PMD-sized THP
>> fallback).
>>
>> The per-order current clusters are maintained per-cpu using the existing
>> infrastructure. This is done to avoid interleaving pages from different
>> tasks, which would prevent IO being batched. This is already done for
>> the order-0 allocations so we follow the same pattern.
>>
>> As is done for order-0 per-cpu clusters, the scanner now can steal
>> order-0 entries from any per-cpu-per-order reserved cluster. This
>> ensures that when the swap file is getting full, space doesn't get tied
>> up in the per-cpu reserves.
>>
>> This change only modifies swap to be able to accept any order mTHP. It
>> doesn't change the callers to elide doing the actual split. That will be
>> done in separate changes.
>>
>> Reviewed-by: "Huang, Ying"
>> Signed-off-by: Ryan Roberts
>> ---
>>  include/linux/swap.h |  10 ++-
>>  mm/swap_slots.c      |   6 +-
>>  mm/swapfile.c        | 175 ++++++++++++++++++++++++-------------------
>>  3 files changed, 109 insertions(+), 82 deletions(-)
>>
>> diff --git a/include/linux/swap.h b/include/linux/swap.h
>> index 5e1e4f5bf0cb..11c53692f65f 100644
>> --- a/include/linux/swap.h
>> +++ b/include/linux/swap.h
>> @@ -268,13 +268,19 @@ struct swap_cluster_info {
>>   */
>>  #define SWAP_NEXT_INVALID	0
>>  
>> +#ifdef CONFIG_THP_SWAP
>> +#define SWAP_NR_ORDERS		(PMD_ORDER + 1)
>> +#else
>> +#define SWAP_NR_ORDERS		1
>> +#endif
>> +
>>  /*
>>   * We assign a cluster to each CPU, so each CPU can allocate swap entry from
>>   * its own cluster and swapout sequentially. The purpose is to optimize swapout
>>   * throughput.
>>   */
>>  struct percpu_cluster {
>> -	unsigned int next; /* Likely next allocation offset */
>> +	unsigned int next[SWAP_NR_ORDERS]; /* Likely next allocation offset */
>>  };
>>  
>>  struct swap_cluster_list {
>> @@ -471,7 +477,7 @@ swp_entry_t folio_alloc_swap(struct folio *folio);
>>  bool folio_free_swap(struct folio *folio);
>>  void put_swap_folio(struct folio *folio, swp_entry_t entry);
>>  extern swp_entry_t get_swap_page_of_type(int);
>> -extern int get_swap_pages(int n, swp_entry_t swp_entries[], int entry_size);
>> +extern int get_swap_pages(int n, swp_entry_t swp_entries[], int order);
>>  extern int add_swap_count_continuation(swp_entry_t, gfp_t);
>>  extern void swap_shmem_alloc(swp_entry_t);
>>  extern int swap_duplicate(swp_entry_t);
>> diff --git a/mm/swap_slots.c b/mm/swap_slots.c
>> index 53abeaf1371d..13ab3b771409 100644
>> --- a/mm/swap_slots.c
>> +++ b/mm/swap_slots.c
>> @@ -264,7 +264,7 @@ static int refill_swap_slots_cache(struct swap_slots_cache *cache)
>>  	cache->cur = 0;
>>  	if (swap_slot_cache_active)
>>  		cache->nr = get_swap_pages(SWAP_SLOTS_CACHE_SIZE,
>> -					   cache->slots, 1);
>> +					   cache->slots, 0);
>>  
>>  	return cache->nr;
>>  }
>> @@ -311,7 +311,7 @@ swp_entry_t folio_alloc_swap(struct folio *folio)
>>  
>>  	if (folio_test_large(folio)) {
>>  		if (IS_ENABLED(CONFIG_THP_SWAP))
>> -			get_swap_pages(1, &entry, folio_nr_pages(folio));
>> +			get_swap_pages(1, &entry, folio_order(folio));
>
> The only comment I have is that this nr_pages -> order conversion adds
> a bit of noise to this patch.
>
> AFAIKS, it's primarily only required for "cluster->next[order]",
> everything else doesn't really require the order.
>
> I'd just have split that out into a separate patch, or simply
> converted nr_pages -> order where required.
>
> Nothing jumped at me, but I'm not an expert on that code, so I'm
> mostly trusting the others ;)

The nr_pages -> order conversion replaces ilog2(nr_pages) with
(1 << order).
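
For anyone skimming the thread, the conversion being discussed rests on
the identity order = ilog2(nr_pages) and nr_pages = 1 << order for
power-of-2 folio sizes. A tiny, hypothetical user-space illustration
(plain C, not kernel code; the helper names below are invented):

	#include <stdio.h>

	/* Hypothetical helpers illustrating the nr_pages <-> order mapping. */
	static unsigned int order_from_nr_pages(unsigned long nr_pages)
	{
		return __builtin_ctzl(nr_pages);	/* ilog2() for power-of-2 values */
	}

	static unsigned long nr_pages_from_order(unsigned int order)
	{
		return 1UL << order;
	}

	int main(void)
	{
		/* e.g. a 64 KiB mTHP folio with 4 KiB base pages: 16 pages, order 4 */
		unsigned long nr_pages = 16;
		unsigned int order = order_from_nr_pages(nr_pages);

		printf("nr_pages=%lu -> order=%u -> nr_pages=%lu\n",
		       nr_pages, order, nr_pages_from_order(order));
		return 0;
	}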
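
And for the bigger picture, the per-order per-CPU cluster scheme that the
quoted commit message describes can be sketched roughly as below. This is
a simplified stand-alone sketch, not the actual mm/swapfile.c
implementation: everything prefixed SKETCH_, the grab_free_cluster
callback, and the return conventions are made up for illustration; only
the idea of one "likely next offset" per order (cf. the percpu_cluster
change in the diff above) comes from the patch.

	/* Illustrative sketch only -- not the kernel's swap allocator. */

	#define SKETCH_PMD_ORDER	9
	#define SKETCH_NR_ORDERS	(SKETCH_PMD_ORDER + 1)
	#define SKETCH_CLUSTER_SIZE	(1UL << SKETCH_PMD_ORDER)	/* entries per cluster */
	#define SKETCH_NEXT_INVALID	0				/* mirrors SWAP_NEXT_INVALID */

	struct sketch_percpu_cluster {
		/* Likely next allocation offset, one per supported folio order. */
		unsigned long next[SKETCH_NR_ORDERS];
	};

	/*
	 * Hand out (1 << order) contiguous swap entries for this CPU.  Each
	 * order allocates sequentially from its own current cluster; when that
	 * cluster is missing or exhausted, a whole free cluster is taken via
	 * grab_free_cluster().  Returns the base offset of the allocation, or
	 * -1 when no cluster is available and the caller should fall back to
	 * splitting the folio and allocating order-0 entries.
	 */
	long sketch_alloc_swap_entries(struct sketch_percpu_cluster *pcp, int order,
				       long (*grab_free_cluster)(void))
	{
		unsigned long nr = 1UL << order;
		unsigned long off = pcp->next[order];

		if (off == SKETCH_NEXT_INVALID) {
			long base = grab_free_cluster();

			if (base < 0)
				return -1;	/* no free clusters: swap device (nearly) full */
			off = (unsigned long)base;
		}

		pcp->next[order] = off + nr;
		if (pcp->next[order] % SKETCH_CLUSTER_SIZE == 0)
			pcp->next[order] = SKETCH_NEXT_INVALID;	/* cluster used up */

		return (long)off;
	}

Because every allocation in a given cluster has the same order and
proceeds sequentially from the cluster base, each (1 << order) block stays
naturally aligned and the map never needs to be scanned, which is the "no
fragmentation due to alignment padding" property the commit message
mentions.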