Message-ID: <521b8246-e327-400b-ae04-8ed97f98703c@arm.com>
Date: Mon, 8 Apr 2024 10:35:48 +0100
Subject: Re: [PATCH v6 4/6] mm: swap: Allow storage of all mTHP orders
To: David Hildenbrand, "Huang, Ying"
Cc: Andrew Morton, Matthew Wilcox, Gao Xiang, Yu Zhao, Yang Shi, Michal Hocko,
    Kefeng Wang, Barry Song <21cnbao@gmail.com>, Chris Li, Lance Yang, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
References: <20240403114032.1162100-1-ryan.roberts@arm.com>
 <20240403114032.1162100-5-ryan.roberts@arm.com>
 <87edbhaexj.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <10f8227a-c8e1-4873-aff3-6260cbe4378c@redhat.com>
From: Ryan Roberts
In-Reply-To: <10f8227a-c8e1-4873-aff3-6260cbe4378c@redhat.com>

On 08/04/2024 10:33, David Hildenbrand wrote:
> On 08.04.24 11:24, Ryan Roberts wrote:
>> On 07/04/2024 07:02, Huang, Ying wrote:
>>> David Hildenbrand writes:
>>>
>>>> On 03.04.24 13:40, Ryan Roberts wrote:
>>>>> Multi-size THP enables performance improvements by allocating large,
>>>>> pte-mapped folios for anonymous memory. However, I've observed that on an
>>>>> arm64 system running a parallel workload (e.g. kernel compilation)
>>>>> across many cores, under high memory pressure, the speed regresses. This
>>>>> is due to bottlenecking on the increased number of TLBIs added due to
>>>>> all the extra folio splitting when the large folios are swapped out.
>>>>>
>>>>> Therefore, solve this regression by adding support for swapping out mTHP
>>>>> without needing to split the folio, just like is already done for
>>>>> PMD-sized THP. This change only applies when CONFIG_THP_SWAP is enabled,
>>>>> and when the swap backing store is a non-rotating block device. These
>>>>> are the same constraints as for the existing PMD-sized THP swap-out
>>>>> support.
>>>>>
>>>>> Note that no attempt is made to swap-in (m)THP here - this is still done
>>>>> page-by-page, like for PMD-sized THP. But swapping-out mTHP is a
>>>>> prerequisite for swapping-in mTHP.
>>>>>
>>>>> The main change here is to improve the swap entry allocator so that it
>>>>> can allocate any power-of-2 number of contiguous entries between [1, (1
>>>>> << PMD_ORDER)]. This is done by allocating a cluster for each distinct
>>>>> order and allocating sequentially from it until the cluster is full.
>>>>> This ensures that we don't need to search the map and we get no
>>>>> fragmentation due to alignment padding for different orders in the
>>>>> cluster. If there is no current cluster for a given order, we attempt to
>>>>> allocate a free cluster from the list. If there are no free clusters, we
>>>>> fail the allocation and the caller can fall back to splitting the folio
>>>>> and allocating individual entries (as per existing PMD-sized THP
>>>>> fallback).
>>>>>
>>>>> The per-order current clusters are maintained per-cpu using the existing
>>>>> infrastructure. This is done to avoid interleaving pages from different
>>>>> tasks, which would prevent IO being batched. This is already done for
>>>>> the order-0 allocations so we follow the same pattern.
>>>>>
>>>>> As is done for order-0 per-cpu clusters, the scanner now can steal
>>>>> order-0 entries from any per-cpu-per-order reserved cluster. This
>>>>> ensures that when the swap file is getting full, space doesn't get tied
>>>>> up in the per-cpu reserves.
>>>>>
>>>>> This change only modifies swap to be able to accept any order mTHP. It
>>>>> doesn't change the callers to elide doing the actual split. That will be
>>>>> done in separate changes.
>>>>>
>>>>> Reviewed-by: "Huang, Ying"
>>>>> Signed-off-by: Ryan Roberts
>>>>> ---
>>>>>  include/linux/swap.h |  10 ++-
>>>>>  mm/swap_slots.c      |   6 +-
>>>>>  mm/swapfile.c        | 175 ++++++++++++++++++++++++-------------------
>>>>>  3 files changed, 109 insertions(+), 82 deletions(-)
>>>>>
>>>>> diff --git a/include/linux/swap.h b/include/linux/swap.h
>>>>> index 5e1e4f5bf0cb..11c53692f65f 100644
>>>>> --- a/include/linux/swap.h
>>>>> +++ b/include/linux/swap.h
>>>>> @@ -268,13 +268,19 @@ struct swap_cluster_info {
>>>>>   */
>>>>>  #define SWAP_NEXT_INVALID    0
>>>>>
>>>>> +#ifdef CONFIG_THP_SWAP
>>>>> +#define SWAP_NR_ORDERS        (PMD_ORDER + 1)
>>>>> +#else
>>>>> +#define SWAP_NR_ORDERS        1
>>>>> +#endif
>>>>> +
>>>>>  /*
>>>>>   * We assign a cluster to each CPU, so each CPU can allocate swap entry from
>>>>>   * its own cluster and swapout sequentially. The purpose is to optimize swapout
>>>>>   * throughput.
>>>>>   */
>>>>>  struct percpu_cluster {
>>>>> -    unsigned int next; /* Likely next allocation offset */
>>>>> +    unsigned int next[SWAP_NR_ORDERS]; /* Likely next allocation offset */
>>>>>  };
>>>>>
>>>>>  struct swap_cluster_list {
>>>>> @@ -471,7 +477,7 @@ swp_entry_t folio_alloc_swap(struct folio *folio);
>>>>>  bool folio_free_swap(struct folio *folio);
>>>>>  void put_swap_folio(struct folio *folio, swp_entry_t entry);
>>>>>  extern swp_entry_t get_swap_page_of_type(int);
>>>>> -extern int get_swap_pages(int n, swp_entry_t swp_entries[], int entry_size);
>>>>> +extern int get_swap_pages(int n, swp_entry_t swp_entries[], int order);
>>>>>  extern int add_swap_count_continuation(swp_entry_t, gfp_t);
>>>>>  extern void swap_shmem_alloc(swp_entry_t);
>>>>>  extern int swap_duplicate(swp_entry_t);
>>>>>
>>>>> diff --git a/mm/swap_slots.c b/mm/swap_slots.c
>>>>> index 53abeaf1371d..13ab3b771409 100644
>>>>> --- a/mm/swap_slots.c
>>>>> +++ b/mm/swap_slots.c
>>>>> @@ -264,7 +264,7 @@ static int refill_swap_slots_cache(struct swap_slots_cache *cache)
>>>>>      cache->cur = 0;
>>>>>      if (swap_slot_cache_active)
>>>>>          cache->nr = get_swap_pages(SWAP_SLOTS_CACHE_SIZE,
>>>>> -                       cache->slots, 1);
>>>>> +                       cache->slots, 0);
>>>>>
>>>>>      return cache->nr;
>>>>>  }
>>>>>
>>>>> @@ -311,7 +311,7 @@ swp_entry_t folio_alloc_swap(struct folio *folio)
>>>>>
>>>>>      if (folio_test_large(folio)) {
>>>>>          if (IS_ENABLED(CONFIG_THP_SWAP))
>>>>> -            get_swap_pages(1, &entry, folio_nr_pages(folio));
>>>>> +            get_swap_pages(1, &entry, folio_order(folio));
>>>>
>>>> The only comment I have is that this nr_pages -> order conversion adds
>>>> a bit of noise to this patch.
>>>>
>>>> AFAIKS, it's primarily only required for "cluster->next[order]",
>>>> everything else doesn't really require the order.
>>>>
>>>> I'd just have split that out into a separate patch, or simply
>>>> converted nr_pages -> order where required.
>>>>
>>>> Nothing jumped at me, but I'm not an expert on that code, so I'm
>>>> mostly trusting the others ;)
>>>
>>> The nr_pages -> order conversion replaces ilog2(nr_pages) with
>>> (1<<order) so we don't need to worry about whether nr_pages is a power
>>> of 2.  Do you think that this makes sense?
>>
>> I think that David's point was that I should just split out that change to its
>> own patch to aid readability? I'm happy to do that if no one objects.
>
> Yes. Or avoiding it and not caring about a ilog vs. 1<<order
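
To make the ilog2 vs 1<<order point above concrete: once the caller passes an
order, the callee can derive the entry count with a plain shift and never has
to wonder whether an nr_pages argument is a power of 2. Here is a minimal
userspace sketch of just that idea; alloc_swap_slots() is a made-up stand-in
for illustration, not the kernel's get_swap_pages():

    #include <assert.h>
    #include <stdio.h>

    /*
     * Illustrative only: taking an order means the callee derives the number
     * of entries with 1 << order, instead of taking nr_pages and having to
     * assume/verify it is a power of 2 before feeding it to ilog2().
     */
    static int alloc_swap_slots(int order)
    {
            int nr = 1 << order;    /* always a power of 2 by construction */

            printf("allocating %d contiguous swap entries (order %d)\n", nr, order);
            return nr;
    }

    int main(void)
    {
            /* folio_order()-style callers: 0 for small folios, 4 for 64K, 9 for 2M */
            assert(alloc_swap_slots(0) == 1);
            assert(alloc_swap_slots(4) == 16);
            assert(alloc_swap_slots(9) == 512);
            return 0;
    }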
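Similarly, the per-order, per-cpu "current cluster" scheme described in the
commit message can be modelled in a few lines of plain C. This is only a
simplified userspace sketch under obvious assumptions: cpu_cluster,
alloc_order(), take_free_cluster(), CLUSTER_SLOTS and NR_ORDERS are
illustrative stand-ins rather than the real mm/swapfile.c structures, and
none of the locking, scanning or order-0 stealing is shown:

    #include <stdio.h>

    #define CLUSTER_SLOTS   512     /* stand-in for SWAPFILE_CLUSTER */
    #define NR_ORDERS       10      /* stand-in for SWAP_NR_ORDERS */
    #define NEXT_INVALID    0       /* stand-in for SWAP_NEXT_INVALID */

    /* One "likely next offset" per order, mirroring percpu_cluster.next[] */
    struct cpu_cluster {
            unsigned int next[NR_ORDERS];
    };

    static unsigned int next_free_cluster = CLUSTER_SLOTS; /* toy free-cluster list */

    /* Hand out a whole free cluster; returns its base offset, 0 if none left. */
    static unsigned int take_free_cluster(void)
    {
            unsigned int base = next_free_cluster;

            if (base == 0)
                    return 0;
            next_free_cluster += CLUSTER_SLOTS;     /* pretend more clusters exist */
            return base;
    }

    /*
     * Allocate 1 << order contiguous slots from this CPU's current cluster
     * for that order, taking a fresh cluster when the current one is
     * unusable. Returns the base offset, or 0 meaning "fall back to split".
     */
    static unsigned int alloc_order(struct cpu_cluster *pc, unsigned int order)
    {
            unsigned int nr = 1u << order;
            unsigned int off = pc->next[order];

            if (off == NEXT_INVALID || off % CLUSTER_SLOTS + nr > CLUSTER_SLOTS) {
                    off = take_free_cluster();      /* need a new cluster */
                    if (off == 0)
                            return 0;               /* caller falls back to splitting */
            }

            pc->next[order] = off + nr;
            if (pc->next[order] % CLUSTER_SLOTS == 0)
                    pc->next[order] = NEXT_INVALID; /* cluster exhausted */
            return off;
    }

    int main(void)
    {
            struct cpu_cluster pc = { .next = { NEXT_INVALID } };

            /* Different orders draw from different clusters. */
            printf("order-0 at %u\n", alloc_order(&pc, 0));
            printf("order-4 at %u\n", alloc_order(&pc, 4));
            printf("order-0 at %u\n", alloc_order(&pc, 0));
            printf("order-4 at %u\n", alloc_order(&pc, 4));
            return 0;
    }

The point of keeping a separate next[] slot per order shows up in the output:
order-0 and order-4 allocations advance through different clusters, so no
alignment padding is wasted inside either one.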