Date: Fri, 1 Mar 2024 16:31:49 +0000
Subject: Re: [PATCH v3 1/4] mm: swap: Remove CLUSTER_FLAG_HUGE from swap_cluster_info:flags
From: Ryan Roberts
To: David Hildenbrand, Andrew Morton, Matthew Wilcox, Huang Ying, Gao Xiang,
 Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
X-Mailing-List: linux-kernel@vger.kernel.org
References: <20231025144546.577640-1-ryan.roberts@arm.com>
 <20231025144546.577640-2-ryan.roberts@arm.com>
 <6541e29b-f25a-48b8-a553-fd8febe85e5a@redhat.com>
 <2934125a-f2e2-417c-a9f9-3cb1e074a44f@redhat.com>
 <049818ca-e656-44e4-b336-934992c16028@arm.com>
 <4a73b16e-9317-477a-ac23-8033004b0637@arm.com>
 <1195531c-d985-47e2-b7a2-8895fbb49129@redhat.com>
 <5ebac77a-5c61-481f-8ac1-03bc4f4e2b1d@arm.com>
In-Reply-To: <5ebac77a-5c61-481f-8ac1-03bc4f4e2b1d@arm.com>

On 01/03/2024 16:27, Ryan Roberts wrote:
> On 28/02/2024 15:12, David Hildenbrand wrote:
>> On 28.02.24 15:57, Ryan Roberts wrote:
>>> On 28/02/2024 12:12, David Hildenbrand wrote:
>>>>>> How relevant is it? Relevant enough that someone decided to put that
>>>>>> optimization in? I don't know :)
>>>>>
>>>>> I'll have one last go at convincing you: Huang Ying (original author)
>>>>> commented "I believe this should be OK.  Better to compare the performance
>>>>> too." at [1]. That implies to me that perhaps the optimization wasn't in
>>>>> response to a specific problem after all. Do you have any thoughts, Huang?
>>>>
>>>> Might make sense to include that in the patch description!
>>>>
>>>>> OK, so if we really do need to keep this optimization, here are some ideas:
>>>>>
>>>>> Fundamentally, we would like to be able to figure out the size of the swap
>>>>> slot from the swap entry. Today swap supports 2 sizes: PAGE_SIZE and
>>>>> PMD_SIZE. For PMD_SIZE, it always uses a full cluster, so we can easily add
>>>>> a flag to the cluster to mark it as PMD_SIZE.
>>>>>
>>>>> Going forwards, we want to support all sizes (power-of-2). Most of the
>>>>> time, a cluster will contain only one size of THP, but this is not the case
>>>>> when a THP in the swapcache gets split or when an order-0 slot gets stolen.
>>>>> We expect these cases to be rare.
>>>>>
>>>>> 1) Keep the size of the smallest swap entry in the cluster header. Most of
>>>>> the time it will be the full size of the swap entry, but sometimes it will
>>>>> cover only a portion. In the latter case you may see a false negative for
>>>>> swap_page_trans_huge_swapped(), meaning we take the slow path, but that is
>>>>> rare. There is one wrinkle: currently the HUGE flag is cleared in
>>>>> put_swap_folio(). We wouldn't want to do the equivalent in the new scheme
>>>>> (i.e. set the whole cluster to order-0). I think that is safe, but I
>>>>> haven't completely convinced myself yet.
>>>>>
>>>>> 2) Allocate 4 bits per (small) swap slot to hold the order. This gives
>>>>> precise information and is conceptually simpler to understand, but costs
>>>>> more memory (half as much as the initial swap_map[] again).
>>>>>
>>>>> I still prefer to avoid this at all if we can (and would like to hear
>>>>> Huang's thoughts). But if it's a choice between 1 and 2, I prefer 1 - I'll
>>>>> do some prototyping.
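As a purely illustrative sketch of idea 2 (not an actual implementation): the
order could be packed as one nibble per slot alongside swap_map[]. The
order_map field and the helper name below are made up for illustration only:

        /*
         * Hypothetical sketch: one nibble per swap slot, recording the
         * allocation order of the entry occupying that slot.
         */
        static unsigned int swap_entry_order(struct swap_info_struct *si,
                                             unsigned long offset)
        {
                /* order_map is an assumed field, two slots per byte. */
                unsigned char byte = si->order_map[offset / 2];

                return (offset & 1) ? (byte >> 4) : (byte & 0xf);
        }

That is where the "half as much as swap_map[] again" cost comes from: swap_map[]
is one byte per slot, and this would add half a byte per slot.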
>>>>
>>>> Taking a step back: what if we simply batch the unmapping of swap entries?
>>>>
>>>> That is, if we're unmapping a PTE range, we'll collect swap entries (under
>>>> PT lock) that reference consecutive swap offsets in the same swap file.
>>>
>>> Yes in principle, but there are 4 places where free_swap_and_cache() is
>>> called, and only 2 of those are really amenable to batching
>>> (zap_pte_range() and madvise_free_pte_range()). So the other two users will
>>> still take the "slow" path. Maybe those 2 callsites are the only ones that
>>> really matter? I can certainly have a stab at this approach.
>>
>> We can ignore the s390x one. That s390x code should only apply to KVM guest
>> memory, where ordinary THP are not even supported (and nobody uses mTHP
>> there yet).
>>
>> Long story short: the VM can hint that some memory pages are now unused and
>> the hypervisor can reclaim them. That's what that callback does (zap
>> guest-provided guest memory). No need to worry about any batching for now.
>>
>> Then, there is the shmem one in shmem_free_swap(). I really don't know how
>> shmem handles THP+swapout.
>>
>> But looking at shmem_writepage(), we split any large folios before moving
>> them to the swapcache, so likely we don't care at all, because THP don't
>> apply.
>>
>>>
>>>>
>>>> There, we can then first decrement all the swap counts, and then try to
>>>> minimize how often we actually have to try reclaiming swap space (lookup
>>>> folio, see it's a large folio that we cannot reclaim or could reclaim,
>>>> ...).
>>>>
>>>> Might need some fine-tuning in swap code to "advance" to the next entry to
>>>> try freeing up, but we certainly can do better than what we would do right
>>>> now.
>>>
>>> I'm not sure I've understood this. Isn't advancing just a matter of:
>>>
>>> entry = swp_entry(swp_type(entry), swp_offset(entry) + 1);
>>
>> I was talking about advancing the swapslot processing after decrementing the
>> swapcounts.
>>
>> Assume you decremented 512 swapcounts and some of them went to 0. AFAIU,
>> you'd have to start with the first swapslot that now has a swapcount of 0
>> and try to reclaim swap.
>>
>> Assume you get a small folio, then you'll have to proceed with the next swap
>> slot and try to reclaim swap.
>>
>> Assume you get a large folio, then you can skip more swapslots (depending on
>> offset into the folio etc).
>>
>> If you get what I mean. :)
>>
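To make the caller-side half of that concrete, here is a rough sketch (helper
name and structure are mine, not the actual implementation) of how a caller
holding the PT lock might count a run of PTEs whose swap entries have
consecutive offsets in the same swap file, before handing the whole run to a
batched free:

        /*
         * Rough sketch only: starting from 'pte' (which maps 'first'),
         * count how many PTEs hold swap entries with consecutive offsets
         * in the same swap file. Assumes the caller holds the PT lock and
         * that all max_nr PTEs belong to the same page table.
         */
        static int count_swap_pte_run(pte_t *pte, int max_nr, swp_entry_t first)
        {
                swp_entry_t prev = first;
                int nr;

                for (nr = 1; nr < max_nr; nr++) {
                        pte_t ptent = ptep_get(pte + nr);
                        swp_entry_t entry;

                        /* Must be a genuine swap entry. */
                        if (pte_none(ptent) || pte_present(ptent))
                                break;
                        entry = pte_to_swp_entry(ptent);
                        if (non_swap_entry(entry))
                                break;

                        /* Must be the next consecutive offset in the same swap file. */
                        if (swp_type(entry) != swp_type(prev) ||
                            swp_offset(entry) != swp_offset(prev) + 1)
                                break;

                        prev = entry;
                }

                return nr;
        }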
>
> I've implemented the batching as David suggested, and I'm pretty confident
> it's correct. The only problem is that during testing I can't provoke the
> code to take the path. I've been poring over the code but struggling to
> figure out in what situation you would expect the swap entry passed to
> free_swap_and_cache() to still have a cached folio. Does anyone have any
> idea?
>
> This is the original (unbatched) function, after my change, which caused
> David's concern that we would end up calling __try_to_reclaim_swap() far too
> much:
>
> int free_swap_and_cache(swp_entry_t entry)
> {
>         struct swap_info_struct *p;
>         unsigned char count;
>
>         if (non_swap_entry(entry))
>                 return 1;
>
>         p = _swap_info_get(entry);
>         if (p) {
>                 count = __swap_entry_free(p, entry);
>                 if (count == SWAP_HAS_CACHE)
>                         __try_to_reclaim_swap(p, swp_offset(entry),
>                                               TTRS_UNMAPPED | TTRS_FULL);
>         }
>         return p != NULL;
> }
>
> The trouble is, whenever it's called, count is always 0, so
> __try_to_reclaim_swap() never gets called.
>
> My test case is allocating 1G of anon memory, then doing
> madvise(MADV_PAGEOUT) over it, then doing either a munmap() or
> madvise(MADV_FREE), both of which cause this function to be called for every
> PTE; but count is always 0 after __swap_entry_free(), so
> __try_to_reclaim_swap() is never called. I've tried order-0 as well as PTE-
> and PMD-mapped 2M THP.
>
> I'm guessing the swapcache was already reclaimed as part of MADV_PAGEOUT?
> I'm using a block ram device as my backing store - I think this does
> synchronous IO, so perhaps if I have a real block device with async IO I
> might have more luck?

Ahh, I just switched to an SSD as the swap device and now it's getting called.
I guess that's the reason. Sorry for the noise.

> Just a guess...
>
> Or perhaps this code path is a corner case? In which case, perhaps it's not
> worth adding the batching optimization after all?
>
> Thanks,
> Ryan
>
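For completeness, a rough sketch of the swap-side half of the batching being
discussed above. The name free_swap_and_cache_nr() and the structure are
assumptions for illustration, locking and corner cases are elided, and this is
not the actual implementation:

        /*
         * Hypothetical sketch of a batched free_swap_and_cache(). The
         * caller guarantees that the nr swap entries starting at 'entry'
         * have consecutive offsets in the same swap file.
         */
        void free_swap_and_cache_nr(swp_entry_t entry, int nr)
        {
                unsigned long offset = swp_offset(entry);
                struct swap_info_struct *p;
                bool any_only_cache = false;
                int i;

                if (non_swap_entry(entry))
                        return;

                p = _swap_info_get(entry);
                if (!p)
                        return;

                /* Pass 1: drop the swap count on every entry in the run. */
                for (i = 0; i < nr; i++) {
                        unsigned char count;

                        count = __swap_entry_free(p, swp_entry(swp_type(entry),
                                                               offset + i));
                        if (count == SWAP_HAS_CACHE)
                                any_only_cache = true;
                }

                /*
                 * Pass 2: only if some entry is now held solely by the
                 * swapcache, try to reclaim it. A smarter version would look
                 * up the folio and skip ahead by its size when it finds a
                 * large folio, rather than probing every slot individually.
                 */
                if (!any_only_cache)
                        return;

                for (i = 0; i < nr; i++)
                        __try_to_reclaim_swap(p, offset + i,
                                              TTRS_UNMAPPED | TTRS_FULL);
        }

The two-pass shape follows David's suggestion above: decrement all the counts
first, then minimize how often the swapcache reclaim actually has to run.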