From: Ryan Roberts
Date: Wed, 28 Feb 2024 14:57:33 +0000
Subject: Re: [PATCH v3 1/4] mm: swap: Remove CLUSTER_FLAG_HUGE from swap_cluster_info:flags
To: David Hildenbrand, Andrew Morton, Matthew Wilcox, Huang Ying, Gao Xiang, Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Message-ID: <4a73b16e-9317-477a-ac23-8033004b0637@arm.com>
References: <20231025144546.577640-1-ryan.roberts@arm.com> <20231025144546.577640-2-ryan.roberts@arm.com> <6541e29b-f25a-48b8-a553-fd8febe85e5a@redhat.com> <2934125a-f2e2-417c-a9f9-3cb1e074a44f@redhat.com> <049818ca-e656-44e4-b336-934992c16028@arm.com>

On 28/02/2024 12:12, David Hildenbrand wrote:
>>> How relevant is it? Relevant enough that someone decided to put that
>>> optimization in? I don't know :)
>>
>> I'll have one last go at convincing you: Huang Ying (original author) commented
>> "I believe this should be OK.  Better to compare the performance too." at [1].
>> That implies to me that perhaps the optimization wasn't in response to a
>> specific problem after all. Do you have any thoughts, Huang?
>
> Might make sense to include that in the patch description!
>
>> OK, so if we really do need to keep this optimization, here are some ideas:
>>
>> Fundamentally, we would like to be able to figure out the size of the swap slot
>> from the swap entry. Today swap supports 2 sizes: PAGE_SIZE and PMD_SIZE. For
>> PMD_SIZE, it always uses a full cluster, so we can easily add a flag to the
>> cluster to mark it as PMD_SIZE.
>>
>> Going forwards, we want to support all sizes (power-of-2). Most of the time, a
>> cluster will contain only one size of THP, but this is not the case when a THP
>> in the swapcache gets split or when an order-0 slot gets stolen. We expect these
>> cases to be rare.
>>
>> 1) Keep the size of the smallest swap entry in the cluster header. Most of the
>> time it will be the full size of the swap entry, but sometimes it will cover
>> only a portion. In the latter case you may see a false negative for
>> swap_page_trans_huge_swapped(), meaning we take the slow path, but that is rare.
>> There is one wrinkle: currently the HUGE flag is cleared in put_swap_folio(). We
>> wouldn't want to do the equivalent in the new scheme (i.e. set the whole cluster
>> to order-0). I think that is safe, but haven't completely convinced myself yet.
>>
>> 2) Allocate 4 bits per (small) swap slot to hold the order. This will give
>> precise information and is conceptually simpler to understand, but will cost
>> more memory (half as much as the initial swap_map[] again).
>>
>> I still prefer to avoid this altogether if we can (and would like to hear Huang's
>> thoughts). But if it's a choice between 1 and 2, I prefer 1 - I'll do some
>> prototyping.
>
> Taking a step back: what about simply batching the unmapping of swap entries?
>
> That is, if we're unmapping a PTE range, we'll collect swap entries (under PT
> lock) that reference consecutive swap offsets in the same swap file.

Yes, in principle, but there are 4 places where free_swap_and_cache() is called,
and only 2 of those are really amenable to batching (zap_pte_range() and
madvise_free_pte_range()). So the other two users will still take the "slow"
path. Maybe those 2 callsites are the only ones that really matter? I can
certainly have a stab at this approach.

>
> There, we can then first decrement all the swap counts, and then try minimizing
> how often we actually have to try reclaiming swap space (lookup folio, see it's
> a large folio that we cannot reclaim or could reclaim, ...).
>
> Might need some fine-tuning in swap code to "advance" to the next entry to try
> freeing up, but we certainly can do better than what we would do right now.

I'm not sure I've understood this.
Isn't advancing just a matter of:

    entry = swp_entry(swp_type(entry), swp_offset(entry) + 1);
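
To check we're talking about the same thing, here is a rough sketch of how I
imagine a zap-style caller gathering a run of consecutive swap entries while
the PT lock is held. This is illustration only: swap_pte_batch() and the
free_swap_and_cache_nr() caller mentioned below are made-up names, not
functions that exist today:

    #include <linux/pgtable.h>
    #include <linux/swapops.h>

    /*
     * Count how many ptes, starting at start_pte (which holds 'entry'),
     * reference consecutive swap offsets in the same swap file. Must be
     * called with the PT lock held so the ptes cannot change under us.
     * Sketch only - name and caller are hypothetical.
     */
    static int swap_pte_batch(pte_t *start_pte, int max_nr, swp_entry_t entry)
    {
            unsigned long offset = swp_offset(entry) + 1;
            unsigned int type = swp_type(entry);
            pte_t *pte = start_pte + 1;
            int nr = 1;

            while (nr < max_nr) {
                    pte_t ptent = ptep_get(pte);
                    swp_entry_t next;

                    /* Stop at anything that is not a genuine swap entry. */
                    if (pte_none(ptent) || pte_present(ptent))
                            break;
                    next = pte_to_swp_entry(ptent);
                    if (non_swap_entry(next) || swp_type(next) != type ||
                        swp_offset(next) != offset)
                            break;

                    pte++;
                    nr++;
                    offset++;
            }

            return nr;
    }

The caller would then make a single free_swap_and_cache_nr(entry, nr)-style
call for the whole run instead of nr separate free_swap_and_cache() calls, and
the swap layer could do the "advance to the next entry" step internally using
the swp_entry() arithmetic above.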