Date: Fri, 22 Dec 2023 15:16:37 -0800
From: Chris Li
To: Andrew Morton
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Wei Xu, Yu Zhao, Greg Thelen, Chun-Tse Shao, Suren Baghdasaryan, Yosry Ahmed, Brian Geffon, Minchan Kim, Michal Hocko, Mel Gorman, Huang Ying, Nhat Pham, Johannes Weiner, Kairui Song, Zhongkun He, Kemeng Shi, Barry Song, Hugh Dickins
Subject: Re: [PATCH] mm: swap: async free swap slot cache entries
References: <20231221-async-free-v1-1-94b277992cb0@kernel.org> <20231222115208.ab4d2aeacdafa4158b14e532@linux-foundation.org>
In-Reply-To: <20231222115208.ab4d2aeacdafa4158b14e532@linux-foundation.org>

On Fri, Dec 22, 2023 at 11:52:08AM -0800, Andrew Morton wrote:
> On Thu, 21 Dec 2023 22:25:39 -0800 Chris Li wrote:
>
> > We discovered that 1% of swap page faults take 100us+, while 50% of
> > swap faults are under 20us.
> >
> > Further investigation shows that a large portion of the time is
> > spent in the free_swap_slots() function in the long tail case.
> >
> > The percpu cache of swap slots is freed in batches of 64 entries
> > inside free_swap_slots(). These cache entries are accumulated
> > from previous page faults, which may not be related to the current
> > process.
> >
> > Doing the batch free in the page fault handler causes longer
> > tail latencies and penalizes the current process.
> >
> > Move free_swap_slots() outside of the swapin page fault handler into an
> > async work queue to avoid such long tail latencies.
>
> This will require a larger amount of total work than the current

Yes, there will be a small amount of extra overhead to schedule the
job onto the other work queue.

> scheme.  So we're trading that off against better latency.
>
> Why is this a good tradeoff?

That is a very good question. Both Hugh and Wei have asked me similar
questions before. +Hugh.

The TL;DR is that it makes swap more parallelizable. Modern computers
typically have more than one CPU, and CPU utilization rarely reaches
100%. We are not really trading latency for making someone else run
slower. Most of the time the real impact is that the current swapin
page fault can return sooner, so more work can be submitted to the
kernel sooner, while at the same time another idle CPU can pick up the
non-latency-critical work of freeing the swap slot cache entries. The
net effect is that we speed things up and increase overall system
utilization rather than slowing things down.

The test results from Chromebooks and Google production servers should
show that this is beneficial to both laptop and server workloads,
making them more responsive under swap-heavy load.

Chris
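[For readers following along: the idea under discussion — accumulate frees into a fixed-size batch, then hand the full batch to a background worker instead of draining it inline on the hot path — can be sketched in userspace. This is a toy Python model, not the kernel code; the SlotCache class, BATCH_SIZE constant, and thread-based "work queue" are illustrative stand-ins for the percpu slot cache and the kernel workqueue.]

```python
import queue
import threading

BATCH_SIZE = 64  # mirrors the 64-entry batch in the slot cache discussion


class SlotCache:
    """Toy model: entries accumulate until a batch fills, then the
    whole batch is freed at once, either inline or on a worker."""

    def __init__(self, async_free=False):
        self.slots = []        # pending entries, not yet freed
        self.freed = []        # entries whose (expensive) free has run
        self.async_free = async_free
        self._work = queue.Queue()
        threading.Thread(target=self._drain, daemon=True).start()

    def _drain(self):
        # Background worker: performs the batch free off the hot path.
        while True:
            batch = self._work.get()
            self.freed.extend(batch)   # stand-in for the expensive free
            self._work.task_done()

    def free_slot(self, entry):
        """Called from the (simulated) fault path."""
        self.slots.append(entry)
        if len(self.slots) >= BATCH_SIZE:
            batch, self.slots = self.slots, []
            if self.async_free:
                # Deferred behaviour: queue the batch and return at
                # once, so this "fault" does not pay for the free.
                self._work.put(batch)
            else:
                # Inline behaviour: this caller absorbs the whole
                # batch-free cost, producing the long-tail latency.
                self.freed.extend(batch)

    def flush(self):
        self._work.join()   # wait for the worker to finish all batches


cache = SlotCache(async_free=True)
for i in range(BATCH_SIZE * 3):
    cache.free_slot(i)     # each call returns quickly, even at a batch edge
cache.flush()
print(len(cache.freed))    # all three batches freed by the worker
```

The tradeoff the thread debates is visible in `free_slot`: the total work is the same (plus a small queueing cost), but with `async_free=True` the latency of the batch free lands on an otherwise idle worker rather than on whichever caller happened to fill the batch.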