From: "Huang, Ying"
To: Chris Li
Cc: Andrew Morton, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Wei Xu, Yu Zhao, Greg Thelen, Chun-Tse Shao, Suren Baghdasaryan,
	Yosry Ahmed, Brian Geffon, Minchan Kim, Michal Hocko, Mel Gorman,
	Nhat Pham, Johannes Weiner, Kairui Song, Zhongkun He, Kemeng Shi,
	Barry Song, Hugh Dickins, Tim Chen
Subject: Re: [PATCH] mm: swap: async free swap slot cache entries
In-Reply-To: (Chris Li's message of "Fri, 22 Dec 2023 15:16:37 -0800")
References: <20231221-async-free-v1-1-94b277992cb0@kernel.org>
	<20231222115208.ab4d2aeacdafa4158b14e532@linux-foundation.org>
Date: Mon, 25 Dec 2023 15:07:59 +0800
Message-ID: <87o7eeg3ow.fsf@yhuang6-desk2.ccr.corp.intel.com>

Chris Li writes:

> On Fri, Dec 22, 2023 at 11:52:08AM -0800, Andrew Morton wrote:
>> On Thu, 21 Dec 2023 22:25:39 -0800 Chris Li wrote:
>>
>> > We discovered that 1% of swap page faults take 100us+, while 50%
>> > of swap faults complete in under 20us.
>> >
>> > Further investigation shows that for the long-tail cases a large
>> > portion of the time is spent in the free_swap_slots() function.
>> >
>> > The percpu cache of swap slots is freed in a batch of 64 entries
>> > inside free_swap_slots(). These cache entries are accumulated
>> > from previous page faults, which may not be related to the
>> > current process.
>> >
>> > Doing the batch free in the page fault handler causes longer
>> > tail latencies and penalizes the current process.
>> >
>> > Move free_swap_slots() out of the swapin page fault handler into
>> > an async work queue to avoid such long tail latencies.
>>
>> This will require a larger amount of total work than the current
>
> Yes, there is a tiny bit of extra overhead to schedule the job onto
> the work queue.
>
>> scheme.  So we're trading that off against better latency.
>>
>> Why is this a good tradeoff?
>
> That is a very good question. Both Hugh and Wei have asked me
> similar questions before. +Hugh.
>
> The TL;DR is that it makes the swap path more parallelizable.
>
> Modern computers typically have more than one CPU, and CPU
> utilization rarely reaches 100%. We are not really trading latency
> for making someone else run slower. Most of the time the real effect
> is that the current swapin page fault can return sooner, so more
> work can be submitted to the kernel earlier, while another, idle CPU
> picks up the non-latency-critical work of freeing the swap slot
> cache entries. The net effect is that we speed things up and
> increase overall system utilization rather than slowing things down.

Your solution depends on there being enough idle time in the system.
That isn't always true.

In general, all async solutions have 2 possible issues.

a) Unrelated applications may be punished, because they may have to
wait for the CPU that is running the async operations. In the
original solution, the application that swaps more is the one that is
punished.

b) The CPU time cannot be charged to the appropriate applications.
The original behavior isn't perfect either, but it's better than an
async worker.

Given that the worker's runtime is at the 100us level, these issues
may not be severe. But I think that you need to at least discuss
them.

And, when batching of swap slot freeing was introduced, it was mainly
meant to reduce contention on sis->lock (via swap_info_get_cont()).
So, we could move some operations (e.g., mem_cgroup_uncharge_swap(),
clear_shadow_from_swap_cache(), etc.) out of the batched operation
(before calling free_swap_slot()) to reduce the latency impact; see
the second sketch at the end of this mail.

> The test results from a Chromebook and from a Google production
> server should show that this is beneficial to both laptop and
> server workloads, making them more responsive under swap-heavy
> load.
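For concreteness, the shape of the change under discussion is roughly
as below. This is a minimal sketch, not the actual patch: the
free_work member and the helper names are made up for illustration,
and a real patch also has to handle INIT_WORK() at cache init time
and the race between the fault path refilling slots_ret and the
worker draining it.

	#include <linux/workqueue.h>
	#include <linux/swap.h>
	#include <linux/swap_slots.h>

	/*
	 * Sketch: assumes struct swap_slots_cache grows a
	 * hypothetical free_work member.
	 */
	static void swap_slots_free_workfn(struct work_struct *work)
	{
		struct swap_slots_cache *cache =
			container_of(work, struct swap_slots_cache,
				     free_work);

		spin_lock_irq(&cache->free_lock);
		/*
		 * The expensive part: batch-frees up to 64 entries,
		 * taking sis->lock inside swapcache_free_entries().
		 */
		swapcache_free_entries(cache->slots_ret, cache->n_ret);
		cache->n_ret = 0;
		spin_unlock_irq(&cache->free_lock);
	}

	/*
	 * Fault path: instead of calling swapcache_free_entries()
	 * inline when the cache is full, kick the worker and return.
	 */
	static void defer_swap_slots_free(struct swap_slots_cache *cache)
	{
		schedule_work(&cache->free_work);
	}

The latency win comes from schedule_work() being cheap in the fault
path, while the sis->lock-heavy batch free runs on whatever CPU the
workqueue picks; that is also exactly where issues a) and b) above
come from.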
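The batching split suggested above might look roughly like the
following. The helper name is hypothetical, and whether
clear_shadow_from_swap_cache() may safely run this early, before the
entry is actually freed, is an ordering detail a real patch would
need to verify.

	#include <linux/swap.h>
	#include <linux/swapops.h>	/* swp_type(), swp_offset() */
	#include "swap.h"	/* mm-internal: clear_shadow_from_swap_cache() */

	/*
	 * Sketch: do the per-entry bookkeeping that does not contend
	 * on sis->lock synchronously in the fault path, leaving only
	 * the batch free itself to the deferred path.
	 */
	static void swap_slot_prep_free(swp_entry_t entry)
	{
		/* Uncharge this one entry from the memcg swap counter. */
		mem_cgroup_uncharge_swap(entry, 1);
		/*
		 * Drop the workingset shadow for just this entry;
		 * the begin/end range is inclusive.
		 */
		clear_shadow_from_swap_cache(swp_type(entry),
					     swp_offset(entry),
					     swp_offset(entry));
	}

That way the fault-path latency covers only per-entry work for the
entry actually being freed, and the batched part touches little
beyond sis->lock-protected state.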
-- 
Best Regards,
Huang, Ying