From: "Huang, Ying"
To: Barry Song <21cnbao@gmail.com>
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, baolin.wang@linux.alibaba.com,
    chrisl@kernel.org, david@redhat.com, hanchuanhua@oppo.com, hannes@cmpxchg.org,
    hughd@google.com, kasong@tencent.com, ryan.roberts@arm.com, surenb@google.com,
    v-songbaohua@oppo.com, willy@infradead.org, xiang@kernel.org,
    yosryahmed@google.com, yuzhao@google.com, ziy@nvidia.com,
    linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 1/5] mm: swap: introduce swap_free_nr() for batched swap_free()
In-Reply-To: <20240409082631.187483-2-21cnbao@gmail.com> (Barry Song's message of
    "Tue, 9 Apr 2024 20:26:27 +1200")
References: <20240409082631.187483-1-21cnbao@gmail.com>
    <20240409082631.187483-2-21cnbao@gmail.com>
Date: Mon, 15 Apr 2024 14:17:40 +0800
Message-ID: <87y19f2lq3.fsf@yhuang6-desk2.ccr.corp.intel.com>

Barry Song <21cnbao@gmail.com> writes:

> From: Chuanhua Han
>
> While swapping in a large folio, we need to free swaps related to the whole
> folio. To avoid frequently acquiring and releasing swap locks, it is better
> to introduce an API for batched free.
>
> Signed-off-by: Chuanhua Han
> Co-developed-by: Barry Song
> Signed-off-by: Barry Song
> ---
>  include/linux/swap.h |  5 +++++
>  mm/swapfile.c        | 51 ++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 56 insertions(+)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 11c53692f65f..b7a107e983b8 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -483,6 +483,7 @@ extern void swap_shmem_alloc(swp_entry_t);
>  extern int swap_duplicate(swp_entry_t);
>  extern int swapcache_prepare(swp_entry_t);
>  extern void swap_free(swp_entry_t);
> +extern void swap_free_nr(swp_entry_t entry, int nr_pages);
>  extern void swapcache_free_entries(swp_entry_t *entries, int n);
>  extern void free_swap_and_cache_nr(swp_entry_t entry, int nr);
>  int swap_type_of(dev_t device, sector_t offset);
> @@ -564,6 +565,10 @@ static inline void swap_free(swp_entry_t swp)
>  {
>  }
>
> +void swap_free_nr(swp_entry_t entry, int nr_pages)
> +{
> +}
> +
>  static inline void put_swap_folio(struct folio *folio, swp_entry_t swp)
>  {
>  }
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 28642c188c93..f4c65aeb088d 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1356,6 +1356,57 @@ void swap_free(swp_entry_t entry)
>  		__swap_entry_free(p, entry);
>  }
>
> +/*
> + * Free up the maximum number of swap entries at once to limit the
> + * maximum kernel stack usage.
> + */
> +#define SWAP_BATCH_NR (SWAPFILE_CLUSTER > 512 ? 512 : SWAPFILE_CLUSTER)
> +
> +/*
> + * Called after swapping in a large folio,

IMHO, it's not good to document the caller in the function definition,
because that discourages reuse of the function.

> batched free swap entries
> + * for this large folio, entry should be for the first subpage and
> + * its offset is aligned with nr_pages

Why do we need this?

> + */
> +void swap_free_nr(swp_entry_t entry, int nr_pages)
> +{
> +	int i, j;
> +	struct swap_cluster_info *ci;
> +	struct swap_info_struct *p;
> +	unsigned int type = swp_type(entry);
> +	unsigned long offset = swp_offset(entry);
> +	int batch_nr, remain_nr;
> +	DECLARE_BITMAP(usage, SWAP_BATCH_NR) = { 0 };
> +
> +	/* all swap entries are within a cluster for mTHP */
> +	VM_BUG_ON(offset % SWAPFILE_CLUSTER + nr_pages > SWAPFILE_CLUSTER);
> +
> +	if (nr_pages == 1) {
> +		swap_free(entry);
> +		return;
> +	}

Is it possible to unify swap_free() and swap_free_nr() into one function
with acceptable performance?  IIUC, the general rule in the mTHP effort
is to avoid duplicating functions between mTHP and normal small folios.
Right?

> +
> +	remain_nr = nr_pages;
> +	p = _swap_info_get(entry);
> +	if (p) {
> +		for (i = 0; i < nr_pages; i += batch_nr) {
> +			batch_nr = min_t(int, SWAP_BATCH_NR, remain_nr);
> +
> +			ci = lock_cluster_or_swap_info(p, offset);
> +			for (j = 0; j < batch_nr; j++) {
> +				if (__swap_entry_free_locked(p, offset + i * SWAP_BATCH_NR + j, 1))
> +					__bitmap_set(usage, j, 1);
> +			}
> +			unlock_cluster_or_swap_info(p, ci);
> +
> +			for_each_clear_bit(j, usage, batch_nr)
> +				free_swap_slot(swp_entry(type, offset + i * SWAP_BATCH_NR + j));
> +
> +			bitmap_clear(usage, 0, SWAP_BATCH_NR);
> +			remain_nr -= batch_nr;
> +		}
> +	}
> +}
> +
>  /*
>   * Called after dropping swapcache to decrease refcnt to swap entries.
>   */

put_swap_folio() implements batching with a different method.  Do you
think it would be good to use that batching method here as well?  It
avoids bitmap operations and extra stack space.

--
Best Regards,
Huang, Ying
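
For illustration, a minimal sketch of the kind of unification asked about
above, assuming swap_free_nr() keeps the signature proposed in the patch.
This is only a sketch of the idea, not the actual implementation; whether
the single-entry path stays fast enough this way is exactly the open
question raised in the review.

/*
 * Sketch only (names assumed from the patch above): keep one batched
 * implementation and let the single-entry case reuse it.
 */
void swap_free_nr(swp_entry_t entry, int nr_pages);	/* batched free as in the patch */

static inline void swap_free(swp_entry_t entry)
{
	/* single entry is just a batch of one */
	swap_free_nr(entry, 1);
}

With a wrapper like this, existing callers that free a single entry keep
using swap_free() unchanged, while large-folio swap-in can pass the
folio's entry count to swap_free_nr() directly.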