Date: Fri, 15 Dec 2023 03:06:21 +0000
From: Matthew Wilcox
To: Jianfeng Wang
Cc: akpm@linux-foundation.org, tim.c.chen@linux.intel.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] mm: remove redundant lru_add_drain() prior to
	unmapping pages
References: <20231214222717.50277-1-jianfeng.w.wang@oracle.com>
	<9792a7f5-62cd-45e1-b7d6-406592cc566d@oracle.com>
In-Reply-To: <9792a7f5-62cd-45e1-b7d6-406592cc566d@oracle.com>

On Thu, Dec 14, 2023 at 03:59:00PM -0800, Jianfeng Wang wrote:
> On 12/14/23 3:00 PM, Matthew Wilcox wrote:
> > On Thu, Dec 14, 2023 at 02:27:17PM -0800, Jianfeng Wang wrote:
> >> When unmapping VMA pages, pages will be gathered in batch and
> >> released by tlb_finish_mmu() if CONFIG_MMU_GATHER_NO_GATHER is not
> >> set. The function tlb_finish_mmu() is responsible for calling
> >> free_pages_and_swap_cache(), which calls lru_add_drain() to drain
> >> cached pages in folio_batch before releasing gathered pages.
> >> Thus, it is redundant to call lru_add_drain() before gathering
> >> pages, if CONFIG_MMU_GATHER_NO_GATHER is not set.
> >>
> >> Remove lru_add_drain() prior to gathering and unmapping pages in
> >> exit_mmap() and unmap_region() if CONFIG_MMU_GATHER_NO_GATHER is
> >> not set.
> >>
> >> Note that the page unmapping process in oom_killer (e.g., in
> >> __oom_reap_task_mm()) also uses tlb_finish_mmu() and does not have
> >> a redundant lru_add_drain(). So, this commit makes the code more
> >> consistent.
> >
> > Shouldn't we put this in __tlb_gather_mmu(), which already has the
> > CONFIG_MMU_GATHER_NO_GATHER ifdefs? That would presumably help
> > with, e.g., zap_page_range_single() too.
>
> Thanks. It makes sense to me.
> This commit is motivated by a workload that uses mmap/munmap heavily.
> Since the mmu_gather feature is also used by hugetlb, madvise,
> mprotect, etc., I prefer to have another standalone commit (following
> this one) that moves lru_add_drain() to __tlb_gather_mmu() to unify
> these cases, avoiding redundant lru_add_drain() calls when using
> mmu_gather.

That's not normally the approach we take.
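For reference, the suggestion above — doing the drain once in __tlb_gather_mmu() rather than in each caller — might look roughly like the sketch below. This is pseudocode, not a tested patch: the surrounding initialisation in mm/mmu_gather.c is elided as "...", and the exact function signature should be checked against the current tree.

```c
/* Pseudocode sketch only; not verified against a current kernel tree. */
static void __tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
			     bool fullmm)
{
	...
#ifndef CONFIG_MMU_GATHER_NO_GATHER
	/*
	 * Drain here, once, so that callers such as exit_mmap(),
	 * unmap_region(), and zap_page_range_single() no longer need
	 * their own lru_add_drain() before gathering pages.
	 */
	lru_add_drain();
	...
#endif
	...
}
```

Since tlb_finish_mmu()'s free path already drains via free_pages_and_swap_cache(), the question in the thread is only where the callers' pre-gather drain should live, and whether it should land as one change or as a follow-up commit.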