Message-ID: <962b75502b6456a8b698a4ca89d6deedec118ef6.camel@linux.intel.com>
Subject: Re: [PATCH] mm: remove redundant lru_add_drain() prior to unmapping pages
From: Tim Chen
To: Jianfeng Wang, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Thu, 14 Dec 2023 09:57:39 -0800
References: <20231213072805.74201-1-jianfeng.w.wang@oracle.com> <3c7d9b8878220571cb7e0760c3a463951252b762.camel@linux.intel.com>

On Wed, 2023-12-13 at 17:03 -0800, Jianfeng Wang wrote:
> On 12/13/23 2:57 PM, Tim Chen wrote:
> > On Tue, 2023-12-12 at 23:28 -0800,
> > Jianfeng Wang wrote:
> > > When unmapping VMA pages, pages will be gathered in batch and released by
> > > tlb_finish_mmu() if CONFIG_MMU_GATHER_NO_GATHER is not set. The function
> > > tlb_finish_mmu() is responsible for calling free_pages_and_swap_cache(),
> > > which calls lru_add_drain() to drain cached pages in folio_batch before
> > > releasing gathered pages. Thus, it is redundant to call lru_add_drain()
> > > before gathering pages, if CONFIG_MMU_GATHER_NO_GATHER is not set.
> > >
> > > Remove lru_add_drain() prior to gathering and unmapping pages in
> > > exit_mmap() and unmap_region() if CONFIG_MMU_GATHER_NO_GATHER is not set.
> > >
> > > Note that the page unmapping process in oom_killer (e.g., in
> > > __oom_reap_task_mm()) also uses tlb_finish_mmu() and does not have
> > > redundant lru_add_drain(). So, this commit makes the code more consistent.
> > >
> > > Signed-off-by: Jianfeng Wang
> > > ---
> > >  mm/mmap.c | 4 ++++
> > >  1 file changed, 4 insertions(+)
> > >
> > > diff --git a/mm/mmap.c b/mm/mmap.c
> > > index 1971bfffcc03..0451285dee4f 100644
> > > --- a/mm/mmap.c
> > > +++ b/mm/mmap.c
> > > @@ -2330,7 +2330,9 @@ static void unmap_region(struct mm_struct *mm, struct ma_state *mas,
> > >  	struct mmu_gather tlb;
> > >  	unsigned long mt_start = mas->index;
> > >
> > > +#ifdef CONFIG_MMU_GATHER_NO_GATHER
> >
> > In your comment you say skip lru_add_drain() when CONFIG_MMU_GATHER_NO_GATHER
> > is *not* set. So shouldn't this be
> >
> > #ifndef CONFIG_MMU_GATHER_NO_GATHER ?
> >
> Hi Tim,
>
> The mmu_gather feature is used to gather pages produced by unmap_vmas() and
> release them in batch in tlb_finish_mmu(). The feature is *on* if
> CONFIG_MMU_GATHER_NO_GATHER is *not* set. Note that: tlb_finish_mmu() will call
> free_pages_and_swap_cache()/lru_add_drain() only when the feature is on.

Thanks for the explanation.
Looking at the code, lru_add_drain() is executed for #ifndef CONFIG_MMU_GATHER_NO_GATHER
in tlb_finish_mmu(). So the logic of your patch is fine. The #ifndef
CONFIG_MMU_GATHER_NO_GATHER means the mmu_gather feature is on. The double
negative threw me off on my first read of your commit log.

Suggest that you add a comment in the code to make it easier for future code
maintenance:

/* defer lru_add_drain() to tlb_finish_mmu() for ifndef CONFIG_MMU_GATHER_NO_GATHER */

Is your change of skipping the extra lru_add_drain() motivated by some performance
reason in a workload? Wonder whether it is worth adding an extra ifdef in the code.

Tim

> Yes, this commit aims to skip lru_add_drain() when CONFIG_MMU_GATHER_NO_GATHER
> is *not* set (i.e. when the mmu_gather feature is on) because it is redundant.
>
> If CONFIG_MMU_GATHER_NO_GATHER is set, pages will be released in unmap_vmas().
> tlb_finish_mmu() will not call lru_add_drain(). So, it is still necessary to
> keep the lru_add_drain() call to clear cached pages before unmap_vmas(), as
> folio_batchs hold a reference count for pages in them.
>
> The same applies to the other case.
>
> Thanks,
> - Jianfeng
>
> > >  	lru_add_drain();
> > > +#endif
> > >  	tlb_gather_mmu(&tlb, mm);
> > >  	update_hiwater_rss(mm);
> > >  	unmap_vmas(&tlb, mas, vma, start, end, tree_end, mm_wr_locked);
> > > @@ -3300,7 +3302,9 @@ void exit_mmap(struct mm_struct *mm)
> > >  		return;
> > >  	}
> > >
> > > +#ifdef CONFIG_MMU_GATHER_NO_GATHER
> >
> > same question as above.
> >
> > >  	lru_add_drain();
> > > +#endif
> > >  	flush_cache_mm(mm);
> > >  	tlb_gather_mmu_fullmm(&tlb, mm);
> > >  	/* update_hiwater_rss(mm) here? but nobody should be looking */
> >