Date: Tue, 15 Nov 2022 14:44:44 +0800
Subject: Re: [PATCH v6 1/2] mm/tlbbatch: Introduce arch_tlbbatch_should_defer()
To: Yicong Yang, akpm@linux-foundation.org, linux-mm@kvack.org,
        linux-arm-kernel@lists.infradead.org, x86@kernel.org,
        catalin.marinas@arm.com, will@kernel.org, anshuman.khandual@arm.com,
        linux-doc@vger.kernel.org
Cc: corbet@lwn.net, peterz@infradead.org, arnd@arndb.de,
        punit.agrawal@bytedance.com, linux-kernel@vger.kernel.org,
        darren@os.amperecomputing.com, yangyicong@hisilicon.com,
        huzhanyuan@oppo.com, lipeifeng@oppo.com, zhangshiming@oppo.com,
        guojian@oppo.com, realmz6@gmail.com, linux-mips@vger.kernel.org,
        openrisc@lists.librecores.org, linuxppc-dev@lists.ozlabs.org,
        linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
        Barry Song <21cnbao@gmail.com>, wangkefeng.wang@huawei.com,
        prime.zeng@hisilicon.com, Anshuman Khandual, Barry Song
References: <20221115031425.44640-1-yangyicong@huawei.com>
        <20221115031425.44640-2-yangyicong@huawei.com>
From: haoxin
In-Reply-To: <20221115031425.44640-2-yangyicong@huawei.com>

On 2022/11/15 11:14 AM, Yicong Yang wrote:
> From: Anshuman Khandual
>
> The entire scheme of deferred TLB flush in reclaim path rests on the
> fact that the cost to refill TLB entries is less than flushing out
> individual entries by sending IPI to remote CPUs. But architecture
> can have different ways to evaluate that.
> Hence apart from checking
> TTU_BATCH_FLUSH in the TTU flags, rest of the decision should be
> architecture specific.
>
> Signed-off-by: Anshuman Khandual
> [https://lore.kernel.org/linuxppc-dev/20171101101735.2318-2-khandual@linux.vnet.ibm.com/]
> Signed-off-by: Yicong Yang
> [Rebase and fix incorrect return value type]
> Reviewed-by: Kefeng Wang
> Reviewed-by: Anshuman Khandual
> Reviewed-by: Barry Song
> Tested-by: Punit Agrawal
> ---
>  arch/x86/include/asm/tlbflush.h | 12 ++++++++++++
>  mm/rmap.c                       |  9 +--------
>  2 files changed, 13 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> index cda3118f3b27..8a497d902c16 100644
> --- a/arch/x86/include/asm/tlbflush.h
> +++ b/arch/x86/include/asm/tlbflush.h
> @@ -240,6 +240,18 @@ static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
>          flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, PAGE_SHIFT, false);
>  }
>
> +static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
> +{
> +        bool should_defer = false;
> +
> +        /* If remote CPUs need to be flushed then defer batch the flush */
> +        if (cpumask_any_but(mm_cpumask(mm), get_cpu()) < nr_cpu_ids)
> +                should_defer = true;
> +        put_cpu();
> +
> +        return should_defer;
> +}
> +
>  static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
>  {
>          /*
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 2ec925e5fa6a..a9ab10bc0144 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -685,17 +685,10 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
>   */
>  static bool should_defer_flush(struct mm_struct *mm, enum ttu_flags flags)
>  {
> -        bool should_defer = false;
> -
>          if (!(flags & TTU_BATCH_FLUSH))
>                  return false;
>
> -        /* If remote CPUs need to be flushed then defer batch the flush */
> -        if (cpumask_any_but(mm_cpumask(mm), get_cpu()) < nr_cpu_ids)
> -                should_defer = true;
> -        put_cpu();
> -
> -        return should_defer;
> +        return arch_tlbbatch_should_defer(mm);
>  }

LGTM, thanks

Reviewed-by: Xin Hao

>
>  /*
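
P.S. For anyone skimming the thread, here is a minimal, purely illustrative
sketch of how an architecture might fill in the new hook. It is not taken
from this series; the always-defer policy shown is an assumption made only
for the example.

/*
 * Illustrative sketch, not this patch's code: on an architecture whose
 * hardware broadcasts TLB invalidations, no IPI is needed to knock out
 * stale entries on remote CPUs, so the hook could simply report that
 * batching the flush is always worthwhile.
 */
static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
{
        /* No remote CPU has to be interrupted, so deferring costs nothing. */
        return true;
}

The x86 version in the hunk above instead keeps the IPI-oriented
cpumask_any_but() check, which is exactly why the decision is being moved
into arch code.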