Message-ID: <9b6e1098-3d0a-d01c-ede2-344730a9450e@linux.alibaba.com>
Date: Mon, 14 Nov 2022 16:00:17 +0800
Subject: Re: [PATCH v5 2/2] arm64: support batched/deferred tlb shootdown during page reclamation
From: haoxin <xhao@linux.alibaba.com>
To: Yicong Yang, akpm@linux-foundation.org, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, x86@kernel.org, catalin.marinas@arm.com, will@kernel.org, anshuman.khandual@arm.com, linux-doc@vger.kernel.org
Cc: corbet@lwn.net, peterz@infradead.org, arnd@arndb.de, punit.agrawal@bytedance.com, linux-kernel@vger.kernel.org, darren@os.amperecomputing.com, yangyicong@hisilicon.com, huzhanyuan@oppo.com, lipeifeng@oppo.com, zhangshiming@oppo.com, guojian@oppo.com, realmz6@gmail.com, linux-mips@vger.kernel.org, openrisc@lists.librecores.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, Barry Song <21cnbao@gmail.com>, wangkefeng.wang@huawei.com, prime.zeng@hisilicon.com, Barry Song, Nadav Amit, Mel Gorman
References: <20221028081255.19157-1-yangyicong@huawei.com> <20221028081255.19157-3-yangyicong@huawei.com>
In-Reply-To: <20221028081255.19157-3-yangyicong@huawei.com>

On 2022/10/28 4:12 PM, Yicong Yang wrote:
> From: Barry Song
>
> On x86, batched and deferred TLB shootdown has led to a 90%
> performance increase on TLB shootdown. On arm64, the HW can do
> TLB shootdown without a software IPI, but a sync tlbi is still
> quite expensive.
>
> Even running the simplest program which requires swapout can
> prove this is true:
>
>  #include <sys/types.h>
>  #include <unistd.h>
>  #include <sys/mman.h>
>  #include <string.h>
>
>  int main()
>  {
>  #define SIZE (1 * 1024 * 1024)
>          volatile unsigned char *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
>                                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);
>
>          memset(p, 0x88, SIZE);
>
>          for (int k = 0; k < 10000; k++) {
>                  /* swap in */
>                  for (int i = 0; i < SIZE; i += 4096) {
>                          (void)p[i];
>                  }
>
>                  /* swap out */
>                  madvise(p, SIZE, MADV_PAGEOUT);
>          }
>  }
>
> Perf result on snapdragon 888 with 8 cores, using zRAM
> as the swap block device:
>
> ~ # perf record taskset -c 4 ./a.out
> [ perf record: Woken up 10 times to write data ]
> [ perf record: Captured and wrote 2.297 MB perf.data (60084 samples) ]
> ~ # perf report
> # To display the perf.data header info, please use --header/--header-only options.
> #
> # Total Lost Samples: 0
> #
> # Samples: 60K of event 'cycles'
> # Event count (approx.): 35706225414
> #
> # Overhead  Command  Shared Object      Symbol
> # ........  .......  .................  ..........................................
> #
>     21.07%  a.out    [kernel.kallsyms]  [k] _raw_spin_unlock_irq
>      8.23%  a.out    [kernel.kallsyms]  [k] _raw_spin_unlock_irqrestore
>      6.67%  a.out    [kernel.kallsyms]  [k] filemap_map_pages
>      6.16%  a.out    [kernel.kallsyms]  [k] __zram_bvec_write
>      5.36%  a.out    [kernel.kallsyms]  [k] ptep_clear_flush
>      3.71%  a.out    [kernel.kallsyms]  [k] _raw_spin_lock
>      3.49%  a.out    [kernel.kallsyms]  [k] memset64
>      1.63%  a.out    [kernel.kallsyms]  [k] clear_page
>      1.42%  a.out    [kernel.kallsyms]  [k] _raw_spin_unlock
>      1.26%  a.out    [kernel.kallsyms]  [k] mod_zone_state.llvm.8525150236079521930
>      1.23%  a.out    [kernel.kallsyms]  [k] xas_load
>      1.15%  a.out    [kernel.kallsyms]  [k] zram_slot_lock
>
> ptep_clear_flush() takes 5.36% of the CPU in this micro-benchmark,
> which swaps in/out a page mapped by only one process. If the
> page is mapped by multiple processes, typically more than 100
> on a phone, the overhead would be much higher as we have to run
> the TLB flush 100 times for one single page. Plus, the TLB flush
> overhead will increase with the number of CPU cores due to the
> bad scalability of TLB shootdown in HW, so arm64 servers should
> expect much higher overhead.
>
> Further perf annotate shows that 95% of the CPU time of
> ptep_clear_flush() is actually spent in the final dsb() waiting
> for the completion of the TLB flush. This gives us a very good
> chance to leverage the existing batched TLB infrastructure in the
> kernel. The minimum modification is that we only send an async
> tlbi in the first stage and issue the dsb only when we have to
> sync in the second stage.
>
> With the above micro benchmark, the collapsed time to finish the
> program decreases by around 5%.
>
> Typical collapsed time w/o patch:
> ~ # time taskset -c 4 ./a.out
> 0.21user 14.34system 0:14.69elapsed
> w/ patch:
> ~ # time taskset -c 4 ./a.out
> 0.22user 13.45system 0:13.80elapsed
>
> Also, Yicong Yang added the following observation: tested with the
> benchmark in the commit on a Kunpeng920 arm64 server, an improvement
> of around 12.5% was observed with the command `time ./swap_bench`.
>         w/o             w/
> real    0m13.460s       0m11.771s
> user    0m0.248s        0m0.279s
> sys     0m12.039s       0m11.458s
>
> Originally a 16.99% overhead of ptep_clear_flush() was noticed,
> which has been eliminated by this patch:
>
> [root@localhost yang]# perf record -- ./swap_bench && perf report
> [...]
> 16.99%  swap_bench  [kernel.kallsyms]  [k] ptep_clear_flush
>
> It has been tested on 4-, 8- and 128-CPU platforms and shows to be
> beneficial on large systems, but may not bring an improvement on small
> systems such as a 4-CPU platform. So make ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
> depend on CONFIG_EXPERT for this stage and only enable it on systems
> with more than 8 CPUs. Users can modify this threshold for their own
> platforms via CONFIG_ARM64_NR_CPUS_FOR_BATCHED_TLB.
>
> Cc: Anshuman Khandual
> Cc: Jonathan Corbet
> Cc: Nadav Amit
> Cc: Mel Gorman
> Tested-by: Yicong Yang
> Tested-by: Xin Hao
> Signed-off-by: Barry Song
> Signed-off-by: Yicong Yang
> Reviewed-by: Kefeng Wang
> ---
>  .../features/vm/TLB/arch-support.txt |  2 +-
>  arch/arm64/Kconfig                   |  6 +++
>  arch/arm64/include/asm/tlbbatch.h    | 12 +++++
>  arch/arm64/include/asm/tlbflush.h    | 46 ++++++++++++++++++-
>  arch/x86/include/asm/tlbflush.h      |  3 +-
>  mm/rmap.c                            | 10 ++--
>  6 files changed, 71 insertions(+), 8 deletions(-)
>  create mode 100644 arch/arm64/include/asm/tlbbatch.h
>
> diff --git a/Documentation/features/vm/TLB/arch-support.txt b/Documentation/features/vm/TLB/arch-support.txt
> index 039e4e91ada3..2caf815d7c6c 100644
> --- a/Documentation/features/vm/TLB/arch-support.txt
> +++ b/Documentation/features/vm/TLB/arch-support.txt
> @@ -9,7 +9,7 @@
>      |       alpha: | TODO |
>      |         arc: | TODO |
>      |         arm: | TODO |
> -    |       arm64: |  N/A |
> +    |       arm64: |  ok  |
>      |        csky: | TODO |
>      |     hexagon: | TODO |
>      |        ia64: | TODO |
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 505c8a1ccbe0..72975e82c7d7 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -93,6 +93,7 @@ config ARM64
>  	select ARCH_SUPPORTS_INT128 if CC_HAS_INT128
>  	select ARCH_SUPPORTS_NUMA_BALANCING
>  	select ARCH_SUPPORTS_PAGE_TABLE_CHECK
> +	select ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH if EXPERT
>  	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION if COMPAT
>  	select ARCH_WANT_DEFAULT_BPF_JIT
>  	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
> @@ -268,6 +269,11 @@ config ARM64_CONT_PMD_SHIFT
>  	default 5 if ARM64_16K_PAGES
>  	default 4
>  
> +config ARM64_NR_CPUS_FOR_BATCHED_TLB
> +	int "Threshold to enable batched TLB flush"
> +	default 8
> +	depends on ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
> +
>  config ARCH_MMAP_RND_BITS_MIN
>  	default 14 if ARM64_64K_PAGES
>  	default 16 if ARM64_16K_PAGES
> diff --git a/arch/arm64/include/asm/tlbbatch.h b/arch/arm64/include/asm/tlbbatch.h
> new file mode 100644
> index 000000000000..fedb0b87b8db
> --- /dev/null
> +++ b/arch/arm64/include/asm/tlbbatch.h
> @@ -0,0 +1,12 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef _ARCH_ARM64_TLBBATCH_H
> +#define _ARCH_ARM64_TLBBATCH_H
> +
> +struct arch_tlbflush_unmap_batch {
> +	/*
> +	 * For arm64, HW can do tlb shootdown, so we don't
> +	 * need to record cpumask for sending IPI
> +	 */
> +};
> +
> +#endif /* _ARCH_ARM64_TLBBATCH_H */
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index 412a3b9a3c25..b21cdeb57a18 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -254,17 +254,23 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
>  	dsb(ish);
>  }
>  
> -static inline void flush_tlb_page_nosync(struct vm_area_struct *vma,
> +static inline void __flush_tlb_page_nosync(struct mm_struct *mm,
>  					 unsigned long uaddr)
>  {
>  	unsigned long addr;
>  
>  	dsb(ishst);
> -	addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));
> +	addr = __TLBI_VADDR(uaddr, ASID(mm));
>  	__tlbi(vale1is, addr);
>  	__tlbi_user(vale1is, addr);
>  }
>  
> +static inline void flush_tlb_page_nosync(struct vm_area_struct *vma,
> +					 unsigned long uaddr)
> +{
> +	return __flush_tlb_page_nosync(vma->vm_mm, uaddr);
> +}
> +
>  static inline void flush_tlb_page(struct vm_area_struct *vma,
>  				  unsigned long uaddr)
>  {
> @@ -272,6 +278,42 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
>  	dsb(ish);
>  }
>  
> +#ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
> +
> +static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
> +{
> +	/*
> +	 * TLB batched flush is proved to be beneficial for systems with large
> +	 * number of CPUs, especially systems with more than 8 CPUs. TLB
> +	 * shootdown is cheap on small systems which may not need this feature.
> +	 * So use a threshold for enabling this to avoid potential side effects
> +	 * on these platforms.
> +	 */
> +	if (num_online_cpus() <= CONFIG_ARM64_NR_CPUS_FOR_BATCHED_TLB)
> +		return false;
> +
> +#ifdef CONFIG_ARM64_WORKAROUND_REPEAT_TLBI
> +	if (unlikely(this_cpu_has_cap(ARM64_WORKAROUND_REPEAT_TLBI)))

Maybe a comment is needed here explaining why we should return false when
ARM64_WORKAROUND_REPEAT_TLBI is set? (Something like the snippet at the end
of this mail.)

> +		return false;
> +#endif
> +
> +	return true;
> +}
> +
> +static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
> +					struct mm_struct *mm,
> +					unsigned long uaddr)
> +{
> +	__flush_tlb_page_nosync(mm, uaddr);
> +}
> +
> +static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
> +{
> +	dsb(ish);
> +}
> +
> +#endif /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */
> +
>  /*
>   * This is meant to avoid soft lock-ups on large TLB flushing ranges and not
>   * necessarily a performance improvement.
> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> index 8a497d902c16..5bd78ae55cd4 100644
> --- a/arch/x86/include/asm/tlbflush.h
> +++ b/arch/x86/include/asm/tlbflush.h
> @@ -264,7 +264,8 @@ static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
>  }
>  
>  static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
> -					struct mm_struct *mm)
> +					struct mm_struct *mm,
> +					unsigned long uaddr)
>  {
>  	inc_mm_tlb_gen(mm);
>  	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
> diff --git a/mm/rmap.c b/mm/rmap.c
> index a9ab10bc0144..a1b408ff44e5 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -640,12 +640,13 @@ void try_to_unmap_flush_dirty(void)
>  #define TLB_FLUSH_BATCH_PENDING_LARGE			\
>  	(TLB_FLUSH_BATCH_PENDING_MASK / 2)
>  
> -static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
> +static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable,
> +				      unsigned long uaddr)
>  {
>  	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
>  	int batch, nbatch;
>  
> -	arch_tlbbatch_add_mm(&tlb_ubc->arch, mm);
> +	arch_tlbbatch_add_mm(&tlb_ubc->arch, mm, uaddr);
>  	tlb_ubc->flush_required = true;
>  
>  	/*
> @@ -723,7 +724,8 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
>  	}
>  }
>  #else
> -static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
> +static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable,
> +				      unsigned long uaddr)
>  {
>  }
>  
> @@ -1596,7 +1598,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>  			 */
>  			pteval = ptep_get_and_clear(mm, address, pvmw.pte);
>  
> -			set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
> +			set_tlb_ubc_flush_pending(mm, pte_dirty(pteval), address);
>  		} else {
>  			pteval = ptep_clear_flush(vma, address, pvmw.pte);
>  		}
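
To be concrete, something like the below is what I mean. The code is
unchanged from your patch, only the comment is added, and the wording is
just my understanding of how __tlbi()/__tlbi_user() behave when the
REPEAT_TLBI workaround is active, so please correct it if I got that wrong:

#ifdef CONFIG_ARM64_WORKAROUND_REPEAT_TLBI
	/*
	 * With ARM64_WORKAROUND_REPEAT_TLBI, __tlbi()/__tlbi_user() already
	 * emit two TLBI instructions with a dsb(ish) in between, so even the
	 * "nosync" flush is not really asynchronous on affected CPUs.
	 * Deferring the final dsb(ish) to arch_tlbbatch_flush() would save
	 * nothing there, so fall back to the non-batched path.
	 */
	if (unlikely(this_cpu_has_cap(ARM64_WORKAROUND_REPEAT_TLBI)))
		return false;
#endif

With a comment like that in place, the check is much easier to follow for
readers who are not familiar with that erratum workaround.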