From: Nadav Amit <namit@vmware.com>
To: Peter Zijlstra, Andy Lutomirski
Cc: linux-kernel@vger.kernel.org, Ingo Molnar, Borislav Petkov,
    x86@kernel.org, Thomas Gleixner, Dave Hansen, Nadav Amit,
    "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger, Sasha Levin,
    Juergen Gross, Paolo Bonzini, Boris Ostrovsky,
    linux-hyperv@vger.kernel.org, virtualization@lists.linux-foundation.org,
    kvm@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: [PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
Date: Wed, 12 Jun 2019 23:48:08 -0700
Message-Id: <20190613064813.8102-5-namit@vmware.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190613064813.8102-1-namit@vmware.com>
References: <20190613064813.8102-1-namit@vmware.com>

To improve TLB shootdown performance, flush the remote and local TLBs
concurrently. Introduce flush_tlb_multi(), which does so. The existing
flush_tlb_others() interface is kept, since the paravirtual interfaces
must be adapted before it can be removed; that is left for future work.
In such PV environments, TLB flushes are, for now, not performed
concurrently.

Add a static key that indicates whether the new interface is supported.

Cc: "K. Y. Srinivasan"
Cc: Haiyang Zhang
Cc: Stephen Hemminger
Cc: Sasha Levin
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: x86@kernel.org
Cc: Juergen Gross
Cc: Paolo Bonzini
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Peter Zijlstra
Cc: Boris Ostrovsky
Cc: linux-hyperv@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org
Cc: kvm@vger.kernel.org
Cc: xen-devel@lists.xenproject.org
Signed-off-by: Nadav Amit
---
 arch/x86/hyperv/mmu.c                 |  2 +
 arch/x86/include/asm/paravirt.h       |  8 +++
 arch/x86/include/asm/paravirt_types.h |  6 +++
 arch/x86/include/asm/tlbflush.h       |  6 +++
 arch/x86/kernel/kvm.c                 |  1 +
 arch/x86/kernel/paravirt.c            |  3 ++
 arch/x86/mm/tlb.c                     | 71 ++++++++++++++++++++++-----
 arch/x86/xen/mmu_pv.c                 |  2 +
 8 files changed, 87 insertions(+), 12 deletions(-)

diff --git a/arch/x86/hyperv/mmu.c b/arch/x86/hyperv/mmu.c
index e65d7fe6489f..ca28b400c87c 100644
--- a/arch/x86/hyperv/mmu.c
+++ b/arch/x86/hyperv/mmu.c
@@ -233,4 +233,6 @@ void hyperv_setup_mmu_ops(void)
 	pr_info("Using hypercall for remote TLB flush\n");
 	pv_ops.mmu.flush_tlb_others = hyperv_flush_tlb_others;
 	pv_ops.mmu.tlb_remove_table = tlb_remove_table;
+
+	static_key_disable(&flush_tlb_multi_enabled.key);
 }
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index c25c38a05c1c..192be7254457 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -47,6 +47,8 @@ static inline void slow_down_io(void)
 #endif
 }
 
+DECLARE_STATIC_KEY_TRUE(flush_tlb_multi_enabled);
+
 static inline void __flush_tlb(void)
 {
 	PVOP_VCALL0(mmu.flush_tlb_user);
@@ -62,6 +64,12 @@ static inline void __flush_tlb_one_user(unsigned long addr)
 	PVOP_VCALL1(mmu.flush_tlb_one_user, addr);
 }
 
+static inline void flush_tlb_multi(const struct cpumask *cpumask,
+				   const struct flush_tlb_info *info)
+{
+	PVOP_VCALL2(mmu.flush_tlb_multi, cpumask, info);
+}
+
 static inline void flush_tlb_others(const struct cpumask *cpumask,
 				    const struct flush_tlb_info *info)
 {
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 946f8f1f1efc..b93b3d90729a 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -211,6 +211,12 @@ struct pv_mmu_ops {
 	void (*flush_tlb_user)(void);
 	void (*flush_tlb_kernel)(void);
 	void (*flush_tlb_one_user)(unsigned long addr);
+	/*
+	 * flush_tlb_multi() is the preferred interface, which is capable of
+	 * flushing both local and remote CPUs.
+	 */
+	void (*flush_tlb_multi)(const struct cpumask *cpus,
+				const struct flush_tlb_info *info);
 	void (*flush_tlb_others)(const struct cpumask *cpus,
 				 const struct flush_tlb_info *info);
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index dee375831962..79272938cf79 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -569,6 +569,9 @@ static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
 	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, PAGE_SHIFT, false);
 }
 
+void native_flush_tlb_multi(const struct cpumask *cpumask,
+			    const struct flush_tlb_info *info);
+
 void native_flush_tlb_others(const struct cpumask *cpumask,
 			     const struct flush_tlb_info *info);
 
@@ -593,6 +596,9 @@ static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
 extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
 
 #ifndef CONFIG_PARAVIRT
+#define flush_tlb_multi(mask, info)	\
+	native_flush_tlb_multi(mask, info)
+
 #define flush_tlb_others(mask, info)	\
 	native_flush_tlb_others(mask, info)
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 5169b8cc35bb..00d81e898717 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -630,6 +630,7 @@ static void __init kvm_guest_init(void)
 	    kvm_para_has_feature(KVM_FEATURE_STEAL_TIME)) {
 		pv_ops.mmu.flush_tlb_others = kvm_flush_tlb_others;
 		pv_ops.mmu.tlb_remove_table = tlb_remove_table;
+		static_key_disable(&flush_tlb_multi_enabled.key);
 	}
 
 	if (kvm_para_has_feature(KVM_FEATURE_PV_EOI))
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 98039d7fb998..ac00afed5570 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -159,6 +159,8 @@ unsigned paravirt_patch_insns(void *insn_buff, unsigned len,
 	return insn_len;
 }
 
+DEFINE_STATIC_KEY_TRUE(flush_tlb_multi_enabled);
+
 static void native_flush_tlb(void)
 {
 	__native_flush_tlb();
@@ -363,6 +365,7 @@ struct paravirt_patch_template pv_ops = {
 	.mmu.flush_tlb_user = native_flush_tlb,
 	.mmu.flush_tlb_kernel = native_flush_tlb_global,
 	.mmu.flush_tlb_one_user = native_flush_tlb_one_user,
+	.mmu.flush_tlb_multi = native_flush_tlb_multi,
 	.mmu.flush_tlb_others = native_flush_tlb_others,
 	.mmu.tlb_remove_table =
 			(void (*)(struct mmu_gather *, void *))tlb_remove_page,
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index c34bcf03f06f..db73d5f1dd43 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -551,7 +551,7 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
 		 * garbage into our TLB. Since switching to init_mm is barely
 		 * slower than a minimal flush, just switch to init_mm.
 		 *
-		 * This should be rare, with native_flush_tlb_others skipping
+		 * This should be rare, with native_flush_tlb_multi skipping
 		 * IPIs to lazy TLB mode CPUs.
 		 */
 		switch_mm_irqs_off(NULL, &init_mm, NULL);
@@ -635,9 +635,12 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
 	this_cpu_write(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, mm_tlb_gen);
 }
 
-static void flush_tlb_func_local(const void *info, enum tlb_flush_reason reason)
+static void flush_tlb_func_local(void *info)
 {
 	const struct flush_tlb_info *f = info;
+	enum tlb_flush_reason reason;
+
+	reason = (f->mm == NULL) ? TLB_LOCAL_SHOOTDOWN : TLB_LOCAL_MM_SHOOTDOWN;
 
 	flush_tlb_func_common(f, true, reason);
 }
@@ -655,14 +658,21 @@ static void flush_tlb_func_remote(void *info)
 	flush_tlb_func_common(f, false, TLB_REMOTE_SHOOTDOWN);
 }
 
-static bool tlb_is_not_lazy(int cpu, void *data)
+static inline bool tlb_is_not_lazy(int cpu)
 {
 	return !per_cpu(cpu_tlbstate.is_lazy, cpu);
 }
 
-void native_flush_tlb_others(const struct cpumask *cpumask,
-			     const struct flush_tlb_info *info)
+static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask);
+
+void native_flush_tlb_multi(const struct cpumask *cpumask,
+			    const struct flush_tlb_info *info)
 {
+	/*
+	 * Do accounting and tracing. Note that there are (and have always
+	 * been) cases in which a remote TLB flush is traced but ultimately
+	 * does not happen.
+	 */
 	count_vm_tlb_event(NR_TLB_REMOTE_FLUSH);
 	if (info->end == TLB_FLUSH_ALL)
 		trace_tlb_flush(TLB_REMOTE_SEND_IPI, TLB_FLUSH_ALL);
@@ -682,10 +692,14 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
 		 * means that the percpu tlb_gen variables won't be updated
 		 * and we'll do pointless flushes on future context switches.
 		 *
-		 * Rather than hooking native_flush_tlb_others() here, I think
+		 * Rather than hooking native_flush_tlb_multi() here, I think
 		 * that UV should be updated so that smp_call_function_many(),
 		 * etc, are optimal on UV.
 		 */
+		local_irq_disable();
+		flush_tlb_func_local((__force void *)info);
+		local_irq_enable();
+
 		cpumask = uv_flush_tlb_others(cpumask, info);
 		if (cpumask)
 			smp_call_function_many(cpumask, flush_tlb_func_remote,
@@ -704,11 +718,39 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
 	 * doing a speculative memory access.
 	 */
 	if (info->freed_tables)
-		smp_call_function_many(cpumask, flush_tlb_func_remote,
-				       (void *)info, 1);
-	else
-		on_each_cpu_cond_mask(tlb_is_not_lazy, flush_tlb_func_remote,
-				(void *)info, 1, GFP_ATOMIC, cpumask);
+		__smp_call_function_many(cpumask, flush_tlb_func_remote,
+					 flush_tlb_func_local, (void *)info, 1);
+	else {
+		/*
+		 * Although we could have used on_each_cpu_cond_mask(),
+		 * open-coding it has several performance advantages: (1) we
+		 * can use specialized functions for remote and local flushes;
+		 * (2) there is no need for an indirect branch to test whether
+		 * the TLB is lazy; (3) we can use a designated cpumask for
+		 * evaluating the condition instead of allocating a new one.
+		 *
+		 * This works under the assumption that there are no nested TLB
+		 * flushes, an assumption that is already made in
+		 * flush_tlb_mm_range().
+		 */
+		struct cpumask *cond_cpumask = this_cpu_ptr(&flush_tlb_mask);
+		int cpu;
+
+		cpumask_clear(cond_cpumask);
+
+		for_each_cpu(cpu, cpumask) {
+			if (tlb_is_not_lazy(cpu))
+				__cpumask_set_cpu(cpu, cond_cpumask);
+		}
+		__smp_call_function_many(cond_cpumask, flush_tlb_func_remote,
+					 flush_tlb_func_local, (void *)info, 1);
+	}
+}
+
+void native_flush_tlb_others(const struct cpumask *cpumask,
+			     const struct flush_tlb_info *info)
+{
+	native_flush_tlb_multi(cpumask, info);
 }
 
 /*
@@ -774,10 +816,15 @@ static void flush_tlb_on_cpus(const cpumask_t *cpumask,
 {
 	int this_cpu = smp_processor_id();
 
+	if (static_branch_likely(&flush_tlb_multi_enabled)) {
+		flush_tlb_multi(cpumask, info);
+		return;
+	}
+
 	if (cpumask_test_cpu(this_cpu, cpumask)) {
 		lockdep_assert_irqs_enabled();
 		local_irq_disable();
-		flush_tlb_func_local(info, TLB_LOCAL_MM_SHOOTDOWN);
+		flush_tlb_func_local((__force void *)info);
 		local_irq_enable();
 	}
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index beb44e22afdf..0cb277848cb4 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -2474,6 +2474,8 @@ void __init xen_init_mmu_ops(void)
 {
 	pv_ops.mmu = xen_mmu_ops;
 
+	static_key_disable(&flush_tlb_multi_enabled.key);
+
 	memset(dummy_mapping, 0xff, PAGE_SIZE);
 }
-- 
2.20.1