From: Rik van Riel <riel@surriel.com>
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, kernel-team@fb.com, songliubraving@fb.com,
    mingo@kernel.org, will.deacon@arm.com, hpa@zytor.com, luto@kernel.org,
    Rik van Riel, Linus Torvalds, Thomas Gleixner, efault@gmx.de
Subject: [PATCH 1/7] x86/mm/tlb: Always use lazy TLB mode
Date: Mon, 24 Sep 2018 14:37:53 -0400
Message-Id: <20180924183759.23955-2-riel@surriel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20180924183759.23955-1-riel@surriel.com>
References: <20180924183759.23955-1-riel@surriel.com>

Now that CPUs in lazy TLB mode no longer receive TLB shootdown IPIs, except
at page table freeing time, and idle CPUs will no longer get shootdown IPIs
for
things like mprotect and madvise, we can always use lazy TLB mode.

Tested-by: Song Liu
Signed-off-by: Rik van Riel <riel@surriel.com>
Acked-by: Dave Hansen
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: efault@gmx.de
Cc: kernel-team@fb.com
Cc: luto@kernel.org
Link: http://lkml.kernel.org/r/20180716190337.26133-7-riel@surriel.com
Signed-off-by: Ingo Molnar
(cherry picked from commit 95b0e6357d3e4e05349668940d7ff8f3b7e7e11e)
---
 arch/x86/include/asm/tlbflush.h | 16 ----------------
 arch/x86/mm/tlb.c               | 15 +--------------
 2 files changed, 1 insertion(+), 30 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index ad6629537af5..82898cd3d933 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -143,22 +143,6 @@ static inline unsigned long build_cr3_noflush(pgd_t *pgd, u16 asid)
 #define __flush_tlb_single(addr) __native_flush_tlb_single(addr)
 #endif
 
-static inline bool tlb_defer_switch_to_init_mm(void)
-{
-	/*
-	 * If we have PCID, then switching to init_mm is reasonably
-	 * fast. If we don't have PCID, then switching to init_mm is
-	 * quite slow, so we try to defer it in the hopes that we can
-	 * avoid it entirely. The latter approach runs the risk of
-	 * receiving otherwise unnecessary IPIs.
-	 *
-	 * This choice is just a heuristic. The tlb code can handle this
-	 * function returning true or false regardless of whether we have
-	 * PCID.
-	 */
-	return !static_cpu_has(X86_FEATURE_PCID);
-}
-
 struct tlb_context {
 	u64 ctx_id;
 	u64 tlb_gen;

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 063433ff67bf..d19f424073d9 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -309,20 +309,7 @@ void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 	if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
 		return;
 
-	if (tlb_defer_switch_to_init_mm()) {
-		/*
-		 * There's a significant optimization that may be possible
-		 * here. We have accurate enough TLB flush tracking that we
-		 * don't need to maintain coherence of TLB per se when we're
-		 * lazy. We do, however, need to maintain coherence of
-		 * paging-structure caches. We could, in principle, leave our
-		 * old mm loaded and only switch to init_mm when
-		 * tlb_remove_page() happens.
-		 */
-		this_cpu_write(cpu_tlbstate.is_lazy, true);
-	} else {
-		switch_mm(NULL, &init_mm, NULL);
-	}
+	this_cpu_write(cpu_tlbstate.is_lazy, true);
 }
 
 /*
-- 
2.17.1
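
For readers less familiar with this code path, the following is a minimal
user-space C sketch of the behavioural change above, not kernel code: struct
mm, struct cpu_tlbstate and the enter_lazy_tlb_*() helpers are simplified
stand-ins for the kernel's real data structures, and an eager switch_mm() is
modelled as a plain pointer assignment.

#include <stdbool.h>
#include <stdio.h>

struct mm { const char *name; };

static struct mm init_mm = { "init_mm" };

struct cpu_tlbstate {
	struct mm *loaded_mm;
	bool is_lazy;
};

/*
 * Old behaviour: tlb_defer_switch_to_init_mm() returned true only when
 * PCID was absent, so PCID hardware eagerly switched to init_mm (cheap
 * with PCID) to avoid otherwise-unnecessary shootdown IPIs.
 */
static void enter_lazy_tlb_old(struct cpu_tlbstate *st, bool has_pcid)
{
	if (st->loaded_mm == &init_mm)
		return;
	if (!has_pcid)
		st->is_lazy = true;		/* defer; risked extra IPIs */
	else
		st->loaded_mm = &init_mm;	/* models eager switch_mm() */
}

/*
 * New behaviour: lazy CPUs no longer receive shootdown IPIs except at
 * page table freeing time, so deferring is always the right choice.
 */
static void enter_lazy_tlb_new(struct cpu_tlbstate *st)
{
	if (st->loaded_mm == &init_mm)
		return;
	st->is_lazy = true;
}

int main(void)
{
	struct mm task_mm = { "task_mm" };
	struct cpu_tlbstate st = { &task_mm, false };

	enter_lazy_tlb_old(&st, true);
	printf("old, PCID: loaded_mm=%s is_lazy=%d\n",
	       st.loaded_mm->name, st.is_lazy);

	st = (struct cpu_tlbstate){ &task_mm, false };
	enter_lazy_tlb_new(&st);
	printf("new:       loaded_mm=%s is_lazy=%d\n",
	       st.loaded_mm->name, st.is_lazy);
	return 0;
}

In this sketch the old path on PCID hardware ends with loaded_mm=init_mm,
while the new path keeps the task's mm loaded and merely sets is_lazy: with
the IPI downside of laziness gone, the cheap deferral becomes unconditional
and the PCID heuristic can be deleted outright.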