From: Rik van Riel
To: linux-kernel@vger.kernel.org
Cc: x86@kernel.org, luto@kernel.org, dave.hansen@linux.intel.com, mingo@kernel.org, kernel-team@fb.com, tglx@linutronix.de, efault@gmx.de, songliubraving@fb.com, Rik van Riel
Subject: [PATCH 5/6] x86,mm: always use lazy TLB mode
Date: Tue, 26 Jun 2018 13:31:25 -0400
Message-Id: <20180626173126.12296-6-riel@surriel.com>
In-Reply-To: <20180626173126.12296-1-riel@surriel.com>
References: <20180626173126.12296-1-riel@surriel.com>

Now that CPUs in lazy TLB mode no longer receive TLB shootdown IPIs except at page table freeing time, and idle CPUs no longer get shootdown IPIs for things like mprotect and madvise, we can always use lazy TLB mode.
Signed-off-by: Rik van Riel
Tested-by: Song Liu
---
 arch/x86/include/asm/tlbflush.h | 16 ----------------
 arch/x86/mm/tlb.c               | 15 +--------------
 2 files changed, 1 insertion(+), 30 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 3aa3204b5dc0..511bf5fae8b8 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -148,22 +148,6 @@ static inline unsigned long build_cr3_noflush(pgd_t *pgd, u16 asid)
 #define __flush_tlb_one_user(addr) __native_flush_tlb_one_user(addr)
 #endif
 
-static inline bool tlb_defer_switch_to_init_mm(void)
-{
-	/*
-	 * If we have PCID, then switching to init_mm is reasonably
-	 * fast. If we don't have PCID, then switching to init_mm is
-	 * quite slow, so we try to defer it in the hopes that we can
-	 * avoid it entirely. The latter approach runs the risk of
-	 * receiving otherwise unnecessary IPIs.
-	 *
-	 * This choice is just a heuristic. The tlb code can handle this
-	 * function returning true or false regardless of whether we have
-	 * PCID.
-	 */
-	return !static_cpu_has(X86_FEATURE_PCID);
-}
-
 struct tlb_context {
 	u64 ctx_id;
 	u64 tlb_gen;
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 03512772395f..96ab4eacda95 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -367,20 +367,7 @@ void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 	if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
 		return;
 
-	if (tlb_defer_switch_to_init_mm()) {
-		/*
-		 * There's a significant optimization that may be possible
-		 * here. We have accurate enough TLB flush tracking that we
-		 * don't need to maintain coherence of TLB per se when we're
-		 * lazy. We do, however, need to maintain coherence of
-		 * paging-structure caches. We could, in principle, leave our
-		 * old mm loaded and only switch to init_mm when
-		 * tlb_remove_page() happens.
-		 */
-		this_cpu_write(cpu_tlbstate.is_lazy, true);
-	} else {
-		switch_mm(NULL, &init_mm, NULL);
-	}
+	this_cpu_write(cpu_tlbstate.is_lazy, true);
 }
 
 /*
-- 
2.14.4