From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: kirill.shutemov@linux.intel.com
Cc: ak@linux.intel.com, andreyknvl@gmail.com, dave.hansen@linux.intel.com,
    dvyukov@google.com, glider@google.com, hjl.tools@gmail.com,
    kcc@google.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    luto@kernel.org, peterz@infradead.org, rick.p.edgecombe@intel.com,
    ryabinin.a.a@gmail.com, tarasmadan@google.com, x86@kernel.org
Subject: [PATCHv5.1 04/13] x86/mm: Handle LAM on context switch
Date: Wed, 13 Jul 2022 18:02:00 +0300
Message-Id: <20220713150200.17080-1-kirill.shutemov@linux.intel.com>
In-Reply-To: <20220712231328.5294-6-kirill.shutemov@linux.intel.com>
References: <20220712231328.5294-6-kirill.shutemov@linux.intel.com>

Linear Address Masking mode for userspace pointers is encoded in CR3 bits.
The mode is selected per-thread. Add new thread features to indicate that
the thread has Linear Address Masking enabled.

switch_mm_irqs_off() now respects these flags and constructs CR3
accordingly.

The active LAM mode gets recorded in the tlb_state.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
v5.1:
 - Fix build issue with CONFIG_MODULES=y
---
 arch/x86/include/asm/mmu.h         |  3 +++
 arch/x86/include/asm/mmu_context.h | 24 +++++++++++++++++
 arch/x86/include/asm/tlbflush.h    | 35 +++++++++++++++++++++++++
 arch/x86/mm/tlb.c                  | 42 +++++++++++++++++++-----------
 4 files changed, 89 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/mmu.h b/arch/x86/include/asm/mmu.h
index 5d7494631ea9..002889ca8978 100644
--- a/arch/x86/include/asm/mmu.h
+++ b/arch/x86/include/asm/mmu.h
@@ -40,6 +40,9 @@ typedef struct {
 
 #ifdef CONFIG_X86_64
 	unsigned short flags;
+
+	/* Active LAM mode: X86_CR3_LAM_U48 or X86_CR3_LAM_U57 or 0 (disabled) */
+	unsigned long lam_cr3_mask;
 #endif
 
 	struct mutex lock;
diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index b8d40ddeab00..69c943b2ae90 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -91,6 +91,29 @@ static inline void switch_ldt(struct mm_struct *prev, struct mm_struct *next)
 }
 #endif
 
+#ifdef CONFIG_X86_64
+static inline unsigned long mm_lam_cr3_mask(struct mm_struct *mm)
+{
+	return mm->context.lam_cr3_mask;
+}
+
+static inline void dup_lam(struct mm_struct *oldmm, struct mm_struct *mm)
+{
+	mm->context.lam_cr3_mask = oldmm->context.lam_cr3_mask;
+}
+
+#else
+
+static inline unsigned long mm_lam_cr3_mask(struct mm_struct *mm)
+{
+	return 0;
+}
+
+static inline void dup_lam(struct mm_struct *oldmm, struct mm_struct *mm)
+{
+}
+#endif
+
 #define enter_lazy_tlb enter_lazy_tlb
 extern void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk);
 
@@ -168,6 +191,7 @@ static inline int arch_dup_mmap(struct mm_struct *oldmm, struct mm_struct *mm)
 {
 	arch_dup_pkeys(oldmm, mm);
 	paravirt_arch_dup_mmap(oldmm, mm);
+	dup_lam(oldmm, mm);
 	return ldt_dup_context(oldmm, mm);
 }
 
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 4af5579c7ef7..efe83d33327f 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -100,6 +100,16 @@ struct tlb_state {
 	 */
 	bool invalidate_other;
 
+#ifdef CONFIG_X86_64
+	/*
+	 * Active LAM mode.
+	 *
+	 * X86_CR3_LAM_U57/U48 shifted right by X86_CR3_LAM_U57_BIT or 0 if LAM
+	 * disabled.
+	 */
+	u8 lam;
+#endif
+
 	/*
 	 * Mask that contains TLB_NR_DYN_ASIDS+1 bits to indicate
 	 * the corresponding user PCID needs a flush next time we
@@ -356,6 +366,30 @@ static inline bool huge_pmd_needs_flush(pmd_t oldpmd, pmd_t newpmd)
 }
 #define huge_pmd_needs_flush huge_pmd_needs_flush
 
+#ifdef CONFIG_X86_64
+static inline unsigned long tlbstate_lam_cr3_mask(void)
+{
+	unsigned long lam = this_cpu_read(cpu_tlbstate.lam);
+
+	return lam << X86_CR3_LAM_U57_BIT;
+}
+
+static inline void set_tlbstate_cr3_lam_mask(unsigned long mask)
+{
+	this_cpu_write(cpu_tlbstate.lam, mask >> X86_CR3_LAM_U57_BIT);
+}
+
+#else
+
+static inline unsigned long tlbstate_lam_cr3_mask(void)
+{
+	return 0;
+}
+
+static inline void set_tlbstate_cr3_lam_mask(unsigned long mask)
+{
+}
+#endif
 #endif /* !MODULE */
 
 static inline void __native_tlb_flush_global(unsigned long cr4)
@@ -363,4 +397,5 @@ static inline void __native_tlb_flush_global(unsigned long cr4)
 	native_write_cr4(cr4 ^ X86_CR4_PGE);
 	native_write_cr4(cr4);
 }
+
 #endif /* _ASM_X86_TLBFLUSH_H */
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index d400b6d9d246..4c93f87a8928 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -154,17 +154,18 @@ static inline u16 user_pcid(u16 asid)
 	return ret;
 }
 
-static inline unsigned long build_cr3(pgd_t *pgd, u16 asid)
+static inline unsigned long build_cr3(pgd_t *pgd, u16 asid, unsigned long lam)
 {
 	if (static_cpu_has(X86_FEATURE_PCID)) {
-		return __sme_pa(pgd) | kern_pcid(asid);
+		return __sme_pa(pgd) | kern_pcid(asid) | lam;
 	} else {
 		VM_WARN_ON_ONCE(asid != 0);
-		return __sme_pa(pgd);
+		return __sme_pa(pgd) | lam;
 	}
 }
 
-static inline unsigned long build_cr3_noflush(pgd_t *pgd, u16 asid)
+static inline unsigned long build_cr3_noflush(pgd_t *pgd, u16 asid,
+					      unsigned long lam)
 {
 	VM_WARN_ON_ONCE(asid > MAX_ASID_AVAILABLE);
 	/*
@@ -173,7 +174,7 @@ static inline unsigned long build_cr3_noflush(pgd_t *pgd, u16 asid)
 	 * boot because all CPU's the have same capabilities:
 	 */
 	VM_WARN_ON_ONCE(!boot_cpu_has(X86_FEATURE_PCID));
-	return __sme_pa(pgd) | kern_pcid(asid) | CR3_NOFLUSH;
+	return __sme_pa(pgd) | kern_pcid(asid) | lam | CR3_NOFLUSH;
 }
 
 /*
@@ -274,15 +275,16 @@ static inline void invalidate_user_asid(u16 asid)
 		(unsigned long *)this_cpu_ptr(&cpu_tlbstate.user_pcid_flush_mask));
 }
 
-static void load_new_mm_cr3(pgd_t *pgdir, u16 new_asid, bool need_flush)
+static void load_new_mm_cr3(pgd_t *pgdir, u16 new_asid, unsigned long lam,
+			    bool need_flush)
 {
 	unsigned long new_mm_cr3;
 
 	if (need_flush) {
 		invalidate_user_asid(new_asid);
-		new_mm_cr3 = build_cr3(pgdir, new_asid);
+		new_mm_cr3 = build_cr3(pgdir, new_asid, lam);
 	} else {
-		new_mm_cr3 = build_cr3_noflush(pgdir, new_asid);
+		new_mm_cr3 = build_cr3_noflush(pgdir, new_asid, lam);
 	}
 
 	/*
@@ -491,6 +493,8 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 {
 	struct mm_struct *real_prev = this_cpu_read(cpu_tlbstate.loaded_mm);
 	u16 prev_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
+	unsigned long prev_lam = tlbstate_lam_cr3_mask();
+	unsigned long new_lam = mm_lam_cr3_mask(next);
 	bool was_lazy = this_cpu_read(cpu_tlbstate_shared.is_lazy);
 	unsigned cpu = smp_processor_id();
 	u64 next_tlb_gen;
@@ -520,7 +524,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 	 * isn't free.
	 */
 #ifdef CONFIG_DEBUG_VM
-	if (WARN_ON_ONCE(__read_cr3() != build_cr3(real_prev->pgd, prev_asid))) {
+	if (WARN_ON_ONCE(__read_cr3() != build_cr3(real_prev->pgd, prev_asid, prev_lam))) {
 		/*
 		 * If we were to BUG here, we'd be very likely to kill
 		 * the system so hard that we don't see the call trace.
@@ -622,15 +626,16 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 		barrier();
 	}
 
+	set_tlbstate_cr3_lam_mask(new_lam);
 	if (need_flush) {
 		this_cpu_write(cpu_tlbstate.ctxs[new_asid].ctx_id, next->context.ctx_id);
 		this_cpu_write(cpu_tlbstate.ctxs[new_asid].tlb_gen, next_tlb_gen);
-		load_new_mm_cr3(next->pgd, new_asid, true);
+		load_new_mm_cr3(next->pgd, new_asid, new_lam, true);
 
 		trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
 	} else {
 		/* The new ASID is already up to date. */
-		load_new_mm_cr3(next->pgd, new_asid, false);
+		load_new_mm_cr3(next->pgd, new_asid, new_lam, false);
 
 		trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, 0);
 	}
 
@@ -691,6 +696,10 @@ void initialize_tlbstate_and_flush(void)
 	/* Assert that CR3 already references the right mm. */
 	WARN_ON((cr3 & CR3_ADDR_MASK) != __pa(mm->pgd));
 
+	/* LAM expected to be disabled in CR3 and init_mm */
+	WARN_ON(cr3 & (X86_CR3_LAM_U48 | X86_CR3_LAM_U57));
+	WARN_ON(mm_lam_cr3_mask(&init_mm));
+
 	/*
 	 * Assert that CR4.PCIDE is set if needed.  (CR4.PCIDE initialization
 	 * doesn't work like other CR4 bits because it can only be set from
@@ -699,8 +708,8 @@ void initialize_tlbstate_and_flush(void)
 	WARN_ON(boot_cpu_has(X86_FEATURE_PCID) &&
 		!(cr4_read_shadow() & X86_CR4_PCIDE));
 
-	/* Force ASID 0 and force a TLB flush. */
-	write_cr3(build_cr3(mm->pgd, 0));
+	/* Disable LAM, force ASID 0 and force a TLB flush. */
+	write_cr3(build_cr3(mm->pgd, 0, 0));
 
 	/* Reinitialize tlbstate. */
 	this_cpu_write(cpu_tlbstate.last_user_mm_spec, LAST_USER_MM_INIT);
@@ -708,6 +717,7 @@ void initialize_tlbstate_and_flush(void)
 	this_cpu_write(cpu_tlbstate.next_asid, 1);
 	this_cpu_write(cpu_tlbstate.ctxs[0].ctx_id, mm->context.ctx_id);
 	this_cpu_write(cpu_tlbstate.ctxs[0].tlb_gen, tlb_gen);
+	set_tlbstate_cr3_lam_mask(0);
 
 	for (i = 1; i < TLB_NR_DYN_ASIDS; i++)
 		this_cpu_write(cpu_tlbstate.ctxs[i].ctx_id, 0);
@@ -1047,8 +1057,10 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
  */
 unsigned long __get_current_cr3_fast(void)
 {
-	unsigned long cr3 = build_cr3(this_cpu_read(cpu_tlbstate.loaded_mm)->pgd,
-		this_cpu_read(cpu_tlbstate.loaded_mm_asid));
+	unsigned long cr3 =
+		build_cr3(this_cpu_read(cpu_tlbstate.loaded_mm)->pgd,
+			  this_cpu_read(cpu_tlbstate.loaded_mm_asid),
+			  tlbstate_lam_cr3_mask());
 
 	/* For now, be very restrictive about when this can be called. */
 	VM_WARN_ON(in_nmi() || preemptible());
-- 
2.35.1
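
A note for readers less familiar with LAM: the patch works because the LAM
enable bits live in the same register as the page-table root and the PCID,
so updating them costs nothing beyond the CR3 write a context switch does
anyway. The minimal user-space sketch below (not kernel code) shows how such
a CR3 value is composed. The bit positions are the architectural ones from
the Intel LAM specification (LAM_U57 is CR3 bit 61, LAM_U48 is CR3 bit 62);
the helper sketch_build_cr3() and the sample PGD address are hypothetical
stand-ins for build_cr3() above, with the PCID and SME details elided.

/*
 * Minimal sketch, not kernel code: compose a CR3 value from the PGD
 * physical address, the PCID and the LAM enable bits, mirroring what
 * build_cr3() in the patch does. Only the bit positions are
 * architectural; everything else is illustrative.
 */
#include <stdint.h>
#include <stdio.h>

#define X86_CR3_LAM_U57_BIT	61	/* LAM for pointer bits 62:57 */
#define X86_CR3_LAM_U48_BIT	62	/* LAM for pointer bits 62:48 */
#define X86_CR3_LAM_U57		(1ULL << X86_CR3_LAM_U57_BIT)
#define X86_CR3_LAM_U48		(1ULL << X86_CR3_LAM_U48_BIT)

/* Hypothetical stand-in for build_cr3(); pgd_pa must be page-aligned. */
static uint64_t sketch_build_cr3(uint64_t pgd_pa, uint16_t pcid, uint64_t lam)
{
	return pgd_pa | pcid | lam;
}

int main(void)
{
	uint64_t pgd_pa = 0x1234000;	/* hypothetical PGD physical address */

	printf("LAM off: 0x%016llx\n",
	       (unsigned long long)sketch_build_cr3(pgd_pa, 1, 0));
	printf("LAM U57: 0x%016llx\n",
	       (unsigned long long)sketch_build_cr3(pgd_pa, 1, X86_CR3_LAM_U57));
	printf("LAM U48: 0x%016llx\n",
	       (unsigned long long)sketch_build_cr3(pgd_pa, 1, X86_CR3_LAM_U48));
	return 0;
}

Because mm->context.lam_cr3_mask is kept pre-shifted into CR3 bit positions,
switch_mm_irqs_off() can OR it in with no conditionals on the hot path, and
tlb_state only needs to store the value shifted down by X86_CR3_LAM_U57_BIT,
which is why it fits in a u8.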