From: Balbir Singh
Subject: [PATCH v3 3/5] arch/x86/mm: Refactor cond_ibpb() to support other use cases
Date: Wed, 8 Apr 2020 19:02:27 +1000
Message-ID: <20200408090229.16467-4-sblbir@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200408090229.16467-1-sblbir@amazon.com>
References: <20200408090229.16467-1-sblbir@amazon.com>
X-Mailing-List: linux-kernel@vger.kernel.org

cond_ibpb() already has the bits needed to track the previous mm in
switch_mm_irqs_off(). This tracking can be reused for other use cases,
such as L1D flushing on context switch out.
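For review context, the core trick being generalized by this patch is
pointer mangling: struct mm_struct is word-aligned, so the low bits of
the mm pointer are always zero and can carry per-task mitigation flags,
letting a single per-CPU word record both the last user mm and the
mitigations it requested. The standalone userspace sketch below
illustrates the idea; it is not part of the patch, and all names in it
are invented:

#include <stdio.h>

#define MITIGATION_IBPB  0x1UL              /* bit 0: task wants IBPB */
#define MITIGATION_MASK  (MITIGATION_IBPB)  /* all mitigation bits */

/* Stand-in for struct mm_struct; alignment keeps the low bits free. */
struct mm { int dummy; } __attribute__((aligned(8)));

/* Fold a task's mitigation flags into the otherwise-zero low bits. */
static unsigned long mangle(struct mm *mm, unsigned long flags)
{
        return (unsigned long)mm | (flags & MITIGATION_MASK);
}

int main(void)
{
        static struct mm mm_a, mm_b;
        unsigned long prev = mangle(&mm_a, MITIGATION_IBPB); /* per-CPU state */
        unsigned long next = mangle(&mm_b, 0);

        /* Barrier needed only if the mm changed and either side set the bit. */
        if (next != prev && ((next | prev) & MITIGATION_IBPB))
                printf("issue IBPB\n");

        /* Masking off the flag bits recovers the bare pointer for comparison. */
        printf("same mm? %d\n", (prev & ~MITIGATION_MASK) == (unsigned long)&mm_b);

        prev = next; /* the single write-back at the end of cond_mitigation() */
        return 0;
}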
Signed-off-by: Balbir Singh
---
 arch/x86/include/asm/tlbflush.h |  2 +-
 arch/x86/mm/tlb.c               | 43 +++++++++++++++++----------------
 2 files changed, 23 insertions(+), 22 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 6f66d841262d..69e6ea20679c 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -172,7 +172,7 @@ struct tlb_state {
 	/* Last user mm for optimizing IBPB */
 	union {
 		struct mm_struct	*last_user_mm;
-		unsigned long		last_user_mm_ibpb;
+		unsigned long		last_user_mm_spec;
 	};

 	u16 loaded_mm_asid;
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 66f96f21a7b6..da5c94286c7d 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -33,10 +33,11 @@
  */

 /*
- * Use bit 0 to mangle the TIF_SPEC_IB state into the mm pointer which is
- * stored in cpu_tlb_state.last_user_mm_ibpb.
+ * Bits to mangle the TIF_SPEC_IB state into the mm pointer which is
+ * stored in cpu_tlb_state.last_user_mm_spec.
  */
 #define LAST_USER_MM_IBPB	0x1UL
+#define LAST_USER_MM_SPEC_MASK	(LAST_USER_MM_IBPB)

 /*
  * We get here when we do something requiring a TLB invalidation
@@ -189,19 +190,24 @@ static void sync_current_stack_to_mm(struct mm_struct *mm)
 	}
 }

-static inline unsigned long mm_mangle_tif_spec_ib(struct task_struct *next)
+static inline unsigned long mm_mangle_tif_spec_bits(struct task_struct *next)
 {
 	unsigned long next_tif = task_thread_info(next)->flags;
-	unsigned long ibpb = (next_tif >> TIF_SPEC_IB) & LAST_USER_MM_IBPB;
+	unsigned long spec_bits = (next_tif >> TIF_SPEC_IB) & LAST_USER_MM_SPEC_MASK;

-	return (unsigned long)next->mm | ibpb;
+	return (unsigned long)next->mm | spec_bits;
 }

-static void cond_ibpb(struct task_struct *next)
+static void cond_mitigation(struct task_struct *next)
 {
+	unsigned long prev_mm, next_mm;
+
 	if (!next || !next->mm)
 		return;

+	next_mm = mm_mangle_tif_spec_bits(next);
+	prev_mm = this_cpu_read(cpu_tlbstate.last_user_mm_spec);
+
 	/*
 	 * Both, the conditional and the always IBPB mode use the mm
 	 * pointer to avoid the IBPB when switching between tasks of the
@@ -212,8 +218,6 @@ static void cond_ibpb(struct task_struct *next)
 	 * exposed data is not really interesting.
 	 */
 	if (static_branch_likely(&switch_mm_cond_ibpb)) {
-		unsigned long prev_mm, next_mm;
-
 		/*
 		 * This is a bit more complex than the always mode because
 		 * it has to handle two cases:
@@ -243,20 +247,14 @@ static void cond_ibpb(struct task_struct *next)
 		 * Optimize this with reasonably small overhead for the
 		 * above cases. Mangle the TIF_SPEC_IB bit into the mm
 		 * pointer of the incoming task which is stored in
-		 * cpu_tlbstate.last_user_mm_ibpb for comparison.
-		 */
-		next_mm = mm_mangle_tif_spec_ib(next);
-		prev_mm = this_cpu_read(cpu_tlbstate.last_user_mm_ibpb);
-
-		/*
+		 * cpu_tlbstate.last_user_mm_spec for comparison.
+		 *
 		 * Issue IBPB only if the mm's are different and one or
 		 * both have the IBPB bit set.
 		 */
 		if (next_mm != prev_mm &&
 		    (next_mm | prev_mm) & LAST_USER_MM_IBPB)
 			indirect_branch_prediction_barrier();
-
-		this_cpu_write(cpu_tlbstate.last_user_mm_ibpb, next_mm);
 	}

 	if (static_branch_unlikely(&switch_mm_always_ibpb)) {
@@ -263,13 +261,14 @@ static void cond_ibpb(struct task_struct *next)
 		/*
 		 * Only flush when switching to a user space task with a
 		 * different context than the user space task which ran
 		 * last on this CPU.
 		 */
-		if (this_cpu_read(cpu_tlbstate.last_user_mm) != next->mm) {
+		if ((prev_mm & ~LAST_USER_MM_SPEC_MASK) !=
+					(unsigned long)next->mm)
 			indirect_branch_prediction_barrier();
-			this_cpu_write(cpu_tlbstate.last_user_mm, next->mm);
-		}
 	}
+
+	this_cpu_write(cpu_tlbstate.last_user_mm_spec, next_mm);
 }

 void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
@@ -374,8 +373,10 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 		 * Avoid user/user BTB poisoning by flushing the branch
 		 * predictor when switching between processes. This stops
 		 * one process from doing Spectre-v2 attacks on another.
+		 * The hook can also be used by other mitigations that
+		 * rely on switch_mm(), such as L1D flushing.
 		 */
-		cond_ibpb(tsk);
+		cond_mitigation(tsk);

 		if (IS_ENABLED(CONFIG_VMAP_STACK)) {
 			/*
@@ -501,7 +502,7 @@ void initialize_tlbstate_and_flush(void)
 	write_cr3(build_cr3(mm->pgd, 0));

 	/* Reinitialize tlbstate. */
-	this_cpu_write(cpu_tlbstate.last_user_mm_ibpb, LAST_USER_MM_IBPB);
+	this_cpu_write(cpu_tlbstate.last_user_mm_spec, LAST_USER_MM_IBPB);
 	this_cpu_write(cpu_tlbstate.loaded_mm_asid, 0);
 	this_cpu_write(cpu_tlbstate.next_asid, 1);
 	this_cpu_write(cpu_tlbstate.ctxs[0].ctx_id, mm->context.ctx_id);
-- 
2.17.1
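[Editorial note following the patch: a hypothetical illustration of the
"other use cases" the changelog mentions. A follow-up mitigation such as
L1D flush could claim the next free low bit, extend the mask, and test
its bit in prev_mm when the previous task switches out. The
LAST_USER_MM_L1D_FLUSH name and the snippet below are invented for this
sketch, not taken from the series:]

#include <stdio.h>

#define LAST_USER_MM_IBPB	0x1UL
#define LAST_USER_MM_L1D_FLUSH	0x2UL	/* invented follow-up bit */
#define LAST_USER_MM_SPEC_MASK	(LAST_USER_MM_IBPB | LAST_USER_MM_L1D_FLUSH)

int main(void)
{
        /* Fake mangled per-CPU value: some mm plus the L1D flush bit. */
        unsigned long prev_mm = 0x100000UL | LAST_USER_MM_L1D_FLUSH;

        /* The task that ran last asked for an L1D flush on its way out. */
        if (prev_mm & LAST_USER_MM_L1D_FLUSH)
                printf("flush L1D\n");
        return 0;
}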