From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Marc Zyngier, Jade Alglave, Shameer Kolothum, Will Deacon, Catalin Marinas
Subject: [PATCH 5.13 031/380] arm64: mm: Fix TLBI vs ASID rollover
Date: Thu, 16 Sep 2021 17:56:28 +0200
Message-Id: <20210916155805.022222446@linuxfoundation.org>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20210916155803.966362085@linuxfoundation.org>
References: <20210916155803.966362085@linuxfoundation.org>
User-Agent: quilt/0.66

From: Will Deacon

commit 5e10f9887ed85d4f59266d5c60dd09be96b5dbd4 upstream.

When switching to an 'mm_struct' for the first time following an ASID
rollover, a new ASID may be allocated and assigned to 'mm->context.id'.
This reassignment can happen concurrently with other operations on the
mm, such as unmapping pages and subsequently issuing TLB invalidation.

Consequently, we need to ensure that (a) accesses to 'mm->context.id'
are atomic and (b) all page-table updates made prior to a TLBI using the
old ASID are guaranteed to be visible to CPUs running with the new ASID.

This was found by inspection after reviewing the VMID changes from
Shameer but it looks like a real (yet hard to hit) bug.

Cc: <stable@vger.kernel.org>
Cc: Marc Zyngier
Cc: Jade Alglave
Cc: Shameer Kolothum
Signed-off-by: Will Deacon
Reviewed-by: Catalin Marinas
Link: https://lore.kernel.org/r/20210806113109.2475-2-will@kernel.org
Signed-off-by: Catalin Marinas
Signed-off-by: Greg Kroah-Hartman
---
 arch/arm64/include/asm/mmu.h      | 29 +++++++++++++++++++++++++----
 arch/arm64/include/asm/tlbflush.h | 11 ++++++-----
 2 files changed, 31 insertions(+), 9 deletions(-)
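[Editorial note, not part of the patch: the two requirements named in the
commit message are easier to see in isolation than in the diff. The sketch
below is a userspace C11 model under stated assumptions: mock_mm, mock_asid()
and mock_flush_tlb_mm() are hypothetical stand-ins for the kernel's mm_struct,
the ASID() macro and flush_tlb_mm(), and a release fence stands in for the
dsb(ishst) barrier.]

#include <inttypes.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

struct mock_mm {
	_Atomic uint64_t id;		/* stands in for mm->context.id */
};

/* Requirement (a): read the ASID atomically, as ASID() now does via
 * atomic64_read(), because a concurrent rollover may update 'id'. */
static uint64_t mock_asid(struct mock_mm *mm)
{
	return atomic_load_explicit(&mm->id, memory_order_relaxed) & 0xffff;
}

/* Requirement (b): make the page-table update visible before sampling
 * the ASID, mirroring how the patch moves the ASID() read below the
 * dsb(ishst) in the TLBI routines. */
static void mock_flush_tlb_mm(struct mock_mm *mm, _Atomic uint64_t *pte)
{
	uint64_t asid;

	atomic_store_explicit(pte, 0, memory_order_relaxed);	/* clear PTE */
	atomic_thread_fence(memory_order_release);	/* models DSB ISHST */
	asid = mock_asid(mm);		/* sampled only after the barrier */
	printf("TLBI ASIDE1IS, asid=%" PRIu64 "\n", asid); /* models __tlbi() */
}

int main(void)
{
	struct mock_mm mm = { .id = 42 };
	_Atomic uint64_t pte = 1;

	mock_flush_tlb_mm(&mm, &pte);
	return 0;
}

[A relaxed load plus a release fence only approximates atomic64_read() plus
dsb(ishst); the kernel barrier has stronger, architecture-specific semantics.
What the sketch preserves is the ordering argument itself: the ASID is read
atomically, and only after the PTE update has been published, which is exactly
the shape of the hunks that follow.]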
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -27,11 +27,32 @@ typedef struct {
 } mm_context_t;
 
 /*
- * This macro is only used by the TLBI and low-level switch_mm() code,
- * neither of which can race with an ASID change. We therefore don't
- * need to reload the counter using atomic64_read().
+ * We use atomic64_read() here because the ASID for an 'mm_struct' can
+ * be reallocated when scheduling one of its threads following a
+ * rollover event (see new_context() and flush_context()). In this case,
+ * a concurrent TLBI (e.g. via try_to_unmap_one() and ptep_clear_flush())
+ * may use a stale ASID. This is fine in principle as the new ASID is
+ * guaranteed to be clean in the TLB, but the TLBI routines have to take
+ * care to handle the following race:
+ *
+ *    CPU 0                    CPU 1                          CPU 2
+ *
+ *    // ptep_clear_flush(mm)
+ *    xchg_relaxed(pte, 0)
+ *    DSB ISHST
+ *    old = ASID(mm)
+ *         |                                         <rollover>
+ *         |                   new = new_context(mm)
+ *         \-----------------> atomic_set(mm->context.id, new)
+ *                             cpu_switch_mm(mm)
+ *                             // Hardware walk of pte using new ASID
+ *    TLBI(old)
+ *
+ * In this scenario, the barrier on CPU 0 and the dependency on CPU 1
+ * ensure that the page-table walker on CPU 1 *must* see the invalid PTE
+ * written by CPU 0.
  */
-#define ASID(mm)	((mm)->context.id.counter & 0xffff)
+#define ASID(mm)	(atomic64_read(&(mm)->context.id) & 0xffff)
 
 static inline bool arm64_kernel_unmapped_at_el0(void)
 {
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -245,9 +245,10 @@ static inline void flush_tlb_all(void)
 
 static inline void flush_tlb_mm(struct mm_struct *mm)
 {
-	unsigned long asid = __TLBI_VADDR(0, ASID(mm));
+	unsigned long asid;
 
 	dsb(ishst);
+	asid = __TLBI_VADDR(0, ASID(mm));
 	__tlbi(aside1is, asid);
 	__tlbi_user(aside1is, asid);
 	dsb(ish);
@@ -256,9 +257,10 @@ static inline void flush_tlb_mm(struct m
 static inline void flush_tlb_page_nosync(struct vm_area_struct *vma,
 					 unsigned long uaddr)
 {
-	unsigned long addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));
+	unsigned long addr;
 
 	dsb(ishst);
+	addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));
 	__tlbi(vale1is, addr);
 	__tlbi_user(vale1is, addr);
 }
@@ -283,9 +285,7 @@ static inline void __flush_tlb_range(str
 {
 	int num = 0;
 	int scale = 0;
-	unsigned long asid = ASID(vma->vm_mm);
-	unsigned long addr;
-	unsigned long pages;
+	unsigned long asid, addr, pages;
 
 	start = round_down(start, stride);
 	end = round_up(end, stride);
@@ -305,6 +305,7 @@ static inline void __flush_tlb_range(str
 	}
 
 	dsb(ishst);
+	asid = ASID(vma->vm_mm);
 
 	/*
 	 * When the CPU does not support TLB range operations, flush the TLB