From: guoren@kernel.org
To: guoren@kernel.org, anup.patel@wdc.com, palmerdabbelt@google.com,
        arnd@arndb.de, wens@csie.org, maxime@cerno.tech, drew@beagleboard.org,
        liush@allwinnertech.com, lazyparser@gmail.com,
        wefu@redhat.com
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
        linux-arch@vger.kernel.org, linux-sunxi@lists.linux.dev,
        Guo Ren, Christoph Hellwig
Subject: [RFC PATCH v2 02/11] riscv: asid: Add ASID-based tlbflushing methods
Date: Sun, 6 Jun 2021 09:03:59 +0000
Message-Id: <1622970249-50770-5-git-send-email-guoren@kernel.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1622970249-50770-1-git-send-email-guoren@kernel.org>
References: <1622970249-50770-1-git-send-email-guoren@kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Guo Ren

Implement an optimized version of the TLB flushing routines for systems
using ASIDs. These are guarded by the use_asid_allocator static branch
so that existing systems not using ASIDs are unaffected.

Signed-off-by: Guo Ren
Signed-off-by: Liu Shaohua
Reviewed-by: Anup Patel
Cc: Christoph Hellwig
Cc: Chen-Yu Tsai
Cc: Drew Fustini
Cc: Maxime Ripard
Cc: Palmer Dabbelt
Cc: Wei Fu
Cc: Wei Wu
---
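Note below the fold, not part of the commit message: a small host-side C
model of the flush selection implemented in this patch, for readers who want
to follow the control flow without the inline assembly. The model_* helpers,
the printf stubs and the only_this_cpu flag are illustrative stand-ins for
sfence.vma, sbi_remote_sfence_vma_asid() and the cpumask check; they are not
kernel APIs.

/*
 * Host-side sketch of the ASID flush selection (not kernel code).
 * sfence.vma and the SBI call are replaced by printf stubs.
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE       4096UL
#define FLUSH_ALL       ((unsigned long)-1)

/* Stand-in for "sfence.vma x0, asid": drop every entry tagged with asid. */
static void model_local_flush_tlb_all_asid(unsigned long asid)
{
        printf("sfence.vma x0, %lu\n", asid);
}

/* Stand-in for the per-page loop: "sfence.vma addr, asid" for each page. */
static void model_local_flush_tlb_range_asid(unsigned long start,
                                             unsigned long size,
                                             unsigned long asid)
{
        unsigned long end = (start + size + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
        unsigned long addr;

        for (addr = start & ~(PAGE_SIZE - 1); addr < end; addr += PAGE_SIZE)
                printf("sfence.vma %#lx, %lu\n", addr, asid);
}

/* Stand-in for asking the SBI to flush the other harts by ASID. */
static void model_remote_flush_asid(unsigned long start, unsigned long size,
                                    unsigned long asid)
{
        printf("SBI remote sfence.vma.asid: start=%#lx size=%#lx asid=%lu\n",
               start, size, asid);
}

/*
 * Mirrors the shape of __sbi_tlb_flush_range_asid(): flush locally when no
 * other CPU has the mm active, otherwise hand the request to the SBI.
 */
static void model_flush_range_asid(bool only_this_cpu, unsigned long start,
                                   unsigned long size, unsigned long asid)
{
        if (only_this_cpu) {
                if (size == FLUSH_ALL)
                        model_local_flush_tlb_all_asid(asid);
                else
                        model_local_flush_tlb_range_asid(start, size, asid);
        } else {
                model_remote_flush_asid(start, size, asid);
        }
}

int main(void)
{
        /* flush_tlb_page()-style request: one page, mm only active here */
        model_flush_range_asid(true, 0x3ff7f000, PAGE_SIZE, 5);
        /* flush_tlb_mm()-style request: whole address space, shared mm */
        model_flush_range_asid(false, 0, FLUSH_ALL, 5);
        return 0;
}

Compiling and running this only prints the sequence of flush operations the
corresponding kernel paths would issue under those assumptions.
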
 arch/riscv/include/asm/mmu_context.h |  2 ++
 arch/riscv/include/asm/tlbflush.h    | 22 +++++++++++++++++
 arch/riscv/mm/context.c              |  2 +-
 arch/riscv/mm/tlbflush.c             | 46 +++++++++++++++++++++++++++++++++---
 4 files changed, 68 insertions(+), 4 deletions(-)

diff --git a/arch/riscv/include/asm/mmu_context.h b/arch/riscv/include/asm/mmu_context.h
index b065941..7030837 100644
--- a/arch/riscv/include/asm/mmu_context.h
+++ b/arch/riscv/include/asm/mmu_context.h
@@ -33,6 +33,8 @@ static inline int init_new_context(struct task_struct *tsk,
 	return 0;
 }
 
+DECLARE_STATIC_KEY_FALSE(use_asid_allocator);
+
 #include <asm-generic/mmu_context.h>
 
 #endif /* _ASM_RISCV_MMU_CONTEXT_H */
diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
index c84218a..894cf75 100644
--- a/arch/riscv/include/asm/tlbflush.h
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -22,9 +22,31 @@ static inline void local_flush_tlb_page(unsigned long addr)
 {
 	ALT_FLUSH_TLB_PAGE(__asm__ __volatile__ ("sfence.vma %0" : : "r" (addr) : "memory"));
 }
+
+static inline void local_flush_tlb_all_asid(unsigned long asid)
+{
+	__asm__ __volatile__ ("sfence.vma x0, %0"
+			:
+			: "r" (asid)
+			: "memory");
+}
+
+static inline void local_flush_tlb_range_asid(unsigned long start,
+		unsigned long size, unsigned long asid)
+{
+	unsigned long tmp, end = ALIGN(start + size, PAGE_SIZE);
+
+	for (tmp = start & PAGE_MASK; tmp < end; tmp += PAGE_SIZE) {
+		__asm__ __volatile__ ("sfence.vma %0, %1"
+				:
+				: "r" (tmp), "r" (asid)
+				: "memory");
+	}
+}
 #else /* CONFIG_MMU */
 #define local_flush_tlb_all()			do { } while (0)
 #define local_flush_tlb_page(addr)		do { } while (0)
+#define local_flush_tlb_range_asid(start, size, asid)	do { } while (0)
 #endif /* CONFIG_MMU */
 
 #if defined(CONFIG_SMP) && defined(CONFIG_MMU)
diff --git a/arch/riscv/mm/context.c b/arch/riscv/mm/context.c
index 68aa312..45c1b04 100644
--- a/arch/riscv/mm/context.c
+++ b/arch/riscv/mm/context.c
@@ -18,7 +18,7 @@
 
 #ifdef CONFIG_MMU
 
-static DEFINE_STATIC_KEY_FALSE(use_asid_allocator);
+DEFINE_STATIC_KEY_FALSE(use_asid_allocator);
 
 static unsigned long asid_bits;
 static unsigned long num_asids;
diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 720b443..87b4e52 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -4,6 +4,7 @@
 #include <linux/smp.h>
 #include <linux/sched.h>
 #include <asm/sbi.h>
+#include <asm/mmu_context.h>
 
 void flush_tlb_all(void)
 {
@@ -39,18 +40,57 @@ static void __sbi_tlb_flush_range(struct cpumask *cmask, unsigned long start,
 	put_cpu();
 }
 
+static void __sbi_tlb_flush_range_asid(struct cpumask *cmask,
+				       unsigned long start,
+				       unsigned long size,
+				       unsigned long asid)
+{
+	struct cpumask hmask;
+	unsigned int cpuid;
+
+	if (cpumask_empty(cmask))
+		return;
+
+	cpuid = get_cpu();
+
+	if (cpumask_any_but(cmask, cpuid) >= nr_cpu_ids) {
+		if (size == -1)
+			local_flush_tlb_all_asid(asid);
+		else
+			local_flush_tlb_range_asid(start, size, asid);
+	} else {
+		riscv_cpuid_to_hartid_mask(cmask, &hmask);
+		sbi_remote_sfence_vma_asid(cpumask_bits(&hmask),
+					   start, size, asid);
+	}
+
+	put_cpu();
+}
+
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	__sbi_tlb_flush_range(mm_cpumask(mm), 0, -1);
+	if (static_branch_unlikely(&use_asid_allocator))
+		__sbi_tlb_flush_range_asid(mm_cpumask(mm), 0, -1,
+					   atomic_long_read(&mm->context.id));
+	else
+		__sbi_tlb_flush_range(mm_cpumask(mm), 0, -1);
 }
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 {
-	__sbi_tlb_flush_range(mm_cpumask(vma->vm_mm), addr, PAGE_SIZE);
+	if (static_branch_unlikely(&use_asid_allocator))
+		__sbi_tlb_flush_range_asid(mm_cpumask(vma->vm_mm), addr, PAGE_SIZE,
+					   atomic_long_read(&vma->vm_mm->context.id));
+	else
+		__sbi_tlb_flush_range(mm_cpumask(vma->vm_mm), addr, PAGE_SIZE);
 }
 
 void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 		     unsigned long end)
 {
-	__sbi_tlb_flush_range(mm_cpumask(vma->vm_mm), start, end - start);
+	if (static_branch_unlikely(&use_asid_allocator))
+		__sbi_tlb_flush_range_asid(mm_cpumask(vma->vm_mm), start, end - start,
+					   atomic_long_read(&vma->vm_mm->context.id));
+	else
+		__sbi_tlb_flush_range(mm_cpumask(vma->vm_mm), start, end - start);
 }
-- 
2.7.4