From: Jisheng Zhang
To: Will Deacon, "Aneesh Kumar K . V", Andrew Morton, Nick Piggin,
	Peter Zijlstra, Catalin Marinas, Paul Walmsley, Palmer Dabbelt,
	Albert Ou, Arnd Bergmann
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-riscv@lists.infradead.org
Subject: [PATCH 2/2] riscv: tlb: avoid tlb flushing if fullmm == 1
Date: Thu, 28 Dec 2023 16:46:42 +0800
Message-Id: <20231228084642.1765-3-jszhang@kernel.org>
In-Reply-To: <20231228084642.1765-1-jszhang@kernel.org>
References: <20231228084642.1765-1-jszhang@kernel.org>

The mmu_gather code sets fullmm=1 when tearing down the entire address
space for an mm_struct on exit or execve. So if the underlying platform
supports ASID, the tlb flushing can be avoided because the ASID
allocator will never re-allocate a dirty ASID.
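The fullmm flag comes from the generic mmu_gather setup. Roughly, condensed
from mm/mmu_gather.c and its exit_mmap()/execve callers (simplified sketch,
not a verbatim excerpt):

	/* Simplified sketch of the generic mmu_gather setup. */
	static void __tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
				     bool fullmm)
	{
		tlb->mm = mm;
		tlb->fullmm = fullmm;	/* true only when the whole address space dies */
		/* ... batching and page-table-freeing state init elided ... */
	}

	/* exit_mmap()/execve teardown use the fullmm variant: */
	void tlb_gather_mmu_fullmm(struct mmu_gather *tlb, struct mm_struct *mm)
	{
		__tlb_gather_mmu(tlb, mm, true);
	}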
The performance of Process creation in unixbench on the T-HEAD TH1520
platform is improved by about 4%.

Signed-off-by: Jisheng Zhang
---
 arch/riscv/include/asm/tlb.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/riscv/include/asm/tlb.h b/arch/riscv/include/asm/tlb.h
index 1eb5682b2af6..35f3c214332e 100644
--- a/arch/riscv/include/asm/tlb.h
+++ b/arch/riscv/include/asm/tlb.h
@@ -12,10 +12,19 @@ static void tlb_flush(struct mmu_gather *tlb);
 
 #define tlb_flush tlb_flush
 #include <asm-generic/tlb.h>
+#include <asm/mmu_context.h>
 
 static inline void tlb_flush(struct mmu_gather *tlb)
 {
 #ifdef CONFIG_MMU
+	/*
+	 * If ASID is supported, the ASID allocator will either invalidate the
+	 * ASID or mark it as used. So we can avoid TLB invalidation when
+	 * pulling down a full mm.
+	 */
+	if (static_branch_likely(&use_asid_allocator) && tlb->fullmm)
+		return;
+
 	if (tlb->fullmm || tlb->need_flush_all)
 		flush_tlb_mm(tlb->mm);
 	else
--
2.40.0
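For context, with the hunk above applied tlb_flush() in
arch/riscv/include/asm/tlb.h ends up looking roughly like the following.
The range-flush tail sits below the hunk and is reconstructed here from the
unchanged part of the file, so treat it as illustrative rather than
authoritative:

  static inline void tlb_flush(struct mmu_gather *tlb)
  {
  #ifdef CONFIG_MMU
  	/*
  	 * Dirty ASIDs are never re-allocated, so a full-mm teardown needs
  	 * no explicit TLB invalidation when the ASID allocator is in use.
  	 */
  	if (static_branch_likely(&use_asid_allocator) && tlb->fullmm)
  		return;

  	if (tlb->fullmm || tlb->need_flush_all)
  		flush_tlb_mm(tlb->mm);
  	else
  		flush_tlb_mm_range(tlb->mm, tlb->start, tlb->end,
  				   tlb_get_unmap_size(tlb));
  #endif
  }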