From: Jisheng Zhang
To: Will Deacon, "Aneesh Kumar K . V", Andrew Morton, Nick Piggin,
	Peter Zijlstra, Catalin Marinas, Paul Walmsley, Palmer Dabbelt,
	Albert Ou, Arnd Bergmann
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-riscv@lists.infradead.org, Nadav Amit, Andrea Arcangeli,
	Andy Lutomirski, Dave Hansen, Thomas Gleixner, Yu Zhao, x86@kernel.org
Subject: [PATCH 1/2] mm/tlb: fix fullmm semantics
Date: Thu, 28 Dec 2023 16:46:41 +0800
Message-Id: <20231228084642.1765-2-jszhang@kernel.org>
In-Reply-To: <20231228084642.1765-1-jszhang@kernel.org>
References: <20231228084642.1765-1-jszhang@kernel.org>

From: Nadav Amit

fullmm in mmu_gather is supposed to indicate that the mm is torn-down
(e.g., on process exit) and can therefore allow certain optimizations.
However, tlb_finish_mmu() sets fullmm, when in fact it wants to say
that the TLB should be fully flushed.

Change tlb_finish_mmu() to set need_flush_all and check this flag in
tlb_flush_mmu_tlbonly() when deciding whether a flush is needed.

At the same time, bring the arm64 fullmm on process exit optimization
back.

Signed-off-by: Nadav Amit
Signed-off-by: Jisheng Zhang
Cc: Andrea Arcangeli
Cc: Andrew Morton
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Will Deacon
Cc: Yu Zhao
Cc: Nick Piggin
Cc: x86@kernel.org
---
 arch/arm64/include/asm/tlb.h | 5 ++++-
 include/asm-generic/tlb.h    | 2 +-
 mm/mmu_gather.c              | 2 +-
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index 846c563689a8..6164c5f3b78f 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -62,7 +62,10 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 	 * invalidating the walk-cache, since the ASID allocator won't
 	 * reallocate our ASID without invalidating the entire TLB.
 	 */
-	if (tlb->fullmm) {
+	if (tlb->fullmm)
+		return;
+
+	if (tlb->need_flush_all) {
 		if (!last_level)
 			flush_tlb_mm(tlb->mm);
 		return;
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 129a3a759976..f2d46357bcbb 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -452,7 +452,7 @@ static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 	 * these bits.
 	 */
 	if (!(tlb->freed_tables || tlb->cleared_ptes || tlb->cleared_pmds ||
-	      tlb->cleared_puds || tlb->cleared_p4ds))
+	      tlb->cleared_puds || tlb->cleared_p4ds || tlb->need_flush_all))
 		return;
 
 	tlb_flush(tlb);
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 4f559f4ddd21..79298bac3481 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -384,7 +384,7 @@ void tlb_finish_mmu(struct mmu_gather *tlb)
 		 * On x86 non-fullmm doesn't yield significant difference
 		 * against fullmm.
 		 */
-		tlb->fullmm = 1;
+		tlb->need_flush_all = 1;
 		__tlb_reset_range(tlb);
 		tlb->freed_tables = 1;
 	}
-- 
2.40.0
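
For readers unfamiliar with mmu_gather, here is a minimal stand-alone
sketch (ordinary user-space C, not kernel code) of the intended split
after this change: fullmm means the whole mm is going away and arm64 can
skip the flush entirely, while need_flush_all only asks for a full TLB
flush of a still-live mm. The struct and function names below are
invented for illustration; only the field names come from the diff above.

#include <stdbool.h>
#include <stdio.h>

struct mmu_gather_model {
	bool fullmm;           /* mm is being torn down (process exit) */
	bool need_flush_all;   /* full TLB flush wanted, mm stays alive */
	bool freed_tables;
	bool cleared_ptes, cleared_pmds, cleared_puds, cleared_p4ds;
};

/* Rough model of the patched arm64 tlb_flush() decision. */
static void model_tlb_flush(struct mmu_gather_model *tlb)
{
	if (tlb->fullmm) {
		/* Process exit: skip the flush; the ASID is not reused
		 * without a full TLB invalidation anyway. */
		printf("fullmm: no flush issued\n");
		return;
	}
	if (tlb->need_flush_all) {
		printf("need_flush_all: flush the entire TLB for this mm\n");
		return;
	}
	printf("range flush of the gathered region\n");
}

/* Rough model of the patched tlb_flush_mmu_tlbonly() gate. */
static void model_flush_mmu_tlbonly(struct mmu_gather_model *tlb)
{
	if (!(tlb->freed_tables || tlb->cleared_ptes || tlb->cleared_pmds ||
	      tlb->cleared_puds || tlb->cleared_p4ds || tlb->need_flush_all))
		return; /* nothing gathered, nothing to flush */
	model_tlb_flush(tlb);
}

int main(void)
{
	struct mmu_gather_model exit_path = { .fullmm = true, .cleared_ptes = true };
	struct mmu_gather_model finish_mmu = { .need_flush_all = true };

	model_flush_mmu_tlbonly(&exit_path);  /* "fullmm: no flush issued" */
	model_flush_mmu_tlbonly(&finish_mmu); /* "need_flush_all: flush the entire TLB ..." */
	return 0;
}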