From: Alexandre Ghiti
To: Catalin Marinas, Will Deacon, Thomas Bogendoerfer, Michael Ellerman, Nicholas Piggin, Christophe Leroy, Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrew Morton, Ved Shanbhogue, Matt Evans, Dylan Jhong, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-mm@kvack.org
Cc: Alexandre Ghiti
Subject: [PATCH RFC/RFT 2/4] riscv: Add a runtime detection of invalid TLB entries caching
Date: Thu, 7 Dec 2023 16:03:46 +0100
Message-Id: <20231207150348.82096-3-alexghiti@rivosinc.com>
In-Reply-To: <20231207150348.82096-1-alexghiti@rivosinc.com>
References: <20231207150348.82096-1-alexghiti@rivosinc.com>

This mechanism allows the sfence.vma introduced by the previous commit to be completely bypassed on uarchs that do not cache invalid TLB entries.
Signed-off-by: Alexandre Ghiti
---
 arch/riscv/mm/init.c | 124 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 124 insertions(+)

diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 379403de6c6f..2e854613740c 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -56,6 +56,8 @@ bool pgtable_l5_enabled = IS_ENABLED(CONFIG_64BIT) && !IS_ENABLED(CONFIG_XIP_KER
 EXPORT_SYMBOL(pgtable_l4_enabled);
 EXPORT_SYMBOL(pgtable_l5_enabled);
 
+bool tlb_caching_invalid_entries;
+
 phys_addr_t phys_ram_base __ro_after_init;
 EXPORT_SYMBOL(phys_ram_base);
 
@@ -750,6 +752,18 @@ static void __init disable_pgtable_l4(void)
 	satp_mode = SATP_MODE_39;
 }
 
+static void __init enable_pgtable_l5(void)
+{
+	pgtable_l5_enabled = true;
+	satp_mode = SATP_MODE_57;
+}
+
+static void __init enable_pgtable_l4(void)
+{
+	pgtable_l4_enabled = true;
+	satp_mode = SATP_MODE_48;
+}
+
 static int __init print_no4lvl(char *p)
 {
 	pr_info("Disabled 4-level and 5-level paging");
@@ -826,6 +840,112 @@ static __init void set_satp_mode(uintptr_t dtb_pa)
 	memset(early_pud, 0, PAGE_SIZE);
 	memset(early_pmd, 0, PAGE_SIZE);
 }
+
+/* Determine at runtime if the uarch caches invalid TLB entries */
+static __init void set_tlb_caching_invalid_entries(void)
+{
+#define NR_RETRIES_CACHING_INVALID_ENTRIES	50
+	uintptr_t set_tlb_caching_invalid_entries_pmd = ((unsigned long)set_tlb_caching_invalid_entries) & PMD_MASK;
+	// TODO the test_addr as defined below could go into another pud...
+	uintptr_t test_addr = set_tlb_caching_invalid_entries_pmd + 2 * PMD_SIZE;
+	pmd_t valid_pmd;
+	u64 satp;
+	int i = 0;
+
+	/* To ease the page table creation */
+	disable_pgtable_l5();
+	disable_pgtable_l4();
+
+	/* Establish a mapping for set_tlb_caching_invalid_entries() in sv39 */
+	create_pgd_mapping(early_pg_dir,
+			   set_tlb_caching_invalid_entries_pmd,
+			   (uintptr_t)early_pmd,
+			   PGDIR_SIZE, PAGE_TABLE);
+
+	/* Handle the case where set_tlb_caching_invalid_entries straddles 2 PMDs */
+	create_pmd_mapping(early_pmd,
+			   set_tlb_caching_invalid_entries_pmd,
+			   set_tlb_caching_invalid_entries_pmd,
+			   PMD_SIZE, PAGE_KERNEL_EXEC);
+	create_pmd_mapping(early_pmd,
+			   set_tlb_caching_invalid_entries_pmd + PMD_SIZE,
+			   set_tlb_caching_invalid_entries_pmd + PMD_SIZE,
+			   PMD_SIZE, PAGE_KERNEL_EXEC);
+
+	/* Establish an invalid mapping */
+	create_pmd_mapping(early_pmd, test_addr, 0, PMD_SIZE, __pgprot(0));
+
+	/* Precompute the valid pmd here because the mapping for pfn_pmd() won't exist */
+	valid_pmd = pfn_pmd(PFN_DOWN(set_tlb_caching_invalid_entries_pmd), PAGE_KERNEL);
+
+	local_flush_tlb_all();
+	satp = PFN_DOWN((uintptr_t)&early_pg_dir) | SATP_MODE_39;
+	csr_write(CSR_SATP, satp);
+
+	/*
+	 * Set stvec to after the trapping access, access this invalid mapping
+	 * and legitimately trap
+	 */
+	// TODO: Should I save the previous stvec?
+#define ASM_STR(x)	__ASM_STR(x)
+	asm volatile(
+		"la a0, 1f				\n"
+		"csrw " ASM_STR(CSR_TVEC) ", a0		\n"
+		"ld a0, 0(%0)				\n"
+		".align 2				\n"
+		"1:					\n"
+		:
+		: "r" (test_addr)
+		: "a0"
+	);
+
+	/* Now establish a valid mapping to check if the invalid one is cached */
+	early_pmd[pmd_index(test_addr)] = valid_pmd;
+
+	/*
+	 * Access the valid mapping multiple times: indeed, we can't use
+	 * sfence.vma as a barrier to make sure the cpu did not reorder accesses
+	 * so we may trap even if the uarch does not cache invalid entries. By
+	 * trying a few times, we make sure that those uarchs will see the right
+	 * mapping at some point.
+	 */
+
+	i = NR_RETRIES_CACHING_INVALID_ENTRIES;
+
+#define ASM_STR(x)	__ASM_STR(x)
+	asm_volatile_goto(
+		"la a0, 1f				\n"
+		"csrw " ASM_STR(CSR_TVEC) ", a0		\n"
+		".align 2				\n"
+		"1:					\n"
+		"addi %0, %0, -1			\n"
+		"blt %0, zero, %l[caching_invalid_entries]	\n"
+		"ld a0, 0(%1)				\n"
+		:
+		: "r" (i), "r" (test_addr)
+		: "a0"
+		: caching_invalid_entries
+	);
+
+	csr_write(CSR_SATP, 0ULL);
+	local_flush_tlb_all();
+
+	/* If we don't trap, the uarch does not cache invalid entries! */
+	tlb_caching_invalid_entries = false;
+	goto clean;
+
+caching_invalid_entries:
+	csr_write(CSR_SATP, 0ULL);
+	local_flush_tlb_all();
+
+	tlb_caching_invalid_entries = true;
+clean:
+	memset(early_pg_dir, 0, PAGE_SIZE);
+	memset(early_pmd, 0, PAGE_SIZE);
+
+	enable_pgtable_l4();
+	enable_pgtable_l5();
+}
 #endif
 
 /*
@@ -1072,6 +1192,7 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
 #endif
 
 #if defined(CONFIG_64BIT) && !defined(CONFIG_XIP_KERNEL)
+	set_tlb_caching_invalid_entries();
 	set_satp_mode(dtb_pa);
 #endif
 
@@ -1322,6 +1443,9 @@ static void __init setup_vm_final(void)
 	local_flush_tlb_all();
 
 	pt_ops_set_late();
+
+	pr_info("uarch caches invalid entries: %s",
+		tlb_caching_invalid_entries ? "yes" : "no");
 }
 #else
 asmlinkage void __init setup_vm(uintptr_t dtb_pa)
-- 
2.39.2