From: Geert Uytterhoeven
Date: Wed, 20 Jan 2021 11:59:20 +0100
Subject: Re: [PATCH 3/4] RISC-V: Fix L1_CACHE_BYTES for RV32
To: Palmer Dabbelt
Cc: Atish Patra, Atish Patra, Albert Ou
, Anup Patel, Linux Kernel Mailing List, linux-riscv, Paul Walmsley, Nick Kossifidis, Andrew Morton, Ard Biesheuvel, Mike Rapoport
X-Mailing-List: linux-kernel@vger.kernel.org

On Sun, Jan 17, 2021 at 8:03 PM Geert Uytterhoeven wrote:
> On Fri, Jan 15, 2021 at 11:44 PM Palmer Dabbelt wrote:
> > On Thu, 14 Jan 2021 23:59:04 PST (-0800), geert@linux-m68k.org wrote:
> > > On Thu, Jan 14, 2021 at 10:11 PM Atish Patra wrote:
> > >> On Thu, Jan 14, 2021 at 11:46 AM Palmer Dabbelt wrote:
> > >> > On Thu, 14 Jan 2021 10:33:01 PST (-0800), atishp@atishpatra.org wrote:
> > >> > > On Wed, Jan 13, 2021 at 9:10 PM Palmer Dabbelt wrote:
> > >> > >>
> > >> > >> On Thu, 07 Jan 2021 01:26:51 PST (-0800), Atish Patra wrote:
> > >> > >> > SMP_CACHE_BYTES/L1_CACHE_BYTES should be defined as 32 instead of
> > >> > >> > 64 for RV32. Otherwise, there will be a hole of 32 bytes with each
> > >> > >> > memblock allocation if it is requested to be aligned with
> > >> > >> > SMP_CACHE_BYTES.
> > >> > >> >
> > >> > >> > Signed-off-by: Atish Patra
> > >> > >> > ---
> > >> > >> >  arch/riscv/include/asm/cache.h | 4 ++++
> > >> > >> >  1 file changed, 4 insertions(+)
> > >> > >> >
> > >> > >> > diff --git a/arch/riscv/include/asm/cache.h b/arch/riscv/include/asm/cache.h
> > >> > >> > index 9b58b104559e..c9c669ea2fe6 100644
> > >> > >> > --- a/arch/riscv/include/asm/cache.h
> > >> > >> > +++ b/arch/riscv/include/asm/cache.h
> > >> > >> > @@ -7,7 +7,11 @@
> > >> > >> >  #ifndef _ASM_RISCV_CACHE_H
> > >> > >> >  #define _ASM_RISCV_CACHE_H
> > >> > >> >
> > >> > >> > +#ifdef CONFIG_64BIT
> > >> > >> >  #define L1_CACHE_SHIFT 6
> > >> > >> > +#else
> > >> > >> > +#define L1_CACHE_SHIFT 5
> > >> > >> > +#endif
> > >> > >> >
> > >> > >> >  #define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
> > >> > >>
> > >> > >> Should we not instead just
> > >> > >>
> > >> > >>     #define SMP_CACHE_BYTES L1_CACHE_BYTES
> > >> > >>
> > >> > >> like a handful of architectures do?
> > >> > >
> > >> > > The generic code already defines it that way in include/linux/cache.h
> > >> > >
> > >> > >> The cache size is sort of fake here, as we don't have any non-coherent
> > >> > >> mechanisms, but IIRC we wrote somewhere that it's recommended to have
> > >> > >> 64-byte cache lines in RISC-V implementations, as software may assume
> > >> > >> that for performance reasons. Not really a strong reason, but I'd
> > >> > >> prefer to just make these match.
> > >> > >
> > >> > > If it is documented somewhere in the kernel, we should update that. I
> > >> > > think SMP_CACHE_BYTES being 64 actually degrades performance, as there
> > >> > > will be fragmented memory blocks with a 32-byte gap wherever
> > >> > > SMP_CACHE_BYTES is used as an alignment requirement.
> > >> >
> > >> > I don't buy that: if you're trying to align to the cache size then the
> > >> > gaps are the whole point.
> > >> > IIUC the 64-byte cache lines come from DDR, not XLEN, so there's really
> > >> > no reason for these to be different between the base ISAs.
> > >>
> > >> Got your point. I noticed this when fixing the resource-tree issue, where
> > >> the SMP_CACHE_BYTES alignment was not intentional but was causing the
> > >> issue. The real issue was solved via another patch in this series, though.
> > >>
> > >> Just to clarify: if the allocation function intends to allocate
> > >> consecutive memory, it should use 32 instead of SMP_CACHE_BYTES.
> > >> This will lead to an #ifdef macro in the code.
> > >>
> > >> > > In addition to that, Geert Uytterhoeven mentioned some panic on vex32
> > >> > > without this patch.
> > >> > > I didn't see anything in QEMU, though.
> > >> >
> > >> > Something like that is probably only going to show up on real hardware;
> > >> > QEMU doesn't really do anything with the cache-line size. That said, as
> > >> > there's nothing in our kernel now related to non-coherent memory, there
> > >> > really should only be a performance issue (at least until we have
> > >> > non-coherent systems).
> > >> >
> > >> > I'd bet that the change is just masking some other bug, either in the
> > >> > software or the hardware. I'd prefer to root-cause this rather than just
> > >> > work around it, as it'll probably come back later and in a more
> > >> > difficult way to find.
> > >>
> > >> Agreed. @Geert Uytterhoeven Can you do a further analysis of the panic
> > >> you were seeing?
> > >> We may need to change an alignment requirement to 32 for RV32 manually
> > >> somewhere in the code.
> > >
> > > My findings were in
> > > https://lore.kernel.org/linux-riscv/CAMuHMdWf6K-5y02+WJ6Khu1cD6P0n5x1wYQikrECkuNtAA1pgg@mail.gmail.com/
> > >
> > > Note that when the memblock.reserved list kept increasing, it kept on
> > > adding the same entry to the list. But that was fixed by "[PATCH 1/4]
> > > RISC-V: Do not allocate memblock while iterating reserved memblocks".
> > >
> > > After that, only the (reproducible) "Unable to handle kernel paging
> > > request at virtual address 61636473" was left, always at the same place.
> > > No idea where the actual corruption happened.
> >
> > Thanks. Presumably I need an FPGA to run this? That will make it tricky to
> > find anything here on my end.
>
> In theory, it should work with the LiteX simulation, too.
> I.e. follow the instructions at
> https://github.com/litex-hub/linux-on-litex-vexriscv
> You can find prebuilt binaries at
> https://github.com/litex-hub/linux-on-litex-vexriscv/issues/164
> Take images/opensbi.bin from opensbi_2020_12_15.zip, and
> images/rootfs.cpio from linux_2021_01_11.zip.
> Take images/Image from your own kernel build.
>
> Unfortunately, it seems the simulator is currently broken, and kernels
> (both prebuilt and my own config) hang after
> "sched_clock: 64 bits at 1000kHz, resolution 1000ns, wraps every
> 2199023255500ns".

In the meantime, the simulator got fixed.
Unfortunately, the crash does not happen on the simulator.

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like
that.
                                -- Linus Torvalds