Subject: Re: [PATCH] arm64: tegra: add topology data for Tegra194 cpu
From: Bo Yan <byan@nvidia.com>
To: Thierry Reding
Date: Mon, 11 Feb 2019 15:34:27 -0800

To make this simpler, I think it's best to isolate the cache information
in its own patch. So I will amend this patch to include topology
information only.
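
For the respin, the topology part would essentially be a cpu-map node under
the cpus node referencing the existing cpu nodes. Roughly like this (a sketch
only; it assumes cpu@0 gets a cl0_0 label in the same style as the other
labels in the diff below, and the final names may still change):

    cpu-map {
        cluster0 {
            core0 {
                cpu = <&cl0_0>;
            };
            core1 {
                cpu = <&cl0_1>;
            };
        };
        cluster1 {
            core0 {
                cpu = <&cl1_0>;
            };
            core1 {
                cpu = <&cl1_1>;
            };
        };
        cluster2 {
            core0 {
                cpu = <&cl2_0>;
            };
            core1 {
                cpu = <&cl2_1>;
            };
        };
        cluster3 {
            core0 {
                cpu = <&cl3_0>;
            };
            core1 {
                cpu = <&cl3_1>;
            };
        };
    };
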
On 1/31/19 3:29 PM, Bo Yan wrote:
>
> On 1/31/19 2:25 PM, Thierry Reding wrote:
>> On Thu, Jan 31, 2019 at 10:35:54AM -0800, Bo Yan wrote:
>>> The Xavier CPU architecture includes 8 CPU cores organized in
>>> 4 clusters. Add cpu-map data for topology initialization, add
>>> cache data for cache node creation in sysfs.
>>>
>>> Signed-off-by: Bo Yan <byan@nvidia.com>
>>> ---
>>>  arch/arm64/boot/dts/nvidia/tegra194.dtsi | 148 +++++++++++++++++++++++++++++--
>>>  1 file changed, 140 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/arch/arm64/boot/dts/nvidia/tegra194.dtsi b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
>>> index 6dfa1ca..7c2a1fb 100644
>>> --- a/arch/arm64/boot/dts/nvidia/tegra194.dtsi
>>> +++ b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
>>> @@ -870,63 +870,195 @@
>>>           #address-cells = <1>;
>>>           #size-cells = <0>;
>
>> These don't seem to be well-defined. They are mentioned in a very weird
>> location (Documentation/devicetree/booting-without-of.txt), but there
>> seem to be examples and other device tree files that use them, so maybe
>> those are all valid. It might be worth mentioning these in other places
>> where people can more easily find them.
>
> It might be logical to place a reference to this document
> (booting-without-of.txt) in architecture-specific documents, for
> example arm/cpus.txt. I see the need for improved documentation, but
> this is probably best done in a separate change.
>>
>> According to the above document, {i,d}-cache-line-size are deprecated in
>> favour of {i,d}-cache-block-size.
>
> Mostly this seems to be derived from the oddity of PowerPC, which might
> have different cache-line-size and cache-block-size values; I don't know
> if there are other examples. It looks like {i,d}-cache-line-size is used
> in dts files for almost all architectures; the only exception is
> arch/sh/boot/dts/j2_mimas_v2.dts. On ARM and ARM64, cache-line-size is
> the same as cache-block-size. So I am wondering whether
> booting-without-of.txt should be fixed instead, just to keep things
> consistent among dts files, especially on arm64.
>
>>
>> I also don't see any mention of {i,d}-cache_sets in the device tree
>> bindings, though riscv/cpus.txt mentions {i,d}-cache-sets (note the
>> hyphen instead of underscore) in the examples. arm/l2c2x0.txt and
>> arm/uniphier/cache-uniphier.txt describe cache-sets, though that's
>> slightly different.
>>
>> Might make sense to document all these in more standard places, maybe
>> by adding them to arm/cpus.txt. For consistency with other properties, I
>> think they should be called {i,d}-cache-sets like for RISC-V.
>>
>>> +            l2-cache = <&l2_0>;
>>
>> This seems to be called next-level-cache everywhere else, though it's
>> only formally described in arm/uniphier/cache-uniphier.txt. So it might
>> also make sense to add this to arm/cpus.txt.
>
> The improved documentation is certainly desired, I agree.
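
When the cache data comes back as its own patch, I'd expect a per-CPU node to
end up with the hyphenated sets spelling and next-level-cache instead of
l2-cache, something like this (illustration only, with the values copied from
the current diff):

    cl0_1: cpu@1 {
        compatible = "nvidia,tegra194-carmel", "arm,armv8";
        device_type = "cpu";
        reg = <0x10001>;
        enable-method = "psci";
        i-cache-size = <131072>;
        i-cache-line-size = <64>;
        i-cache-sets = <512>;
        d-cache-size = <65536>;
        d-cache-line-size = <64>;
        d-cache-sets = <256>;
        next-level-cache = <&l2_0>;
    };
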
>>
>>>          };
>>> -        cpu@1 {
>>> +        cl0_1: cpu@1 {
>>>              compatible = "nvidia,tegra194-carmel", "arm,armv8";
>>>              device_type = "cpu";
>>>              reg = <0x10001>;
>>>              enable-method = "psci";
>>> +            i-cache-size = <131072>;
>>> +            i-cache-line-size = <64>;
>>> +            i-cache-sets = <512>;
>>> +            d-cache-size = <65536>;
>>> +            d-cache-line-size = <64>;
>>> +            d-cache_sets = <256>;
>>> +            l2-cache = <&l2_0>;
>>>          };
>>> -        cpu@2 {
>>> +        cl1_0: cpu@2 {
>>>              compatible = "nvidia,tegra194-carmel", "arm,armv8";
>>>              device_type = "cpu";
>>>              reg = <0x100>;
>>>              enable-method = "psci";
>>> +            i-cache-size = <131072>;
>>> +            i-cache-line-size = <64>;
>>> +            i-cache-sets = <512>;
>>> +            d-cache-size = <65536>;
>>> +            d-cache-line-size = <64>;
>>> +            d-cache_sets = <256>;
>>> +            l2-cache = <&l2_1>;
>>>          };
>>> -        cpu@3 {
>>> +        cl1_1: cpu@3 {
>>>              compatible = "nvidia,tegra194-carmel", "arm,armv8";
>>>              device_type = "cpu";
>>>              reg = <0x101>;
>>>              enable-method = "psci";
>>> +            i-cache-size = <131072>;
>>> +            i-cache-line-size = <64>;
>>> +            i-cache-sets = <512>;
>>> +            d-cache-size = <65536>;
>>> +            d-cache-line-size = <64>;
>>> +            d-cache_sets = <256>;
>>> +            l2-cache = <&l2_1>;
>>>          };
>>> -        cpu@4 {
>>> +        cl2_0: cpu@4 {
>>>              compatible = "nvidia,tegra194-carmel", "arm,armv8";
>>>              device_type = "cpu";
>>>              reg = <0x200>;
>>>              enable-method = "psci";
>>> +            i-cache-size = <131072>;
>>> +            i-cache-line-size = <64>;
>>> +            i-cache-sets = <512>;
>>> +            d-cache-size = <65536>;
>>> +            d-cache-line-size = <64>;
>>> +            d-cache_sets = <256>;
>>> +            l2-cache = <&l2_2>;
>>>          };
>>> -        cpu@5 {
>>> +        cl2_1: cpu@5 {
>>>              compatible = "nvidia,tegra194-carmel", "arm,armv8";
>>>              device_type = "cpu";
>>>              reg = <0x201>;
>>>              enable-method = "psci";
>>> +            i-cache-size = <131072>;
>>> +            i-cache-line-size = <64>;
>>> +            i-cache-sets = <512>;
>>> +            d-cache-size = <65536>;
>>> +            d-cache-line-size = <64>;
>>> +            d-cache_sets = <256>;
>>> +            l2-cache = <&l2_2>;
>>>          };
>>> -        cpu@6 {
>>> +        cl3_0: cpu@6 {
>>>              compatible = "nvidia,tegra194-carmel", "arm,armv8";
>>>              device_type = "cpu";
>>>              reg = <0x10300>;
>>>              enable-method = "psci";
>>> +            i-cache-size = <131072>;
>>> +            i-cache-line-size = <64>;
>>> +            i-cache-sets = <512>;
>>> +            d-cache-size = <65536>;
>>> +            d-cache-line-size = <64>;
>>> +            d-cache_sets = <256>;
>>> +            l2-cache = <&l2_3>;
>>>          };
>>> -        cpu@7 {
>>> +        cl3_1: cpu@7 {
>>>              compatible = "nvidia,tegra194-carmel", "arm,armv8";
>>>              device_type = "cpu";
>>>              reg = <0x10301>;
>>>              enable-method = "psci";
>>> +            i-cache-size = <131072>;
>>> +            i-cache-line-size = <64>;
>>> +            i-cache-sets = <512>;
>>> +            d-cache-size = <65536>;
>>> +            d-cache-line-size = <64>;
>>> +            d-cache_sets = <256>;
>>> +            l2-cache = <&l2_3>;
>>>          };
>>>      };
>>> +    l2_0: l2-cache0 {
>>> +        cache-size = <2097152>;
>>> +        cache-line-size = <64>;
>>> +        cache-sets = <2048>;
>>> +        next-level-cache = <&l3>;
>>> +    };
>>
>> Does this need a compatible string? Also, are there controllers behind
>> these caches? I'm just wondering if these also need reg properties and
>> unit-addresses.
>
> No need for a compatible string, and no reg properties or addresses.
> These nodes are parsed generically by drivers/of/base.c and
> drivers/base/cacheinfo.c.
>>
>> arm/l2c2x0.txt and arm/uniphier/cache-uniphier.txt describe an
>> additional property that you don't specify here: cache-level. This
>> sounds useful to have so that we don't have to guess the cache level
>> from the name, which may or may not work depending on what people name
>> the nodes.
>
> The cache-level property is implied by the device tree hierarchy, so
> after the system boots up I can find the cache level in the related
> sysfs nodes:
>
>     [root@alarm cache]# cat index*/level
>     1
>     1
>     2
>     3
>
>>
>> Also, similar to the L1 cache, cache-block-size is preferred over
>> cache-line-size.
>>
>>> +    l3: l3-cache {
>>> +        cache-size = <4194304>;
>>> +        cache-line-size = <64>;
>>> +        cache-sets = <4096>;
>>> +    };
>>
>> The same comments apply as for the L2 caches.
>>
>> Thierry
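
If the explicit property is preferred over inferring the level from the
hierarchy, the standalone cache nodes in the follow-up cache patch could
simply carry it, along these lines (a sketch only, same values as in the
diff above):

    l2_0: l2-cache0 {
        cache-size = <2097152>;
        cache-line-size = <64>;
        cache-sets = <2048>;
        cache-level = <2>;
        next-level-cache = <&l3>;
    };

    l3: l3-cache {
        cache-size = <4194304>;
        cache-line-size = <64>;
        cache-sets = <4096>;
        cache-level = <3>;
    };

For what it's worth, the geometry is self-consistent if the usual
size = sets * ways * line-size relation is assumed: 2097152 / (2048 * 64)
works out to 16 ways for L2, 4194304 / (4096 * 64) to 16 ways for L3, and
the L1 numbers similarly come out to 4 ways for both I and D caches.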