Date: Fri, 20 Mar 2020 11:25:38 +0000
From: Mark Rutland
To: Tuan Phan
Cc: Tuan Phan, Will Deacon, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH] driver/perf: Add PMU driver for the ARM DMC-620 memory controller.
Message-ID: <20200320105315.GA35932@C02TD0UTHF1T.local>
In-Reply-To: <23AD5E45-15E3-4487-9B0D-0D9554DD9DE8@amperemail.onmicrosoft.com>

On Thu, Mar 19, 2020 at 12:03:43PM -0700, Tuan Phan wrote:
> Hi Mark,
> Please find my comments below.

Hi Tuan,

As Will said, *please* set up a more usual mail client configuration if
you can. The reply style (with lines starting with '=>') is unusual and
very hard to spot.

> > On Mar 19, 2020, at 8:16 AM, Mark Rutland wrote:
> >
> > Hi Tuan,
> >
> > On Tue, Mar 17, 2020 at 05:29:38PM -0700, Tuan Phan wrote:
> >> DMC-620 PMU supports total 10 counters which each is
> >> independently programmable to different events and can
> >> be started and stopped individually.
> >
> > Looking at the TRM for DMC-620, the PMU registers are not in a separate
> > frame from the other DMC control registers, and start at offset 0xA00
> > (AKA 2560). I would generally expect that access to the DMC control
> > registers was restricted to the secure world; is that not the case on
> > your platform?
> >
> > I ask because if those are not restricted, the Normal world could
> > potentially undermine the Secure world through this (e.g. playing with
> > training settings, messing with the physical memory map, injecting RAS
> > errors). Have you considered this?
> => Only PMU registers can be accessed within normal world. I only pass
> PMU registers (offset 0xA00) to kernel so shouldn't be problem.
Sure, you only *describe* that in the ACPI tables, but I can't see how
that access control is enforced in the hardware, because these registers
fall in the same 4K page as other control registers, and AFAICT the IP
doesn't distinguish S/NS accesses.

If the Secure world on your part uses DRAM (including the secure
portions of IPs like SMMUs), you *must* ensure that the Normal world
cannot access those other control registers, or it can corrupt Secure
world state and escalate privilege. Is that not a concern here?

> >> DMC-620 PMU devices are named as arm_dmc620_<uid> where
> >> <uid> is the UID of DMC-620 PMU ACPI node. Currently, it only
> >> supports ACPI. Other platforms feel free to test and add
> >> support for device tree.
> >>
> >> Usage example:
> >> #perf stat -e arm_dmc620_0/clk_cycle_count/ -C 0
> >> Get perf event for clk_cycle_count counter.
> >>
> >> #perf stat -e arm_dmc620_0/clkdiv2_allocate,mask=0x1f,match=0x2f,
> >> incr=2,invert=1/ -C 0
> >> The above example shows how to specify mask, match, incr,
> >> invert parameters for clkdiv2_allocate event.
> >
> > [...]
> >
> >> +#define DMC620_CNT_MAX_PERIOD			0xffffffff
> >> +#define DMC620_PMU_CLKDIV2_MAX_COUNTERS		8
> >> +#define DMC620_PMU_CLK_MAX_COUNTERS		2
> >> +#define DMC620_PMU_MAX_COUNTERS \
> >> +	(DMC620_PMU_CLKDIV2_MAX_COUNTERS + DMC620_PMU_CLK_MAX_COUNTERS)
> >> +
> >> +#define DMC620_PMU_OVERFLOW_STATUS_CLKDIV2_OFFSET	8
> >
> > This appears to be relative to 0xA00. What exactly does your ACPI
> > description provide? The whole set of DMC registers, or just the PMU
> > registers?
> => Just PMU registers from 0xA00 to 0xBFF.

I don't believe that is correct; see below w.r.t. the ACPI binding.
> >> +static int arm_dmc620_pmu_dev_init(struct arm_dmc620_pmu *dmc620_pmu)
> >> +{
> >> +	struct platform_device *pdev = dmc620_pmu->pdev;
> >> +	int ret;
> >> +
> >> +	ret = devm_request_irq(&pdev->dev, dmc620_pmu->irq,
> >> +			arm_dmc620_pmu_handle_irq,
> >> +			IRQF_SHARED,
> >> +			dev_name(&pdev->dev), dmc620_pmu);
> >
> > This should have IRQF_NOBALANCING | IRQF_NO_THREAD. I don't think we
> > should have IRQF_SHARED.
> => I agree on having IRQF_NOBALANCING and IRQF_NO_THREAD. But
> IRQF_SHARED is needed. In our platform all DMC620s share same IRQs and
> any cpus can access the pmu registers.

Linux needs to ensure that the same instance is consistently accessed
from the same CPU, and needs to migrate the IRQ to handle that. Given we
do that on a per-instance basis, we cannot share the IRQ with another
instance.

Please feed back to your HW designers that muxing IRQs like this causes
significant problems for software.

> >
> > [...]
> >
> >> +static const struct acpi_device_id arm_dmc620_acpi_match[] = {
> >> +	{ "ARMHD620", 0},
> >> +	{},
> >> +};
> >
> > Just to check, was this ID allocated by Arm, or have you allocated it?
> => ID was allocated by ARM. Please refer to
> https://static.docs.arm.com/den0093/a/DEN0093_ACPI_Arm_Components_1.0.pdf

Thanks for the link!

For this _HID, the document says the _CRS contains a QWordMemory Base
address, whose full description is:

| Base Address of the DMC620 in the system address map

... which would presumably mean the *entire* set of MMIO registers, not
just the PMU portion.

The example firmly hints that it is the entire set of MMIO registers:

| In this example, the DMC620 register space is mapped to a 64K range
| that begins at offset 0x80CAFE0000 in the system address space

Thanks,
Mark.