Date: Sat, 10 Mar 2018 15:45:36 +0000
From: Mark Rutland
To: Saravana Kannan
Cc: Suzuki K Poulose, will.deacon@arm.com, robh@kernel.org,
    sudeep.holla@arm.com, mathieu.poirier@linaro.org, peterz@infradead.org,
    jonathan.cameron@huawei.com, linux-kernel@vger.kernel.org,
    marc.zyngier@arm.com, leo.yan@linaro.org, frowand.list@gmail.com,
    linux-arm-kernel@lists.infradead.org, rananta@codeaurora.org,
    avilaj@codeaurora.org, Lorenzo Pieralisi, Charles Garcia-Tobin
Subject: Re: [PATCH v11 8/8] perf: ARM DynamIQ Shared Unit PMU support
Message-ID: <20180310154527.f6j2fgqimbtn4eme@salmiak>
References: <20180102112533.13640-1-suzuki.poulose@arm.com>
 <20180102112533.13640-9-suzuki.poulose@arm.com>
 <5AA1CE48.5030203@codeaurora.org>
 <20180309133531.fepm2suvdmvm4muv@lakrids.cambridge.arm.com>
 <5AA30F5C.2010402@codeaurora.org>
In-Reply-To: <5AA30F5C.2010402@codeaurora.org>
User-Agent: NeoMutt/20170113 (1.7.2)
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Mar 09, 2018 at 02:49:00PM -0800, Saravana Kannan wrote:
> > > > Looking at the code, I didn't see any specific handling of cluster
> > > > power collapse. AFAIK, the HW counters do not retain config (what event
> > > > they are counting) or value (the current count) across power collapse.
> > > > Wouldn't you need to register for some kind of PM_ENTER/EXIT notifiers
> > > > to handle that?
> > >
> > > Good point, yes *somebody* needs to save-restore the registers. But who ? As far
> > > as the kernel is concerned, it doesn't control the DSU states. Also, as of now
> > > there is no reliable way to get the "ENTER/EXIT" notifications for the DSU power
> > > domain state changes. All we do is use the PMU, assuming it is available. AFAIT,
> > > it should really be done at EL3, which manages the DSU, but may be I am wrong.
> >
> > Given this can happen behind the back of the kernel, if FW doesn't
> > save/restore this state, we'll have to inhibit cpuidle on a CPU
> > associated with the DSU PMU whenever it has active events, which would
> > keep the cluster online.
>
> Using PMUs should be designed to have the least impact on power/performance.
> Otherwise, using them to profile and debug issues becomes impossible.
> Disabling cpuidle would significantly affect power and performance.
>
> Why not use CPU_CLUSTER_PM_ENTER similar to how arm-pmu.c uses CPU_PM_ENTER
> for saving and restoring the counters?

The CPU_CLUSTER_PM_{ENTER,EXIT} notifications only exist on particular
32-bit platforms where the kernel directly manages CPU power states.
These do not exist on systems where FW manages the cluster power state
(e.g. on any arm64 platform with PSCI). So we cannot rely on
CPU_CLUSTER_PM_{ENTER,EXIT} in the DSU PMU driver.

The arm-pmu code manages state which is strictly CPU affine, so we can
rely on the CPU_PM_{ENTER,EXIT} notifications there.

We cannot rely on CPU_PM_{ENTER,EXIT} notifications to manage per-cluster
state, as CPUs can race to enter/exit idle, and we cannot track the
cluster state accurately in the kernel without serializing idle entry/exit
across CPUs, which will affect power/performance.

Thanks,
Mark.