From: Shunyong Yang <shunyong.yang@hxt-semitech.com>
To:
Cc: Shunyong Yang, Joey Zheng
Subject: [PATCH] cpufreq: cppc_cpufreq: Initialize shared cpu's perf capabilities
Date: Wed, 28 Mar 2018 17:31:54 +0800
Message-ID: <1522229514-14377-1-git-send-email-shunyong.yang@hxt-semitech.com>
When multiple CPUs are related in one cpufreq policy, the first online CPU
will be chosen by default to handle cpufreq operations. Let's take cpu0 and
cpu1 as an example. When cpu0 is offline, policy->cpu will be shifted to
cpu1. Cpu1's perf capabilities should be initialized; otherwise, its perf
capabilities are all zeros and frequency changes cannot take effect.

This patch copies the perf capabilities of the first online CPU to the
other shared CPUs when the policy shared type is CPUFREQ_SHARED_TYPE_ANY.

Cc: Joey Zheng
Signed-off-by: Shunyong Yang
---
The original RFC link: https://patchwork.kernel.org/patch/10299055/.
This patch solves the same issue as the RFC above. The patch name has been
changed because the code is now quite different from the RFC. Per Viresh
Kumar's comments, the extra init() has been removed and only the CPPC
CPUFREQ_SHARED_TYPE_ANY case is handled.
---
 drivers/cpufreq/cppc_cpufreq.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/drivers/cpufreq/cppc_cpufreq.c b/drivers/cpufreq/cppc_cpufreq.c
index 8f7b21a4d537..dc625a93a58e 100644
--- a/drivers/cpufreq/cppc_cpufreq.c
+++ b/drivers/cpufreq/cppc_cpufreq.c
@@ -164,8 +164,18 @@ static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy)
 	policy->cpuinfo.transition_latency = cppc_get_transition_latency(cpu_num);
 	policy->shared_type = cpu->shared_type;
 
-	if (policy->shared_type == CPUFREQ_SHARED_TYPE_ANY)
+	if (policy->shared_type == CPUFREQ_SHARED_TYPE_ANY) {
+		int i;
+
 		cpumask_copy(policy->cpus, cpu->shared_cpu_map);
+
+		for_each_cpu(i, policy->cpus) {
+			if (i != policy->cpu)
+				memcpy(&all_cpu_data[i]->perf_caps,
+				       &cpu->perf_caps,
+				       sizeof(cpu->perf_caps));
+		}
+	}
 	else if (policy->shared_type == CPUFREQ_SHARED_TYPE_ALL) {
 		/* Support only SW_ANY for now. */
 		pr_debug("Unsupported CPU co-ord type\n");
-- 
1.8.3.1