Subject: Re: [PATCH v3 6/7] thermal/drivers/cpu_cooling: Introduce the cpu idle cooling driver
To: Daniel Lezcano
Cc: viresh.kumar@linaro.org, edubezval@gmail.com, Sudeep Holla,
    kevin.wangtao@linaro.org, leo.yan@linaro.org, vincent.guittot@linaro.org,
    linux-kernel@vger.kernel.org, javi.merino@kernel.org, rui.zhang@intel.com,
    daniel.thompson@linaro.org, linux-pm@vger.kernel.org, Amit Daniel Kachhap
References: <1522945005-7165-1-git-send-email-daniel.lezcano@linaro.org>
 <1522945005-7165-7-git-send-email-daniel.lezcano@linaro.org>
From: Sudeep Holla
Organization: ARM
Message-ID: <3f3b3b7a-3b74-aee2-2fac-f2759babe3f0@arm.com>
Date: Fri, 13 Apr 2018 12:23:15 +0100
In-Reply-To: <1522945005-7165-7-git-send-email-daniel.lezcano@linaro.org>

Hi Daniel,

On 05/04/18 17:16, Daniel Lezcano wrote:

[...]

> +/**
> + * cpuidle_cooling_register - Idle cooling device initialization function
> + *
> + * This function is in charge of creating a cooling device per cluster
> + * and registering it with the thermal framework. For this we rely on
> + * the topology, as there is nothing yet that better describes the idle
> + * state power domains.
> + *
> + * We create a cpuidle cooling device per cluster. For this reason we
> + * must, for each cluster, allocate and initialize the cooling device,
> + * and for each cpu belonging to this cluster, do the initialization
> + * on a per-cpu basis.
> + *
> + * This approach to creating the cooling device is needed as we have
> + * no guarantee that the CPU numbering is sequential.
> + *
> + * Unfortunately, there is no API to walk the topology from top to
> + * bottom, cluster->cpu, only the usual for_each_possible_cpu loop.
> + * In order to solve that, we use a cpumask to flag each cluster_id we
> + * have already processed. The cpumask will always have enough room for
> + * all the clusters because it is based on NR_CPUS and it is not
> + * possible to have more clusters than cpus.
> + */
> +void __init cpuidle_cooling_register(void)
> +{
> +	struct cpuidle_cooling_device *idle_cdev = NULL;
> +	struct thermal_cooling_device *cdev;
> +	struct device_node *np;
> +	cpumask_var_t cpumask;
> +	char dev_name[THERMAL_NAME_LENGTH];
> +	int ret = -ENOMEM, cpu;
> +	int cluster_id;
> +
> +	if (!zalloc_cpumask_var(&cpumask, GFP_KERNEL))
> +		return;
> +
> +	for_each_possible_cpu(cpu) {
> +
> +		cluster_id = topology_physical_package_id(cpu);
> +		if (cpumask_test_cpu(cluster_id, cpumask))
> +			continue;

Sorry for chiming in randomly, I haven't read the patches in detail.
But it was brought to my notice that topology_physical_package_id is
being used as the cluster ID here. That is completely wrong and will
change soon with the ACPI topology related changes Jeremy is working
on. You will get the physical socket number (which is mostly 0 on a
single-SoC system). Make sure that this won't break with that change.

Please note that, with the cluster id not being defined
architecturally, relying on it is simply problematic.

--
Regards,
Sudeep
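
To make the concern above concrete, here is a minimal sketch (an
illustration only, not taken from the patch; the helper name is made
up) of what topology_physical_package_id() typically reports on a
single-SoC system, and why a loop that treats that value as a cluster
id would collapse to a single cooling device instead of one per
cluster:

	#include <linux/cpumask.h>
	#include <linux/init.h>
	#include <linux/printk.h>
	#include <linux/topology.h>

	/* Hypothetical debug helper, assuming a single-socket system. */
	static void __init dump_package_ids(void)
	{
		int cpu;

		/*
		 * On a single-SoC system every CPU typically reports
		 * package id 0, so de-duplicating on this value would
		 * register only one cooling device for the whole
		 * system rather than one per cluster.
		 */
		for_each_possible_cpu(cpu)
			pr_info("cpu%d: package id %d\n", cpu,
				topology_physical_package_id(cpu));
	}

The sketch only shows the values being reported; it does not propose an
alternative source for the cluster id.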