References:
 <20200908075716.30357-1-manivannan.sadhasivam@linaro.org>
 <20200908075716.30357-5-manivannan.sadhasivam@linaro.org>
 <20200908150945.GB2352@mani-NUC7i5DNKE>
In-Reply-To: <20200908150945.GB2352@mani-NUC7i5DNKE>
From: Amit Kucheria
Date: Tue, 8 Sep 2020 22:36:53 +0530
Subject: Re: [PATCH 4/7] cpufreq: qcom-hw: Make use of of_match data for offsets and row size
To: Manivannan Sadhasivam
Cc: "Rafael J. Wysocki", Viresh Kumar, Rob Herring, Andy Gross,
 Bjorn Andersson, Linux PM list,
 "open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS", LKML,
 linux-arm-msm, Dmitry Baryshkov, Taniya Das
List-ID: linux-kernel@vger.kernel.org

On Tue, Sep 8, 2020 at 8:40 PM Manivannan Sadhasivam wrote:
>
> On 0908, Amit Kucheria wrote:
> > On Tue, Sep 8, 2020 at 1:27 PM Manivannan Sadhasivam wrote:
> > >
> > > For preparing the driver to handle further SoC revisions, let's use the
> > > of_match data for getting the device-specific offsets and row size instead
> > > of defining them globally.
> > >
> > > Signed-off-by: Manivannan Sadhasivam
> > > ---
> > >  drivers/cpufreq/qcom-cpufreq-hw.c | 96 +++++++++++++++++++++----------
> > >  1 file changed, 66 insertions(+), 30 deletions(-)
> > >
> > > diff --git a/drivers/cpufreq/qcom-cpufreq-hw.c b/drivers/cpufreq/qcom-cpufreq-hw.c
> > > index ccea34f61152..41853db7c9b8 100644
> > > --- a/drivers/cpufreq/qcom-cpufreq-hw.c
> > > +++ b/drivers/cpufreq/qcom-cpufreq-hw.c
> > > @@ -19,15 +19,21 @@
> > >  #define LUT_L_VAL			GENMASK(7, 0)
> > >  #define LUT_CORE_COUNT			GENMASK(18, 16)
> > >  #define LUT_VOLT			GENMASK(11, 0)
> > > -#define LUT_ROW_SIZE			32
> > >  #define CLK_HW_DIV			2
> > >  #define LUT_TURBO_IND			1
> > >
> > > -/* Register offsets */
> > > -#define REG_ENABLE			0x0
> > > -#define REG_FREQ_LUT			0x110
> > > -#define REG_VOLT_LUT			0x114
> > > -#define REG_PERF_STATE			0x920
> > > +struct qcom_cpufreq_soc_data {
> > > +	u32 reg_enable;
> > > +	u32 reg_freq_lut;
> > > +	u32 reg_volt_lut;
> > > +	u32 reg_perf_state;
> > > +	u8 lut_row_size;
> > > +};
> > > +
> > > +struct qcom_cpufreq_data {
> > > +	void __iomem *base;
> > > +	const struct qcom_cpufreq_soc_data *soc_data;
> > > +};
> > >
> > >  static unsigned long cpu_hw_rate, xo_rate;
> > >  static bool icc_scaling_enabled;
> > > @@ -76,10 +82,11 @@ static int qcom_cpufreq_update_opp(struct device *cpu_dev,
> > >  static int qcom_cpufreq_hw_target_index(struct cpufreq_policy *policy,
> > >  					unsigned int index)
> > >  {
> > > -	void __iomem *perf_state_reg = policy->driver_data;
> > > +	struct qcom_cpufreq_data *data = policy->driver_data;
> > > +	const struct qcom_cpufreq_soc_data *soc_data = data->soc_data;
> > >  	unsigned long freq = policy->freq_table[index].frequency;
> > >
> > > -	writel_relaxed(index, perf_state_reg);
> > > +	writel_relaxed(index, data->base + soc_data->reg_perf_state);
> > >
> > >  	if (icc_scaling_enabled)
> > >  		qcom_cpufreq_set_bw(policy, freq);
> > > @@ -91,7 +98,8 @@ static int qcom_cpufreq_hw_target_index(struct cpufreq_policy *policy,
> > >
> > >  static unsigned int qcom_cpufreq_hw_get(unsigned int cpu)
> > >  {
> > > -	void __iomem *perf_state_reg;
> > > +	struct qcom_cpufreq_data *data;
> > > +	const struct qcom_cpufreq_soc_data *soc_data;
> > >  	struct cpufreq_policy *policy;
> > >  	unsigned int index;
> > >
> > > @@ -99,9 +107,10 @@ static unsigned int qcom_cpufreq_hw_get(unsigned int cpu)
> > >  	if (!policy)
> > >  		return 0;
> > >
> > > -	perf_state_reg = policy->driver_data;
> > > +	data = policy->driver_data;
> > > +	soc_data = data->soc_data;
> > >
> > > -	index = readl_relaxed(perf_state_reg);
> > > +	index = readl_relaxed(data->base + soc_data->reg_perf_state);
> > >  	index = min(index, LUT_MAX_ENTRIES - 1);
> > >
> > >  	return policy->freq_table[index].frequency;
> > > @@ -110,12 +119,13 @@ static unsigned int qcom_cpufreq_hw_get(unsigned int cpu)
> > >  static unsigned int qcom_cpufreq_hw_fast_switch(struct cpufreq_policy *policy,
> > >  						unsigned int target_freq)
> > >  {
> > > -	void __iomem *perf_state_reg = policy->driver_data;
> > > +	struct qcom_cpufreq_data *data = policy->driver_data;
> > > +	const struct qcom_cpufreq_soc_data *soc_data = data->soc_data;
> > >  	unsigned int index;
> > >  	unsigned long freq;
> > >
> > >  	index = policy->cached_resolved_idx;
> > > -	writel_relaxed(index, perf_state_reg);
> > > +	writel_relaxed(index, data->base + soc_data->reg_perf_state);
> > >
> > >  	freq = policy->freq_table[index].frequency;
> > >  	arch_set_freq_scale(policy->related_cpus, freq,
> > > @@ -125,8 +135,7 @@ static unsigned int qcom_cpufreq_hw_fast_switch(struct cpufreq_policy *policy,
> > >  }
> > >
> > >  static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev,
> > > -				    struct cpufreq_policy *policy,
> > > -				    void __iomem *base)
> > > +				    struct cpufreq_policy *policy)
> > >  {
> > >  	u32 data, src, lval, i, core_count, prev_freq = 0, freq;
> > >  	u32 volt;
> > > @@ -134,6 +143,8 @@ static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev,
> > >  	struct dev_pm_opp *opp;
> > >  	unsigned long rate;
> > >  	int ret;
> > > +	struct qcom_cpufreq_data *drv_data = policy->driver_data;
> > > +	const struct qcom_cpufreq_soc_data *soc_data = drv_data->soc_data;
> > >
> > >  	table = kcalloc(LUT_MAX_ENTRIES + 1, sizeof(*table), GFP_KERNEL);
> > >  	if (!table)
> > > @@ -160,14 +171,14 @@ static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev,
> > >  	}
> > >
> > >  	for (i = 0; i < LUT_MAX_ENTRIES; i++) {
> > > -		data = readl_relaxed(base + REG_FREQ_LUT +
> > > -				     i * LUT_ROW_SIZE);
> > > +		data = readl_relaxed(drv_data->base + soc_data->reg_freq_lut +
> > > +				     i * soc_data->lut_row_size);
> > >  		src = FIELD_GET(LUT_SRC, data);
> > >  		lval = FIELD_GET(LUT_L_VAL, data);
> > >  		core_count = FIELD_GET(LUT_CORE_COUNT, data);
> > >
> > > -		data = readl_relaxed(base + REG_VOLT_LUT +
> > > -				     i * LUT_ROW_SIZE);
> > > +		data = readl_relaxed(drv_data->base + soc_data->reg_volt_lut +
> > > +				     i * soc_data->lut_row_size);
> > >  		volt = FIELD_GET(LUT_VOLT, data) * 1000;
> > >
> > >  		if (src)
> > > @@ -237,6 +248,20 @@ static void qcom_get_related_cpus(int index, struct cpumask *m)
> > >  	}
> > >  }
> > >
> > > +static const struct qcom_cpufreq_soc_data qcom_soc_data = {
> >
> > rename this to sdm845_soc_data?
> >
>
> Nah, this is not specific to SDM845. At least in mainline, there are 3 SoCs
> using this compatible.
>
> > Or even better, maybe just use the IP version number for this IP block
> > so that all SoCs using that IP version can use this struct?
> >
>
> Since the SoCs are using the same compatible, it makes sense to use the same
> name for the of_data. I don't think it is a good idea to use a different name
> for the of_data since the differentiation has to happen at the compatible level.

You are using the name sm8250_soc_data in a subsequent patch, though ;-)

So I think it would make sense for compatible "qcom,cpufreq-hw" to use
data "osm_soc_data" and compatible "qcom,sm8250-epss" to use data
"epss_soc_data" as suggested by Bjorn.
Regards,
Amit

> > > +	.reg_enable = 0x0,
> > > +	.reg_freq_lut = 0x110,
> > > +	.reg_volt_lut = 0x114,
> > > +	.reg_perf_state = 0x920,
> > > +	.lut_row_size = 32,
> > > +};
> > > +
> > > +static const struct of_device_id qcom_cpufreq_hw_match[] = {
> > > +	{ .compatible = "qcom,cpufreq-hw", .data = &qcom_soc_data },
> > > +	{}
> > > +};