Subject: [PATCH v10 0/6] PCI: qcom: Add support for OPP

This series adds OPP support to vote for the performance state of the RPMh
power domain based on the speed at which the PCIe link got enumerated.

QCOM Resource Power Manager-hardened (RPMh) is a hardware block that
maintains the hardware state of a regulator by performing max aggregation of
the requests made by all of the processors.

The PCIe controller can operate at different RPMh performance states of the
power domain, based on the speed of the link. This performance state varies
from target to target.

It is mandatory to scale the performance state based on the speed at which
the PCIe link operates, so that the SoC can run under optimum power conditions.

Add Operating Performance Points (OPP) support to vote for the RPMh state
based on the GEN speed at which the link is operating.

Before link up, the PCIe driver votes for the maximum performance state.

Now that the ICC BW vote is being added to the OPP table, and since the ICC
BW depends on both the GEN speed and the link width, using opp-level to index
the OPP table entries would be difficult.

In PCIe, certain GEN speed and width combinations, such as 2.5 GT/s x2 &
5.0 GT/s x1, or 8.0 GT/s x2 & 16 GT/s x1, use the same ICC bandwidth. If
frequency is used in the OPP table to represent the aggregate PCIe speed,
the number of OPP entries can therefore be reduced.

So this series goes back to using frequency in the OPP table instead of
level, as illustrated below.
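
For illustration (a sketch: 'speed' and 'width' are the Link Status fields
the driver already decodes, and pcie_link_speed_to_mbps() is introduced
later in this series), the OPP "frequency" encodes the aggregate link rate:

    /* per-lane rate in Mbps * 1000 * lane count */
    freq = pcie_link_speed_to_mbps(pcie_link_speed[speed]) * 1000 * width;

    2.5 GT/s x2 -> 2500  * 1000 * 2 =  5000000
    5.0 GT/s x1 -> 5000  * 1000 * 1 =  5000000 (same OPP entry)
    8.0 GT/s x2 -> 8000  * 1000 * 2 = 16000000
    16  GT/s x1 -> 16000 * 1000 * 1 = 16000000 (same OPP entry)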

To access the host controller registers and the endpoint BAR/config space,
the CPU-PCIe ICC (interconnect) path must be voted for; otherwise the access
may lead to an NoC (Network on Chip) timeout. We are currently surviving
only because another driver votes for this path.

As there is less traffic on this path compared to the PCIe-to-MEM path, add
a minimum vote of 1 KBps bandwidth, which is sufficient to keep the path
active and is the value recommended by the HW team.
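
A minimal sketch of that keep-alive vote (mirroring patch 2/6 in this
series; kBps_to_icc() is the standard helper from <linux/interconnect.h>):

    pcie->icc_cpu = devm_of_icc_get(pci->dev, "cpu-pcie");
    if (IS_ERR(pcie->icc_cpu))
        return PTR_ERR(pcie->icc_cpu);

    /* 1 KBps is enough to keep the CPU-PCIe path active */
    ret = icc_set_bw(pcie->icc_cpu, 0, kBps_to_icc(1));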

During suspend-to-RAM, DBI access can happen very late (while disabling the
boot CPU), so keep the CPU-PCIe ICC path enabled in that case; otherwise,
disable it once register space access is done.

Signed-off-by: Krishna chaitanya chundru <[email protected]>
---
Changes from v9:
- Disable the CPU-PCIe interconnect path only when the system is not
  suspending to RAM.
- Fail the probe if the OPP frequency lookup fails in probe, as suggested by Mani.
- Modify comments as suggested by Mani.
- Link to v9: https://lore.kernel.org/r/[email protected]
Changes from v8:
- Removed the Acked-by and Reviewed-by tags on the dt-bindings patch as the
  bindings moved to new files.
- Removed the dt-binding patch for interconnects as it is added in the common file.
- Added tags for interconnect as suggested by Konrad.
- Added comments as suggested by Mani.
- In the ICC BW vote for the CPU-PCIe path, if icc_disable() fails, log an
  error and return instead of re-initializing.
- Link to v8: https://lore.kernel.org/linux-arm-msm/[email protected]/
Changes from v7:
- Fixed the compilation issue in patch 3.
- Changed the commit text and wrapped the comments to 80 columns as suggested by Bjorn.
- Removed the PCIE_MBS2FREQ macro as it is used only by Qcom drivers.
- Link to v7: https://lore.kernel.org/r/[email protected]
Changes from v6:
- Change the CPU-PCIe bandwidth to 1 KBps as suggested by the HW team.
- Create a new API to get the frequency based on the PCIe speed, as suggested
  by Mani.
- Update a few commit texts and comments.
- Set the OPP to NULL in suspend to remove any votes.
- Link for v6: https://lore.kernel.org/linux-arm-msm/[email protected]/
Changes from v5:
- Add ICC BW voting as part of OPP and rebase onto the latest kernel; since
  only one of OPP or ICC BW voting will be supported, remove the patch that
  returned an error from the ICC OPP update.
- As the ICC BW voting is now part of the OPP table, the Reviewed-by tags
  given on the previous patch are not carried over.
- Use the OPP frequency to find OPP entries, as the PCIe link width now also
  needs to be taken into consideration.
- Add CPU-PCIe BW voting, which was not present until now.
- Drop 'PCI: qcom: Return error from qcom_pcie_icc_update' since only one of
  OPP or ICC BW voting executes, and there is no need to fail if the OPP or
  ICC update fails.
- Link for v5: https://lore.kernel.org/linux-arm-msm/20231101063323.GH2897@thinkpad/T/
Changes from v4:
- Added a separate patch for returning an error from qcom_pcie_icc_update(),
  moved the OPP update logic into the ICC update, and used a bool variable to
  update the OPP.
- Addressed comments made by Pavan.
Changes from v3:
- Removed the OPP vote on suspend when the link is not up, and added debug
  prints as suggested by Pavan.
- Added the dev_pm_opp_find_level_floor() API to find the highest OPP to vote for.
Changes from v2:
- Use a level-based OPP search instead of a frequency-based one, as suggested
  by Dmitry Baryshkov.
Changes from v1:
- Addressed comments from Krzysztof Kozlowski.
- Added the rpmhpd_opp_xxx phandle as suggested by Pavan.
- Added the dev_pm_opp_set_opp() API call which was missed in the previous patch.
---

---
Krishna chaitanya chundru (6):
arm64: dts: qcom: sm8450: Add interconnect path to PCIe node
PCI: qcom: Add ICC bandwidth vote for CPU to PCIe path
dt-bindings: pci: qcom: Add OPP table
arm64: dts: qcom: sm8450: Add OPP table support to PCIe
PCI: Bring the PCIe speed to MBps logic to new pcie_link_speed_to_mbps()
PCI: qcom: Add OPP support to scale performance state of power domain

.../devicetree/bindings/pci/qcom,pcie-sm8450.yaml | 4 +
arch/arm64/boot/dts/qcom/sm8450.dtsi | 89 +++++++++++++++
drivers/pci/controller/dwc/pcie-qcom.c | 122 ++++++++++++++++++---
drivers/pci/pci.c | 19 +---
drivers/pci/pci.h | 22 ++++
5 files changed, 221 insertions(+), 35 deletions(-)
---
base-commit: 6c6e47d69d821047097909288b6d7f1aafb3b9b1
change-id: 20240406-opp_support-ca095eb032b4

Best regards,
--
Krishna chaitanya chundru <[email protected]>



Subject: [PATCH v10 1/6] arm64: dts: qcom: sm8450: Add interconnect path to PCIe node

Add the PCIe-MEM and CPU-PCIe interconnect paths to the PCIe nodes.

Reviewed-by: Manivannan Sadhasivam <[email protected]>
Signed-off-by: Krishna chaitanya chundru <[email protected]>
---
arch/arm64/boot/dts/qcom/sm8450.dtsi | 12 ++++++++++++
1 file changed, 12 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
index b86be34a912b..615296e13c43 100644
--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
@@ -1807,6 +1807,12 @@ pcie0: pcie@1c00000 {
<0 0 0 3 &intc 0 0 0 151 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
<0 0 0 4 &intc 0 0 0 152 IRQ_TYPE_LEVEL_HIGH>; /* int_d */

+ interconnects = <&pcie_noc MASTER_PCIE_0 QCOM_ICC_TAG_ALWAYS
+ &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
+ <&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
+ &config_noc SLAVE_PCIE_0 QCOM_ICC_TAG_ALWAYS>;
+ interconnect-names = "pcie-mem", "cpu-pcie";
+
clocks = <&gcc GCC_PCIE_0_PIPE_CLK>,
<&gcc GCC_PCIE_0_PIPE_CLK_SRC>,
<&pcie0_phy>,
@@ -1930,6 +1936,12 @@ pcie1: pcie@1c08000 {
<0 0 0 3 &intc 0 0 0 438 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
<0 0 0 4 &intc 0 0 0 439 IRQ_TYPE_LEVEL_HIGH>; /* int_d */

+ interconnects = <&pcie_noc MASTER_PCIE_1 QCOM_ICC_TAG_ALWAYS
+ &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
+ <&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
+ &config_noc SLAVE_PCIE_1 QCOM_ICC_TAG_ALWAYS>;
+ interconnect-names = "pcie-mem", "cpu-pcie";
+
clocks = <&gcc GCC_PCIE_1_PIPE_CLK>,
<&gcc GCC_PCIE_1_PIPE_CLK_SRC>,
<&pcie1_phy>,

--
2.42.0


Subject: [PATCH v10 4/6] arm64: dts: qcom: sm8450: Add OPP table support to PCIe

The PCIe host controller driver needs to choose the appropriate performance
state of the RPMh power domain and the interconnect bandwidth based on the
PCIe data rate.

Hence, add the OPP table to specify the RPMh performance states and the
interconnect peak bandwidth.

It should be noted that different link configurations may share the same
aggregate bandwidth, e.g., a 2.5 GT/s x2 link and a 5.0 GT/s x1 link have
the same bandwidth and share the same OPP entry.
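
The opp-peak-kBps values follow from the per-lane rate and the encoding
overhead (a rough derivation; values are rounded):

    GEN 1 x1: 2.5 GT/s * 8/10 (8b/10b)       =  2.0 Gbps ~  250000 kBps
    GEN 2 x2: 5.0 GT/s * 8/10 * 2 lanes      =  8.0 Gbps ~ 1000000 kBps
    GEN 3 x1: 8.0 GT/s * 128/130 (128b/130b) = ~7.9 Gbps ~  984500 kBps
    GEN 4 x2: 16 GT/s * 128/130 * 2 lanes    = ~31.5 Gbps ~ 3938000 kBps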

Signed-off-by: Krishna chaitanya chundru <[email protected]>
---
arch/arm64/boot/dts/qcom/sm8450.dtsi | 77 ++++++++++++++++++++++++++++++++++++
1 file changed, 77 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
index 615296e13c43..9dfe16012726 100644
--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
@@ -1855,7 +1855,35 @@ &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
pinctrl-names = "default";
pinctrl-0 = <&pcie0_default_state>;

+ operating-points-v2 = <&pcie0_opp_table>;
+
status = "disabled";
+
+ pcie0_opp_table: opp-table {
+ compatible = "operating-points-v2";
+
+ /* GEN 1 x1 */
+ opp-2500000 {
+ opp-hz = /bits/ 64 <2500000>;
+ required-opps = <&rpmhpd_opp_low_svs>;
+ opp-peak-kBps = <250000 1>;
+ };
+
+ /* GEN 2 x1 */
+ opp-5000000 {
+ opp-hz = /bits/ 64 <5000000>;
+ required-opps = <&rpmhpd_opp_low_svs>;
+ opp-peak-kBps = <500000 1>;
+ };
+
+ /* GEN 3 x1 */
+ opp-8000000 {
+ opp-hz = /bits/ 64 <8000000>;
+ required-opps = <&rpmhpd_opp_nom>;
+ opp-peak-kBps = <984500 1>;
+ };
+ };
+
};

pcie0_phy: phy@1c06000 {
@@ -1982,7 +2010,56 @@ &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
pinctrl-names = "default";
pinctrl-0 = <&pcie1_default_state>;

+ operating-points-v2 = <&pcie1_opp_table>;
+
status = "disabled";
+
+ pcie1_opp_table: opp-table {
+ compatible = "operating-points-v2";
+
+ /* GEN 1 x1 */
+ opp-2500000 {
+ opp-hz = /bits/ 64 <2500000>;
+ required-opps = <&rpmhpd_opp_low_svs>;
+ opp-peak-kBps = <250000 1>;
+ };
+
+ /* GEN 1 x2 and GEN 2 x1 */
+ opp-5000000 {
+ opp-hz = /bits/ 64 <5000000>;
+ required-opps = <&rpmhpd_opp_low_svs>;
+ opp-peak-kBps = <500000 1>;
+ };
+
+ /* GEN 2 x2 */
+ opp-10000000 {
+ opp-hz = /bits/ 64 <10000000>;
+ required-opps = <&rpmhpd_opp_low_svs>;
+ opp-peak-kBps = <1000000 1>;
+ };
+
+ /* GEN 3 x1 */
+ opp-8000000 {
+ opp-hz = /bits/ 64 <8000000>;
+ required-opps = <&rpmhpd_opp_nom>;
+ opp-peak-kBps = <984500 1>;
+ };
+
+ /* GEN 3 x2 and GEN 4 x1 */
+ opp-16000000 {
+ opp-hz = /bits/ 64 <16000000>;
+ required-opps = <&rpmhpd_opp_nom>;
+ opp-peak-kBps = <1969000 1>;
+ };
+
+ /* GEN 4 x2 */
+ opp-32000000 {
+ opp-hz = /bits/ 64 <32000000>;
+ required-opps = <&rpmhpd_opp_nom>;
+ opp-peak-kBps = <3938000 1>;
+ };
+ };
+
};

pcie1_phy: phy@1c0e000 {

--
2.42.0


Subject: [PATCH v10 5/6] PCI: Bring the PCIe speed to MBps logic to new pcie_link_speed_to_mbps()

Move the switch case in pcie_link_speed_mbps() to a new function in the
header file so that it can be used in other places, such as controller
drivers.
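
A minimal usage sketch (hypothetical caller; 'lnksta' is a Link Status
register value, and to_pcie_link_speed() is the existing helper in
drivers/pci/pci.h):

    int mbps = pcie_link_speed_to_mbps(to_pcie_link_speed(lnksta));

    if (mbps < 0)
        return mbps; /* -EINVAL for an unknown/unsupported speed */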

Suggested-by: Bjorn Helgaas <[email protected]>
Signed-off-by: Krishna chaitanya chundru <[email protected]>
Reviewed-by: Manivannan Sadhasivam <[email protected]>
---
drivers/pci/pci.c | 19 +------------------
drivers/pci/pci.h | 22 ++++++++++++++++++++++
2 files changed, 23 insertions(+), 18 deletions(-)

diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index e5f243dd4288..40487b86a75e 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -5922,24 +5922,7 @@ int pcie_link_speed_mbps(struct pci_dev *pdev)
if (err)
return err;

- switch (to_pcie_link_speed(lnksta)) {
- case PCIE_SPEED_2_5GT:
- return 2500;
- case PCIE_SPEED_5_0GT:
- return 5000;
- case PCIE_SPEED_8_0GT:
- return 8000;
- case PCIE_SPEED_16_0GT:
- return 16000;
- case PCIE_SPEED_32_0GT:
- return 32000;
- case PCIE_SPEED_64_0GT:
- return 64000;
- default:
- break;
- }
-
- return -EINVAL;
+ return pcie_link_speed_to_mbps(to_pcie_link_speed(lnksta));
}
EXPORT_SYMBOL(pcie_link_speed_mbps);

diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
index 17fed1846847..4de10087523e 100644
--- a/drivers/pci/pci.h
+++ b/drivers/pci/pci.h
@@ -290,6 +290,28 @@ void pci_bus_put(struct pci_bus *bus);
(speed) == PCIE_SPEED_2_5GT ? 2500*8/10 : \
0)

+static inline int pcie_link_speed_to_mbps(enum pci_bus_speed speed)
+{
+ switch (speed) {
+ case PCIE_SPEED_2_5GT:
+ return 2500;
+ case PCIE_SPEED_5_0GT:
+ return 5000;
+ case PCIE_SPEED_8_0GT:
+ return 8000;
+ case PCIE_SPEED_16_0GT:
+ return 16000;
+ case PCIE_SPEED_32_0GT:
+ return 32000;
+ case PCIE_SPEED_64_0GT:
+ return 64000;
+ default:
+ break;
+ }
+
+ return -EINVAL;
+}
+
const char *pci_speed_string(enum pci_bus_speed speed);
enum pci_bus_speed pcie_get_speed_cap(struct pci_dev *dev);
enum pcie_link_width pcie_get_width_cap(struct pci_dev *dev);

--
2.42.0


Subject: [PATCH v10 2/6] PCI: qcom: Add ICC bandwidth vote for CPU to PCIe path

To access the host controller registers and the endpoint BAR/config space,
the CPU-PCIe ICC (interconnect) path must be voted for; otherwise the access
may lead to an NoC (Network on Chip) timeout. We are currently surviving
only because another driver votes for this path.

As there is less traffic on this path compared to the PCIe-to-MEM path, add
a minimum vote of 1 KBps bandwidth, which is sufficient to keep the path
active and is the value recommended by the HW team.

During suspend-to-RAM, DBI access can happen very late (while disabling the
boot CPU), so do not disable the CPU-PCIe ICC path in that case; otherwise,
disable it once register space access is done.

Reviewed-by: Bryan O'Donoghue <[email protected]>
Signed-off-by: Krishna chaitanya chundru <[email protected]>
---
drivers/pci/controller/dwc/pcie-qcom.c | 43 ++++++++++++++++++++++++++++++----
1 file changed, 39 insertions(+), 4 deletions(-)

diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
index 14772edcf0d3..e53422171c01 100644
--- a/drivers/pci/controller/dwc/pcie-qcom.c
+++ b/drivers/pci/controller/dwc/pcie-qcom.c
@@ -245,6 +245,7 @@ struct qcom_pcie {
struct phy *phy;
struct gpio_desc *reset;
struct icc_path *icc_mem;
+ struct icc_path *icc_cpu;
const struct qcom_pcie_cfg *cfg;
struct dentry *debugfs;
bool suspended;
@@ -1409,6 +1410,9 @@ static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
if (IS_ERR(pcie->icc_mem))
return PTR_ERR(pcie->icc_mem);

+ pcie->icc_cpu = devm_of_icc_get(pci->dev, "cpu-pcie");
+ if (IS_ERR(pcie->icc_cpu))
+ return PTR_ERR(pcie->icc_cpu);
/*
* Some Qualcomm platforms require interconnect bandwidth constraints
* to be set before enabling interconnect clocks.
@@ -1418,7 +1422,20 @@ static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
*/
ret = icc_set_bw(pcie->icc_mem, 0, QCOM_PCIE_LINK_SPEED_TO_BW(1));
if (ret) {
- dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n",
+ dev_err(pci->dev, "Failed to set interconnect bandwidth for PCIe-MEM: %d\n",
+ ret);
+ return ret;
+ }
+
+ /*
+ * Since the CPU-PCIe path is only used for activities like register
+ * access of the host controller and endpoint Config/BAR space access,
+ * HW team has recommended to use a minimal bandwidth of 1KBps just to
+ * keep the path active.
+ */
+ ret = icc_set_bw(pcie->icc_cpu, 0, kBps_to_icc(1));
+ if (ret) {
+ dev_err(pci->dev, "Failed to set interconnect bandwidth for CPU-PCIe: %d\n",
ret);
return ret;
}
@@ -1448,7 +1465,7 @@ static void qcom_pcie_icc_update(struct qcom_pcie *pcie)

ret = icc_set_bw(pcie->icc_mem, 0, width * QCOM_PCIE_LINK_SPEED_TO_BW(speed));
if (ret) {
- dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n",
+ dev_err(pci->dev, "Failed to set interconnect bandwidth for PCIe-MEM: %d\n",
ret);
}
}
@@ -1610,7 +1627,7 @@ static int qcom_pcie_suspend_noirq(struct device *dev)
*/
ret = icc_set_bw(pcie->icc_mem, 0, kBps_to_icc(1));
if (ret) {
- dev_err(dev, "Failed to set interconnect bandwidth: %d\n", ret);
+ dev_err(dev, "Failed to set interconnect bandwidth for PCIe-MEM: %d\n", ret);
return ret;
}

@@ -1634,7 +1651,17 @@ static int qcom_pcie_suspend_noirq(struct device *dev)
pcie->suspended = true;
}

- return 0;
+ /*
+ * Only disable the CPU-PCIe interconnect path if the suspend is non-S2RAM.
+ * On some platforms DBI access can happen very late during S2RAM and a
+ * non-active CPU-PCIe interconnect path may lead to an NoC error.
+ */
+ if (pm_suspend_target_state != PM_SUSPEND_MEM) {
+ ret = icc_disable(pcie->icc_cpu);
+ if (ret)
+ dev_err(dev, "Failed to disable Interconnect path of CPU-PCIe: %d\n", ret);
+ }
+ return ret;
}

static int qcom_pcie_resume_noirq(struct device *dev)
@@ -1642,6 +1669,14 @@ static int qcom_pcie_resume_noirq(struct device *dev)
struct qcom_pcie *pcie = dev_get_drvdata(dev);
int ret;

+ if (pm_suspend_target_state != PM_SUSPEND_MEM) {
+ ret = icc_enable(pcie->icc_cpu);
+ if (ret) {
+ dev_err(dev, "Failed to enable Interconnect path of CPU-PCIe: %d\n", ret);
+ return ret;
+ }
+ }
+
if (pcie->suspended) {
ret = qcom_pcie_host_init(&pcie->pci->pp);
if (ret)

--
2.42.0


Subject: [PATCH v10 3/6] dt-bindings: pci: qcom: Add OPP table

The PCIe host controller driver needs to choose the appropriate performance
state of the RPMh power domain based on the PCIe GEN speed.

Adding an Operating Performance Points (OPP) table allows adjusting the
power domain performance state and the ICC peak bandwidth, depending on the
PCIe data rate and link width.

Reviewed-by: Krzysztof Kozlowski <[email protected]>
Reviewed-by: Manivannan Sadhasivam <[email protected]>
Signed-off-by: Krishna chaitanya chundru <[email protected]>
---
Documentation/devicetree/bindings/pci/qcom,pcie-sm8450.yaml | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/Documentation/devicetree/bindings/pci/qcom,pcie-sm8450.yaml b/Documentation/devicetree/bindings/pci/qcom,pcie-sm8450.yaml
index 1496d6993ab4..d8c0afaa4b19 100644
--- a/Documentation/devicetree/bindings/pci/qcom,pcie-sm8450.yaml
+++ b/Documentation/devicetree/bindings/pci/qcom,pcie-sm8450.yaml
@@ -69,6 +69,10 @@ properties:
- const: msi6
- const: msi7

+ operating-points-v2: true
+ opp-table:
+ type: object
+
resets:
maxItems: 1


--
2.42.0


Subject: [PATCH v10 6/6] PCI: qcom: Add OPP support to scale performance state of power domain

QCOM Resource Power Manager-hardened (RPMh) is a hardware block which
maintains the hardware state of a regulator by performing max aggregation of
the requests made by all of its clients.

The PCIe controller can operate at different RPMh performance states of the
power domain, based on the speed of the link. This performance state varies
from target to target: some controllers support GEN 3 at the NOM (nominal)
voltage corner, while others support GEN 3 at the low SVS (static voltage
scaling) corner.

The SoC can be more power efficient if we scale the performance state based
on the aggregate PCIe link bandwidth.

Add Operating Performance Points (OPP) support to vote for the RPMh state
based on the aggregate link bandwidth.

OPP can also handle ICC bandwidth voting, so move the ICC bandwidth voting
into the OPP framework if OPP entries are present.

Since the ICC voting is now handled through OPP, don't initialize ICC if OPP
is supported.

Before the PCIe link is initialized, vote for the highest OPP in the OPP
table, so that the maximum voltage corner is requested and the link can come
up at the maximum supported speed.
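
After link up, the update path boils down to the following sketch (names as
used in this patch; error handling trimmed):

    speed = FIELD_GET(PCI_EXP_LNKSTA_CLS, status);
    width = FIELD_GET(PCI_EXP_LNKSTA_NLW, status);

    mbps = pcie_link_speed_to_mbps(pcie_link_speed[speed]);
    freq = mbps * 1000; /* OPP "Hz" encodes the per-lane rate */
    opp = dev_pm_opp_find_freq_exact(pci->dev, freq * width, true);
    if (!IS_ERR(opp)) {
        /* sets both the RPMh performance state and the ICC bandwidth */
        dev_pm_opp_set_opp(pci->dev, opp);
        dev_pm_opp_put(opp);
    }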

Signed-off-by: Krishna chaitanya chundru <[email protected]>
---
drivers/pci/controller/dwc/pcie-qcom.c | 81 ++++++++++++++++++++++++++++------
1 file changed, 67 insertions(+), 14 deletions(-)

diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
index e53422171c01..ad4f456619cb 100644
--- a/drivers/pci/controller/dwc/pcie-qcom.c
+++ b/drivers/pci/controller/dwc/pcie-qcom.c
@@ -22,6 +22,7 @@
#include <linux/of.h>
#include <linux/of_gpio.h>
#include <linux/pci.h>
+#include <linux/pm_opp.h>
#include <linux/pm_runtime.h>
#include <linux/platform_device.h>
#include <linux/phy/pcie.h>
@@ -1443,15 +1444,13 @@ static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
return 0;
}

-static void qcom_pcie_icc_update(struct qcom_pcie *pcie)
+static void qcom_pcie_icc_opp_update(struct qcom_pcie *pcie)
{
struct dw_pcie *pci = pcie->pci;
- u32 offset, status;
+ u32 offset, status, freq;
+ struct dev_pm_opp *opp;
int speed, width;
- int ret;
-
- if (!pcie->icc_mem)
- return;
+ int ret, mbps;

offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
status = readw(pci->dbi_base + offset + PCI_EXP_LNKSTA);
@@ -1463,10 +1462,26 @@ static void qcom_pcie_icc_update(struct qcom_pcie *pcie)
speed = FIELD_GET(PCI_EXP_LNKSTA_CLS, status);
width = FIELD_GET(PCI_EXP_LNKSTA_NLW, status);

- ret = icc_set_bw(pcie->icc_mem, 0, width * QCOM_PCIE_LINK_SPEED_TO_BW(speed));
- if (ret) {
- dev_err(pci->dev, "Failed to set interconnect bandwidth for PCIe-MEM: %d\n",
- ret);
+ if (pcie->icc_mem) {
+ ret = icc_set_bw(pcie->icc_mem, 0, width * QCOM_PCIE_LINK_SPEED_TO_BW(speed));
+ if (ret) {
+ dev_err(pci->dev, "Failed to set interconnect bandwidth for PCIe-MEM: %d\n",
+ ret);
+ }
+ } else {
+ mbps = pcie_link_speed_to_mbps(pcie_link_speed[speed]);
+ if (mbps < 0)
+ return;
+
+ freq = mbps * 1000;
+ opp = dev_pm_opp_find_freq_exact(pci->dev, freq * width, true);
+ if (!IS_ERR(opp)) {
+ ret = dev_pm_opp_set_opp(pci->dev, opp);
+ if (ret)
+ dev_err(pci->dev, "Failed to set OPP for freq (%lu): %d\n",
+ dev_pm_opp_get_freq(opp), ret);
+ dev_pm_opp_put(opp);
+ }
}
}

@@ -1510,7 +1525,9 @@ static void qcom_pcie_init_debugfs(struct qcom_pcie *pcie)
static int qcom_pcie_probe(struct platform_device *pdev)
{
const struct qcom_pcie_cfg *pcie_cfg;
+ unsigned long max_freq = ULONG_MAX;
struct device *dev = &pdev->dev;
+ struct dev_pm_opp *opp;
struct qcom_pcie *pcie;
struct dw_pcie_rp *pp;
struct resource *res;
@@ -1578,9 +1595,42 @@ static int qcom_pcie_probe(struct platform_device *pdev)
goto err_pm_runtime_put;
}

- ret = qcom_pcie_icc_init(pcie);
- if (ret)
+ /* OPP table is optional */
+ ret = devm_pm_opp_of_add_table(dev);
+ if (ret && ret != -ENODEV) {
+ dev_err_probe(dev, ret, "Failed to add OPP table\n");
goto err_pm_runtime_put;
+ }
+
+ /*
+ * Before the PCIe link is initialized, vote for the highest OPP in the
+ * OPP table, so that the maximum voltage corner is chosen for the link
+ * to come up at the maximum supported speed. At the end of probe(),
+ * the OPP will be updated using qcom_pcie_icc_opp_update().
+ */
+ if (!ret) {
+ opp = dev_pm_opp_find_freq_floor(dev, &max_freq);
+ if (IS_ERR(opp)) {
+ ret = dev_err_probe(pci->dev, PTR_ERR(opp),
+ "Unable to find max freq OPP\n");
+ goto err_pm_runtime_put;
+ } else {
+ ret = dev_pm_opp_set_opp(dev, opp);
+ }
+
+ dev_pm_opp_put(opp);
+ if (ret) {
+ dev_err_probe(pci->dev, ret,
+ "Failed to set OPP for freq (%lu)\n",
+ max_freq);
+ goto err_pm_runtime_put;
+ }
+ } else {
+ /* Skip ICC init if OPP is supported as it is handled by OPP */
+ ret = qcom_pcie_icc_init(pcie);
+ if (ret)
+ goto err_pm_runtime_put;
+ }

ret = pcie->cfg->ops->get_resources(pcie);
if (ret)
@@ -1600,7 +1650,7 @@ static int qcom_pcie_probe(struct platform_device *pdev)
goto err_phy_exit;
}

- qcom_pcie_icc_update(pcie);
+ qcom_pcie_icc_opp_update(pcie);

if (pcie->mhi)
qcom_pcie_init_debugfs(pcie);
@@ -1660,6 +1710,9 @@ static int qcom_pcie_suspend_noirq(struct device *dev)
ret = icc_disable(pcie->icc_cpu);
if (ret)
dev_err(dev, "Failed to disable Interconnect path of CPU-PCIe: %d\n", ret);
+
+ if (!pcie->icc_mem)
+ dev_pm_opp_set_opp(pcie->pci->dev, NULL);
}
return ret;
}
@@ -1685,7 +1738,7 @@ static int qcom_pcie_resume_noirq(struct device *dev)
pcie->suspended = false;
}

- qcom_pcie_icc_update(pcie);
+ qcom_pcie_icc_opp_update(pcie);

return 0;
}

--
2.42.0


From: Bjorn Helgaas
Date: 2024-04-09 15:58:50
Subject: Re: [PATCH v10 5/6] PCI: Bring the PCIe speed to MBps logic to new pcie_link_speed_to_mbps()

On Tue, Apr 09, 2024 at 03:43:23PM +0530, Krishna chaitanya chundru wrote:
> Bring the switch case in pcie_link_speed_mbps() to new function to
> the header file so that it can be used in other places like
> in controller driver.
>
> Suggested-by: Bjorn Helgaas <[email protected]>

Unnecessary. Not every code review comment needs to be acknowledged
in the commit log :)

> Signed-off-by: Krishna chaitanya chundru <[email protected]>
> Reviewed-by: Manivannan Sadhasivam <[email protected]>

Acked-by: Bjorn Helgaas <[email protected]>

> [...]

From: Manivannan Sadhasivam
Date: 2024-04-22 14:42:48
Subject: Re: [PATCH v10 2/6] PCI: qcom: Add ICC bandwidth vote for CPU to PCIe path

On Tue, Apr 09, 2024 at 03:43:20PM +0530, Krishna chaitanya chundru wrote:
> To access PCIe registers of the host controller and endpoint PCIe
> BAR space, config space the CPU-PCIe ICC (interconnect) path should

'To access the host controller registers and endpoint BAR/Config space,'

> be voted otherwise it may lead to NoC (Network on chip) timeout.
> We are surviving because of other driver voting for this path.
>
> As there is less access on this path compared to PCIe to mem path
> add minimum vote i.e 1KBps bandwidth always which is sufficient enough
> to keep the path active and is recommended by HW team.
>
> In suspend to ram case there can be some DBI access. Except in suspend
> to ram case disable CPU-PCIe ICC path after register space access
> is done.
>

During S2RAM (Suspend-to-RAM), DBI access can happen very late (while disabling
the boot CPU). So do not disable the CPU-PCIe interconnect path during S2RAM as
that may lead to NoC error.

> Reviewed-by: Bryan O'Donoghue <[email protected]>
> Signed-off-by: Krishna chaitanya chundru <[email protected]>
> ---
> drivers/pci/controller/dwc/pcie-qcom.c | 43 ++++++++++++++++++++++++++++++----
> 1 file changed, 39 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
> index 14772edcf0d3..e53422171c01 100644
> --- a/drivers/pci/controller/dwc/pcie-qcom.c
> +++ b/drivers/pci/controller/dwc/pcie-qcom.c
> @@ -245,6 +245,7 @@ struct qcom_pcie {
> struct phy *phy;
> struct gpio_desc *reset;
> struct icc_path *icc_mem;
> + struct icc_path *icc_cpu;
> const struct qcom_pcie_cfg *cfg;
> struct dentry *debugfs;
> bool suspended;
> @@ -1409,6 +1410,9 @@ static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
> if (IS_ERR(pcie->icc_mem))
> return PTR_ERR(pcie->icc_mem);
>
> + pcie->icc_cpu = devm_of_icc_get(pci->dev, "cpu-pcie");
> + if (IS_ERR(pcie->icc_cpu))
> + return PTR_ERR(pcie->icc_cpu);
> /*
> * Some Qualcomm platforms require interconnect bandwidth constraints
> * to be set before enabling interconnect clocks.
> @@ -1418,7 +1422,20 @@ static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
> */
> ret = icc_set_bw(pcie->icc_mem, 0, QCOM_PCIE_LINK_SPEED_TO_BW(1));
> if (ret) {
> - dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n",
> + dev_err(pci->dev, "Failed to set interconnect bandwidth for PCIe-MEM: %d\n",

'Failed to set bandwidth for PCIe-MEM interconnect path: %d\n'

> + ret);
> + return ret;
> + }
> +
> + /*
> + * Since the CPU-PCIe path is only used for activities like register
> + * access of the host controller and endpoint Config/BAR space access,
> + * HW team has recommended to use a minimal bandwidth of 1KBps just to

Single space after 'a'

> + * keep the path active.
> + */
> + ret = icc_set_bw(pcie->icc_cpu, 0, kBps_to_icc(1));
> + if (ret) {
> + dev_err(pci->dev, "Failed to set interconnect bandwidth for CPU-PCIe: %d\n",

'Failed to set bandwidth for CPU-PCIe interconnect path: %d\n'

> ret);
> return ret;
> }
> @@ -1448,7 +1465,7 @@ static void qcom_pcie_icc_update(struct qcom_pcie *pcie)
>
> ret = icc_set_bw(pcie->icc_mem, 0, width * QCOM_PCIE_LINK_SPEED_TO_BW(speed));
> if (ret) {
> - dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n",
> + dev_err(pci->dev, "Failed to set interconnect bandwidth for PCIe-MEM: %d\n",

'Failed to set bandwidth for PCIe-MEM interconnect path: %d\n'

> ret);
> }
> }
> @@ -1610,7 +1627,7 @@ static int qcom_pcie_suspend_noirq(struct device *dev)
> */
> ret = icc_set_bw(pcie->icc_mem, 0, kBps_to_icc(1));
> if (ret) {
> - dev_err(dev, "Failed to set interconnect bandwidth: %d\n", ret);
> + dev_err(dev, "Failed to set interconnect bandwidth for PCIe-MEM: %d\n", ret);

'Failed to set bandwidth for PCIe-MEM interconnect path: %d\n'

> return ret;
> }
>
> @@ -1634,7 +1651,17 @@ static int qcom_pcie_suspend_noirq(struct device *dev)
> pcie->suspended = true;
> }
>
> - return 0;
> + /*
> + * In suspend to ram case there are DBI access, except in suspend to ram case
> + * remove the vote for CPU-PCIe path now, since at this point onwards,
> + * no register access will be done.
> + */

/*
* Only disable CPU-PCIe interconnect path if the suspend is non-S2RAM.
* Because on some platforms, DBI access can happen very late during the
* S2RAM and a non-active CPU-PCIe interconnect path may lead to NoC
* error.
*/

> + if (pm_suspend_target_state != PM_SUSPEND_MEM) {
> + ret = icc_disable(pcie->icc_cpu);
> + if (ret)
> + dev_err(dev, "Failed to disable Interconnect path of CPU-PCIe: %d\n", ret);

'Failed to disable CPU-PCIe interconnect path: %d\n'

> + }
> + return ret;
> }
>
> static int qcom_pcie_resume_noirq(struct device *dev)
> @@ -1642,6 +1669,14 @@ static int qcom_pcie_resume_noirq(struct device *dev)
> struct qcom_pcie *pcie = dev_get_drvdata(dev);
> int ret;
>
> + if (pm_suspend_target_state != PM_SUSPEND_MEM) {
> + ret = icc_enable(pcie->icc_cpu);
> + if (ret) {
> + dev_err(dev, "Failed to enable Interconnect path of CPU-PCIe: %d\n", ret);

'Failed to enable CPU-PCIe interconnect path: %d\n'

- Mani

--
மணிவண்ணன் சதாசிவம்

From: Manivannan Sadhasivam
Date: 2024-04-22 14:45:28
Subject: Re: [PATCH v10 4/6] arm64: dts: qcom: sm8450: Add OPP table support to PCIe

On Tue, Apr 09, 2024 at 03:43:22PM +0530, Krishna chaitanya chundru wrote:
> PCIe needs to choose the appropriate performance state of RPMh power

'PCIe host controller driver'

> domain and interconnect bandwidth based up on the PCIe data rate.

'based on the PCIe data rate'

>
> Add the OPP table support to specify RPMh performance states and

'Hence, add...'

> interconnect peak bandwidth.
>
> Different link configurations may share the same aggregate bandwidth,

'It should be noted that the different...'

> e.g., a 2.5 GT/s x2 link and a 5.0 GT/s x1 link have the same bandwidth
> and share the same OPP entry.
>
> Signed-off-by: Krishna chaitanya chundru <[email protected]>
> ---
> arch/arm64/boot/dts/qcom/sm8450.dtsi | 77 ++++++++++++++++++++++++++++++++++++
> 1 file changed, 77 insertions(+)
>
> diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
> index 615296e13c43..9dfe16012726 100644
> --- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
> +++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
> @@ -1855,7 +1855,35 @@ &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
> pinctrl-names = "default";
> pinctrl-0 = <&pcie0_default_state>;
>
> + operating-points-v2 = <&pcie0_opp_table>;
> +
> status = "disabled";
> +
> + pcie0_opp_table: opp-table {
> + compatible = "operating-points-v2";
> +
> + /* GEN 1 x1 */
> + opp-2500000 {
> + opp-hz = /bits/ 64 <2500000>;
> + required-opps = <&rpmhpd_opp_low_svs>;
> + opp-peak-kBps = <250000 1>;
> + };
> +
> + /* GEN 2 x1 */
> + opp-5000000 {
> + opp-hz = /bits/ 64 <5000000>;
> + required-opps = <&rpmhpd_opp_low_svs>;
> + opp-peak-kBps = <500000 1>;
> + };
> +
> + /* GEN 3 x1 */
> + opp-8000000 {
> + opp-hz = /bits/ 64 <8000000>;

I doubt this value. See below...

> + required-opps = <&rpmhpd_opp_nom>;
> + opp-peak-kBps = <984500 1>;
> + };
> + };
> +
> };
>
> pcie0_phy: phy@1c06000 {
> @@ -1982,7 +2010,56 @@ &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
> pinctrl-names = "default";
> pinctrl-0 = <&pcie1_default_state>;
>
> + operating-points-v2 = <&pcie1_opp_table>;
> +
> status = "disabled";
> +
> + pcie1_opp_table: opp-table {
> + compatible = "operating-points-v2";
> +
> + /* GEN 1 x1 */
> + opp-2500000 {
> + opp-hz = /bits/ 64 <2500000>;
> + required-opps = <&rpmhpd_opp_low_svs>;
> + opp-peak-kBps = <250000 1>;
> + };
> +
> + /* GEN 1 x2 GEN 2 x1 */
> + opp-5000000 {
> + opp-hz = /bits/ 64 <5000000>;
> + required-opps = <&rpmhpd_opp_low_svs>;
> + opp-peak-kBps = <500000 1>;
> + };
> +
> + /* GEN 2 x2 */
> + opp-10000000 {
> + opp-hz = /bits/ 64 <10000000>;
> + required-opps = <&rpmhpd_opp_low_svs>;
> + opp-peak-kBps = <1000000 1>;
> + };
> +
> + /* GEN 3 x1 */
> + opp-8000000 {
> + opp-hz = /bits/ 64 <8000000>;

GEN 3 x1 frequency is lower than GEN 2 x2? This looks strange. Both should be of
same frequency.

> + required-opps = <&rpmhpd_opp_nom>;
> + opp-peak-kBps = <984500 1>;
> + };
> +
> + /* GEN 3 x2 GEN 4 x1 */

'GEN 3 x2 and GEN 4 x1'

- Mani

--
மணிவண்ணன் சதாசிவம்

From: Krishna Chaitanya Chundru
Subject: Re: [PATCH v10 4/6] arm64: dts: qcom: sm8450: Add OPP table support to PCIe

On 4/22/2024 8:14 PM, Manivannan Sadhasivam wrote:
> On Tue, Apr 09, 2024 at 03:43:22PM +0530, Krishna chaitanya chundru wrote:
> [...]
>> + /* GEN 2 x2 */
>> + opp-10000000 {
>> + opp-hz = /bits/ 64 <10000000>;
>> + required-opps = <&rpmhpd_opp_low_svs>;
>> + opp-peak-kBps = <1000000 1>;
>> + };
>> +
>> + /* GEN 3 x1 */
>> + opp-8000000 {
>> + opp-hz = /bits/ 64 <8000000>;
>
> GEN 3 x1 frequency is lower than GEN 2 x2? This looks strange. Both should be of
> same frequency.
>
GEN 2 is 5 GT/s whereas GEN 3 is 8 GT/s, so the frequency for GEN 3 x1
(8 GT/s x1) is less than that of GEN 2 x2 (5 GT/s x2).

- Krishna Chaitanya.
> [...]

From: Manivannan Sadhasivam
Date: 2024-04-22 17:09:44
Subject: Re: [PATCH v10 4/6] arm64: dts: qcom: sm8450: Add OPP table support to PCIe

On Mon, Apr 22, 2024 at 10:25:06PM +0530, Krishna Chaitanya Chundru wrote:
>
> On 4/22/2024 8:14 PM, Manivannan Sadhasivam wrote:
> > On Tue, Apr 09, 2024 at 03:43:22PM +0530, Krishna chaitanya chundru wrote:
> > > [...]
> > > + /* GEN 3 x1 */
> > > + opp-8000000 {
> > > + opp-hz = /bits/ 64 <8000000>;
> >
> > GEN 3 x1 frequency is lower than GEN 2 x2? This looks strange. Both should be of
> > same frequency.
> >
> GEN 2 is 5 GT/s whereas GEN 3 is 8 GT/s, so the frequency for GEN 3 x1
> (8 GT/s x1) is less than that of GEN 2 x2 (5 GT/s x2).
>

Sorry, that's my bad. I missed the fact that the spec doubled the data rate
starting from GEN 3 only.

- Mani

--
மணிவண்ணன் சதாசிவம்