Subject: [PATCH v8 0/7] PCI: qcom: Add support for OPP

This series adds OPP support to vote for the performance state of the RPMh
power domain based on the speed at which the PCIe link got enumerated.

QCOM Resource Power Manager-hardened (RPMh) is a hardware block which
maintains the hardware state of a regulator by performing max aggregation of
the requests made by all of the processors.

The PCIe controller can operate at different RPMh performance states of the
power domain based on the speed of the link, and this performance state
varies from target to target.

The performance state must be scaled with the speed at which the PCIe link
operates so that the SoC can run under optimum power conditions.

Add Operating Performance Points (OPP) support to vote for the RPMh state
based on the GEN speed at which the link is operating.

Before link up, the PCIe driver votes for the maximum performance state.

Now that an ICC BW vote is being added to the OPP table, and the ICC BW
depends on both the GEN speed and the link width, using opp-level to index
the OPP entries becomes difficult.

Certain link configurations, like GEN1x2 & GEN2x1 or GEN3x2 & GEN4x1, use
the same ICC BW, so if frequency is used in the OPP table to represent the
PCIe GEN speed, the number of OPP entries can be reduced.

So this series goes back to using frequency in the OPP table instead of
level.
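
To illustrate (a rough sketch, not the exact driver code), the OPP entry is
looked up with a frequency key derived from the link speed and width, so
configurations with the same aggregate bandwidth collapse into one entry:

	/* Sketch: derive the OPP frequency key from the LNKSTA fields */
	mbps = pcie_link_speed_to_mbps(pcie_link_speed[speed]);
	freq = (unsigned long)mbps * 1000 * width;	/* GEN1x2 == GEN2x1 */
	opp = dev_pm_opp_find_freq_exact(pci->dev, freq, true);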

Signed-off-by: Krishna chaitanya chundru <[email protected]>
---
Changes from v7:
- Fix the compilation issue in patch 3.
- Change the commit text and wrap the comments to 80 columns, as suggested
  by Bjorn.
- Remove the PCIE_MBS2FREQ macro, as it is used only by the qcom driver.
- Link to v7: https://lore.kernel.org/r/[email protected]
Changes from v6:
- Change the CPU-PCIe bandwidth to 1 KBps, as suggested by the HW team.
- Create a new API to get the frequency based on the PCIe speed, as
  suggested by Mani.
- Update a few commit texts and comments.
- Set the OPP to NULL in suspend to remove any votes.
- Link for v6: https://lore.kernel.org/linux-arm-msm/[email protected]/
Changes from v5:
- Add ICC BW voting as part of OPP and rebase onto the latest kernel; since
  only one of OPP or ICC BW voting will be supported, remove the patch that
  returned an error from the ICC OPP update.
- As ICC BW voting was added to the OPP table, the Reviewed-by tags given
  on the previous version are not carried over.
- Use the OPP frequency to find OPP entries, as the PCIe link width now
  also needs to be taken into consideration.
- Add CPU-PCIe BW voting, which was not present until now.
- Drop "PCI: qcom: Return error from 'qcom_pcie_icc_update'", as only one
  of OPP or ICC BW voting executes and there is no need to fail if the OPP
  or ICC update fails.
- Link for v5: https://lore.kernel.org/linux-arm-msm/20231101063323.GH2897@thinkpad/T/
Changes from v4:
- Added a separate patch for returning an error from qcom_pcie_icc_update()
  and moved the OPP update logic into the ICC update, using a bool variable
  to update the OPP.
- Addressed comments made by Pavan.
Changes from v3:
- Remove the OPP vote on suspend when the link is not up, and add debug
  prints as suggested by Pavan.
- Added the dev_pm_opp_find_level_floor() API to find the highest OPP to
  vote for.
Changes from v2:
- Use a level-based OPP search instead of a frequency-based one, as
  suggested by Dmitry Baryshkov.
Changes from v1:
- Addressed comments from Krzysztof Kozlowski.
- Added the rpmhpd_opp_xxx phandle as suggested by Pavan.
- Added the dev_pm_opp_set_opp() API call, which was missed in the previous
  patch.

---
Krishna chaitanya chundru (7):
dt-bindings: PCI: qcom: Add interconnects path as required property
arm64: dts: qcom: sm8450: Add interconnect path to PCIe node
PCI: qcom: Add ICC bandwidth vote for CPU to PCIe path
dt-bindings: pci: qcom: Add opp table
arm64: dts: qcom: sm8450: Add opp table support to PCIe
PCI: Bring the PCIe speed to MBps logic to new pcie_link_speed_to_mbps()
PCI: qcom: Add OPP support to scale performance state of power domain

.../devicetree/bindings/pci/qcom,pcie.yaml | 6 ++
arch/arm64/boot/dts/qcom/sm8450.dtsi | 82 +++++++++++++++
drivers/pci/controller/dwc/pcie-qcom.c | 117 ++++++++++++++++++---
drivers/pci/pci.c | 19 +---
drivers/pci/pci.h | 22 ++++
5 files changed, 212 insertions(+), 34 deletions(-)
---
base-commit: 6613476e225e090cc9aad49be7fa504e290dd33d
change-id: 20240222-opp_support-19a0c53be1f4

Best regards,
--
Krishna chaitanya chundru <[email protected]>



Subject: [PATCH v8 2/7] arm64: dts: qcom: sm8450: Add interconnect path to PCIe node

Add the pcie-mem & cpu-pcie interconnect paths to the PCIe nodes.

Reviewed-by: Manivannan Sadhasivam <[email protected]>
Signed-off-by: Krishna chaitanya chundru <[email protected]>
---
arch/arm64/boot/dts/qcom/sm8450.dtsi | 8 ++++++++
1 file changed, 8 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
index 01e4dfc4babd..6b1d2e0d9d14 100644
--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
@@ -1781,6 +1781,10 @@ pcie0: pcie@1c00000 {
<0 0 0 3 &intc 0 0 0 151 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
<0 0 0 4 &intc 0 0 0 152 IRQ_TYPE_LEVEL_HIGH>; /* int_d */

+ interconnects = <&pcie_noc MASTER_PCIE_0 0 &mc_virt SLAVE_EBI1 0>,
+ <&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_PCIE_0 0>;
+ interconnect-names = "pcie-mem", "cpu-pcie";
+
clocks = <&gcc GCC_PCIE_0_PIPE_CLK>,
<&gcc GCC_PCIE_0_PIPE_CLK_SRC>,
<&pcie0_phy>,
@@ -1890,6 +1894,10 @@ pcie1: pcie@1c08000 {
<0 0 0 3 &intc 0 0 0 438 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
<0 0 0 4 &intc 0 0 0 439 IRQ_TYPE_LEVEL_HIGH>; /* int_d */

+ interconnects = <&pcie_noc MASTER_PCIE_1 0 &mc_virt SLAVE_EBI1 0>,
+ <&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_PCIE_1 0>;
+ interconnect-names = "pcie-mem", "cpu-pcie";
+
clocks = <&gcc GCC_PCIE_1_PIPE_CLK>,
<&gcc GCC_PCIE_1_PIPE_CLK_SRC>,
<&pcie1_phy>,

--
2.42.0


Subject: [PATCH v8 3/7] PCI: qcom: Add ICC bandwidth vote for CPU to PCIe path

To access PCIe registers, PCIe BAR space, and config space, the CPU-PCIe
ICC (interconnect consumers) path should be voted for; otherwise it may
lead to an NoC (Network on Chip) timeout. We are currently surviving only
because other drivers vote for this path.

As there is less traffic on this path compared to the PCIe-to-mem path,
add a minimum vote, i.e. 1 KBps of bandwidth, at all times.

When suspending, disable this path after register space access
is done.
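
The ordering across suspend/resume matters (a minimal sketch of the intent;
the complete change is in the diff below): the CPU-PCIe vote is dropped
only after the last register access in suspend, and restored before any
register access in resume:

	/* qcom_pcie_suspend_noirq(): registers were last touched above */
	ret = icc_disable(pcie->icc_cpu);

	/* qcom_pcie_resume_noirq(): restore the vote before any access */
	ret = icc_enable(pcie->icc_cpu);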

Reviewed-by: Bryan O'Donoghue <[email protected]>
Signed-off-by: Krishna chaitanya chundru <[email protected]>
---
drivers/pci/controller/dwc/pcie-qcom.c | 38 ++++++++++++++++++++++++++++++++--
1 file changed, 36 insertions(+), 2 deletions(-)

diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
index 10f2d0bb86be..a0266bfe71f1 100644
--- a/drivers/pci/controller/dwc/pcie-qcom.c
+++ b/drivers/pci/controller/dwc/pcie-qcom.c
@@ -240,6 +240,7 @@ struct qcom_pcie {
struct phy *phy;
struct gpio_desc *reset;
struct icc_path *icc_mem;
+ struct icc_path *icc_cpu;
const struct qcom_pcie_cfg *cfg;
struct dentry *debugfs;
bool suspended;
@@ -1372,6 +1373,9 @@ static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
if (IS_ERR(pcie->icc_mem))
return PTR_ERR(pcie->icc_mem);

+ pcie->icc_cpu = devm_of_icc_get(pci->dev, "cpu-pcie");
+ if (IS_ERR(pcie->icc_cpu))
+ return PTR_ERR(pcie->icc_cpu);
/*
* Some Qualcomm platforms require interconnect bandwidth constraints
* to be set before enabling interconnect clocks.
@@ -1381,7 +1385,19 @@ static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
*/
ret = icc_set_bw(pcie->icc_mem, 0, QCOM_PCIE_LINK_SPEED_TO_BW(1));
if (ret) {
- dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n",
+ dev_err(pci->dev, "failed to set interconnect bandwidth for pcie-mem: %d\n",
+ ret);
+ return ret;
+ }
+
+ /*
+ * The config space, BAR space and registers goes through cpu-pcie path
+ * Set peak bandwidth to 1KBps as recommended by HW team for this path
+ * all the time.
+ */
+ ret = icc_set_bw(pcie->icc_cpu, 0, kBps_to_icc(1));
+ if (ret) {
+ dev_err(pci->dev, "failed to set interconnect bandwidth for cpu-pcie: %d\n",
ret);
return ret;
}
@@ -1573,7 +1589,7 @@ static int qcom_pcie_suspend_noirq(struct device *dev)
*/
ret = icc_set_bw(pcie->icc_mem, 0, kBps_to_icc(1));
if (ret) {
- dev_err(dev, "Failed to set interconnect bandwidth: %d\n", ret);
+ dev_err(dev, "Failed to set interconnect bandwidth for pcie-mem: %d\n", ret);
return ret;
}

@@ -1597,6 +1613,18 @@ static int qcom_pcie_suspend_noirq(struct device *dev)
pcie->suspended = true;
}

+ /* Remove CPU path vote after all the register access is done */
+ ret = icc_disable(pcie->icc_cpu);
+ if (ret) {
+ dev_err(dev, "failed to disable icc path of cpu-pcie: %d\n", ret);
+ if (pcie->suspended) {
+ qcom_pcie_host_init(&pcie->pci->pp);
+ pcie->suspended = false;
+ }
+ qcom_pcie_icc_update(pcie);
+ return ret;
+ }
+
return 0;
}

@@ -1605,6 +1633,12 @@ static int qcom_pcie_resume_noirq(struct device *dev)
struct qcom_pcie *pcie = dev_get_drvdata(dev);
int ret;

+ ret = icc_enable(pcie->icc_cpu);
+ if (ret) {
+ dev_err(dev, "failed to enable icc path of cpu-pcie: %d\n", ret);
+ return ret;
+ }
+
if (pcie->suspended) {
ret = qcom_pcie_host_init(&pcie->pci->pp);
if (ret)

--
2.42.0


Subject: [PATCH v8 4/7] dt-bindings: pci: qcom: Add opp table

The PCIe controller needs to choose the appropriate performance state of
the RPMh power domain based on the PCIe GEN speed.

Adding the Operating Performance Points table allows adjusting the power
domain performance state and the ICC peak bandwidth, depending on the PCIe
GEN speed and width.

Acked-by: Manivannan Sadhasivam <[email protected]>
Reviewed-by: Krzysztof Kozlowski <[email protected]>
Signed-off-by: Krishna chaitanya chundru <[email protected]>
---
Documentation/devicetree/bindings/pci/qcom,pcie.yaml | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/Documentation/devicetree/bindings/pci/qcom,pcie.yaml b/Documentation/devicetree/bindings/pci/qcom,pcie.yaml
index 5ad5c4cfd2a8..e1d75cabb1a9 100644
--- a/Documentation/devicetree/bindings/pci/qcom,pcie.yaml
+++ b/Documentation/devicetree/bindings/pci/qcom,pcie.yaml
@@ -127,6 +127,10 @@ properties:
description: GPIO controlled connection to WAKE# signal
maxItems: 1

+ operating-points-v2: true
+ opp-table:
+ type: object
+
required:
- compatible
- reg

--
2.42.0


Subject: [PATCH v8 5/7] arm64: dts: qcom: sm8450: Add opp table support to PCIe

The PCIe controller needs to choose the appropriate performance state of
the RPMh power domain and the interconnect bandwidth based on the PCIe GEN
speed.

Add the OPP table to specify the RPMh performance states and the
interconnect peak bandwidth.
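
As a sanity check on the numbers (our derivation, not part of the patch),
the first opp-peak-kBps cell is roughly the raw per-lane link rate minus
the line-encoding overhead, and the second cell is the constant 1 KBps
CPU-PCIe vote:

	GEN1 x1: 2.5 GT/s * 8/10 (8b/10b)       / 8 bits  = 250000 kBps
	GEN3 x1: 8.0 GT/s * 128/130 (128b/130b) / 8 bits ~= 984615 kBps
	         (the table rounds this down to 984500)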

Signed-off-by: Krishna chaitanya chundru <[email protected]>
---
arch/arm64/boot/dts/qcom/sm8450.dtsi | 74 ++++++++++++++++++++++++++++++++++++
1 file changed, 74 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
index 6b1d2e0d9d14..662f2129f20d 100644
--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
@@ -1827,7 +1827,32 @@ pcie0: pcie@1c00000 {
pinctrl-names = "default";
pinctrl-0 = <&pcie0_default_state>;

+ operating-points-v2 = <&pcie0_opp_table>;
+
status = "disabled";
+
+ pcie0_opp_table: opp-table {
+ compatible = "operating-points-v2";
+
+ opp-2500000 {
+ opp-hz = /bits/ 64 <2500000>;
+ required-opps = <&rpmhpd_opp_low_svs>;
+ opp-peak-kBps = <250000 1>;
+ };
+
+ opp-5000000 {
+ opp-hz = /bits/ 64 <5000000>;
+ required-opps = <&rpmhpd_opp_low_svs>;
+ opp-peak-kBps = <500000 1>;
+ };
+
+ opp-8000000 {
+ opp-hz = /bits/ 64 <8000000>;
+ required-opps = <&rpmhpd_opp_nom>;
+ opp-peak-kBps = <984500 1>;
+ };
+ };
+
};

pcie0_phy: phy@1c06000 {
@@ -1938,7 +1963,56 @@ pcie1: pcie@1c08000 {
pinctrl-names = "default";
pinctrl-0 = <&pcie1_default_state>;

+ operating-points-v2 = <&pcie1_opp_table>;
+
status = "disabled";
+
+ pcie1_opp_table: opp-table {
+ compatible = "operating-points-v2";
+
+ /* GEN 1x1 */
+ opp-2500000 {
+ opp-hz = /bits/ 64 <2500000>;
+ required-opps = <&rpmhpd_opp_low_svs>;
+ opp-peak-kBps = <250000 1>;
+ };
+
+ /* GEN 1x2 GEN 2x1 */
+ opp-5000000 {
+ opp-hz = /bits/ 64 <5000000>;
+ required-opps = <&rpmhpd_opp_low_svs>;
+ opp-peak-kBps = <500000 1>;
+ };
+
+ /* GEN 2x2 */
+ opp-10000000 {
+ opp-hz = /bits/ 64 <10000000>;
+ required-opps = <&rpmhpd_opp_low_svs>;
+ opp-peak-kBps = <1000000 1>;
+ };
+
+ /* GEN 3x1 */
+ opp-8000000 {
+ opp-hz = /bits/ 64 <8000000>;
+ required-opps = <&rpmhpd_opp_nom>;
+ opp-peak-kBps = <984500 1>;
+ };
+
+ /* GEN 3x2 GEN 4x1 */
+ opp-16000000 {
+ opp-hz = /bits/ 64 <16000000>;
+ required-opps = <&rpmhpd_opp_nom>;
+ opp-peak-kBps = <1969000 1>;
+ };
+
+ /* GEN 4x2 */
+ opp-32000000 {
+ opp-hz = /bits/ 64 <32000000>;
+ required-opps = <&rpmhpd_opp_nom>;
+ opp-peak-kBps = <3938000 1>;
+ };
+ };
+
};

pcie1_phy: phy@1c0e000 {

--
2.42.0


Subject: [PATCH v8 6/7] PCI: Bring the PCIe speed to MBps logic to new pcie_link_speed_to_mbps()

Move the switch case in pcie_link_speed_mbps() to a new function,
pcie_link_speed_to_mbps(), in the header file so that it can be used in
other places, such as controller drivers.
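
For example (a hypothetical caller, just to show the intended use), a
controller driver that has read the Link Status register can do:

	speed = pcie_link_speed[FIELD_GET(PCI_EXP_LNKSTA_CLS, lnksta)];
	mbps = pcie_link_speed_to_mbps(speed);
	if (mbps < 0)
		return mbps;	/* -EINVAL for unknown speeds */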

Signed-off-by: Krishna chaitanya chundru <[email protected]>
---
drivers/pci/pci.c | 19 +------------------
drivers/pci/pci.h | 22 ++++++++++++++++++++++
2 files changed, 23 insertions(+), 18 deletions(-)

diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index d8f11a078924..b441ab862a8d 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -6309,24 +6309,7 @@ int pcie_link_speed_mbps(struct pci_dev *pdev)
if (err)
return err;

- switch (to_pcie_link_speed(lnksta)) {
- case PCIE_SPEED_2_5GT:
- return 2500;
- case PCIE_SPEED_5_0GT:
- return 5000;
- case PCIE_SPEED_8_0GT:
- return 8000;
- case PCIE_SPEED_16_0GT:
- return 16000;
- case PCIE_SPEED_32_0GT:
- return 32000;
- case PCIE_SPEED_64_0GT:
- return 64000;
- default:
- break;
- }
-
- return -EINVAL;
+ return pcie_link_speed_to_mbps(to_pcie_link_speed(lnksta));
}
EXPORT_SYMBOL(pcie_link_speed_mbps);

diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
index 2336a8d1edab..40403783229f 100644
--- a/drivers/pci/pci.h
+++ b/drivers/pci/pci.h
@@ -282,6 +282,28 @@ void pci_bus_put(struct pci_bus *bus);
(speed) == PCIE_SPEED_2_5GT ? 2500*8/10 : \
0)

+static inline int pcie_link_speed_to_mbps(enum pci_bus_speed speed)
+{
+ switch (speed) {
+ case PCIE_SPEED_2_5GT:
+ return 2500;
+ case PCIE_SPEED_5_0GT:
+ return 5000;
+ case PCIE_SPEED_8_0GT:
+ return 8000;
+ case PCIE_SPEED_16_0GT:
+ return 16000;
+ case PCIE_SPEED_32_0GT:
+ return 32000;
+ case PCIE_SPEED_64_0GT:
+ return 64000;
+ default:
+ break;
+ }
+
+ return -EINVAL;
+}
+
const char *pci_speed_string(enum pci_bus_speed speed);
enum pci_bus_speed pcie_get_speed_cap(struct pci_dev *dev);
enum pcie_link_width pcie_get_width_cap(struct pci_dev *dev);

--
2.42.0


Subject: [PATCH v8 7/7] PCI: qcom: Add OPP support to scale performance state of power domain

QCOM Resource Power Manager-hardened (RPMh) is a hardware block which
maintains hardware state of a regulator by performing max aggregation of
the requests made by all of the clients.

The PCIe controller can operate at different RPMh performance states of the
power domain based on the speed of the link. And this performance state
varies from target to target: some controllers support GEN3 at the NOM
(Nominal) voltage corner, while others support GEN3 at the low SVS (static
voltage scaling) corner.

The SoC can be more power efficient if we scale the performance state
based on the aggregate PCIe link bandwidth.

Add Operating Performance Points (OPP) support to vote for RPMh state based
on the aggregate link bandwidth.

The OPP framework can handle ICC BW voting as well, so move the ICC BW
voting into the OPP framework if OPP entries are present.

Different link configurations may share the same aggregate bandwidth,
e.g., a 2.5 GT/s x2 link and a 5.0 GT/s x1 link have the same bandwidth
and share the same OPP entry.
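
Concretely (our arithmetic, matching the frequency lookup in the diff
below):

	2.5 GT/s x2: 2500 Mbps * 1000 * 2 = 5000000 -> opp-5000000
	5.0 GT/s x1: 5000 Mbps * 1000 * 1 = 5000000 -> opp-5000000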

Since ICC voting is moving into the OPP framework, don't initialize ICC if
OPP is supported.

Signed-off-by: Krishna chaitanya chundru <[email protected]>
---
drivers/pci/controller/dwc/pcie-qcom.c | 81 +++++++++++++++++++++++++++-------
1 file changed, 66 insertions(+), 15 deletions(-)

diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
index a0266bfe71f1..2ec14bfafcfc 100644
--- a/drivers/pci/controller/dwc/pcie-qcom.c
+++ b/drivers/pci/controller/dwc/pcie-qcom.c
@@ -22,6 +22,7 @@
#include <linux/of.h>
#include <linux/of_gpio.h>
#include <linux/pci.h>
+#include <linux/pm_opp.h>
#include <linux/pm_runtime.h>
#include <linux/platform_device.h>
#include <linux/phy/pcie.h>
@@ -244,6 +245,7 @@ struct qcom_pcie {
const struct qcom_pcie_cfg *cfg;
struct dentry *debugfs;
bool suspended;
+ bool opp_supported;
};

#define to_qcom_pcie(x) dev_get_drvdata((x)->dev)
@@ -1405,15 +1407,13 @@ static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
return 0;
}

-static void qcom_pcie_icc_update(struct qcom_pcie *pcie)
+static void qcom_pcie_icc_opp_update(struct qcom_pcie *pcie)
{
struct dw_pcie *pci = pcie->pci;
- u32 offset, status;
+ u32 offset, status, freq;
+ struct dev_pm_opp *opp;
int speed, width;
- int ret;
-
- if (!pcie->icc_mem)
- return;
+ int ret, mbps;

offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
status = readw(pci->dbi_base + offset + PCI_EXP_LNKSTA);
@@ -1425,11 +1425,30 @@ static void qcom_pcie_icc_update(struct qcom_pcie *pcie)
speed = FIELD_GET(PCI_EXP_LNKSTA_CLS, status);
width = FIELD_GET(PCI_EXP_LNKSTA_NLW, status);

- ret = icc_set_bw(pcie->icc_mem, 0, width * QCOM_PCIE_LINK_SPEED_TO_BW(speed));
- if (ret) {
- dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n",
- ret);
+ if (pcie->opp_supported) {
+ mbps = pcie_link_speed_to_mbps(pcie_link_speed[speed]);
+ if (mbps < 0)
+ return;
+
+ freq = mbps * 1000;
+ opp = dev_pm_opp_find_freq_exact(pci->dev, freq * width, true);
+ if (!IS_ERR(opp)) {
+ ret = dev_pm_opp_set_opp(pci->dev, opp);
+ if (ret)
+ dev_err(pci->dev, "Failed to set opp: freq %ld ret %d\n",
+ dev_pm_opp_get_freq(opp), ret);
+ dev_pm_opp_put(opp);
+ }
+ } else {
+ ret = icc_set_bw(pcie->icc_mem, 0,
+ width * QCOM_PCIE_LINK_SPEED_TO_BW(speed));
+ if (ret) {
+ dev_err(pci->dev,
+ "failed to set interconnect bandwidth for pcie-mem: %d\n", ret);
+ }
}
+
+ return;
}

static int qcom_pcie_link_transition_count(struct seq_file *s, void *data)
@@ -1472,8 +1491,10 @@ static void qcom_pcie_init_debugfs(struct qcom_pcie *pcie)
static int qcom_pcie_probe(struct platform_device *pdev)
{
const struct qcom_pcie_cfg *pcie_cfg;
+ unsigned long max_freq = INT_MAX;
struct device *dev = &pdev->dev;
struct qcom_pcie *pcie;
+ struct dev_pm_opp *opp;
struct dw_pcie_rp *pp;
struct resource *res;
struct dw_pcie *pci;
@@ -1540,9 +1561,36 @@ static int qcom_pcie_probe(struct platform_device *pdev)
goto err_pm_runtime_put;
}

- ret = qcom_pcie_icc_init(pcie);
- if (ret)
+ /* OPP table is optional */
+ ret = devm_pm_opp_of_add_table(dev);
+ if (ret && ret != -ENODEV) {
+ dev_err_probe(dev, ret, "Failed to add OPP table\n");
goto err_pm_runtime_put;
+ }
+
+ /*
+ * Use highest OPP here if the OPP table is present. At the end of
+ * the probe(), OPP will be updated using qcom_pcie_icc_opp_update().
+ */
+ if (ret != -ENODEV) {
+ opp = dev_pm_opp_find_freq_floor(dev, &max_freq);
+ if (!IS_ERR(opp)) {
+ ret = dev_pm_opp_set_opp(dev, opp);
+ if (ret)
+ dev_err_probe(pci->dev, ret,
+ "Failed to set opp: freq %ld\n",
+ dev_pm_opp_get_freq(opp));
+ dev_pm_opp_put(opp);
+ }
+ pcie->opp_supported = true;
+ }
+
+ /* Skip ICC init if OPP is supported as ICC bw is handled by OPP */
+ if (!pcie->opp_supported) {
+ ret = qcom_pcie_icc_init(pcie);
+ if (ret)
+ goto err_pm_runtime_put;
+ }

ret = pcie->cfg->ops->get_resources(pcie);
if (ret)
@@ -1562,7 +1610,7 @@ static int qcom_pcie_probe(struct platform_device *pdev)
goto err_phy_exit;
}

- qcom_pcie_icc_update(pcie);
+ qcom_pcie_icc_opp_update(pcie);

if (pcie->mhi)
qcom_pcie_init_debugfs(pcie);
@@ -1621,10 +1669,13 @@ static int qcom_pcie_suspend_noirq(struct device *dev)
qcom_pcie_host_init(&pcie->pci->pp);
pcie->suspended = false;
}
- qcom_pcie_icc_update(pcie);
+ qcom_pcie_icc_opp_update(pcie);
return ret;
}

+ if (pcie->opp_supported)
+ dev_pm_opp_set_opp(pcie->pci->dev, NULL);
+
return 0;
}

@@ -1647,7 +1698,7 @@ static int qcom_pcie_resume_noirq(struct device *dev)
pcie->suspended = false;
}

- qcom_pcie_icc_update(pcie);
+ qcom_pcie_icc_opp_update(pcie);

return 0;
}

--
2.42.0


2024-03-04 17:41:35

by Manivannan Sadhasivam

Subject: Re: [PATCH v8 3/7] PCI: qcom: Add ICC bandwidth vote for CPU to PCIe path

On Sat, Mar 02, 2024 at 09:29:57AM +0530, Krishna chaitanya chundru wrote:
> To access PCIe registers, PCIe BAR space, config space the CPU-PCIe
> ICC (interconnect consumers) path should be voted otherwise it may
> lead to NoC (Network on chip) timeout. We are surviving because of
> other driver vote for this path.
>
> As there is less access on this path compared to PCIe to mem path
> add minimum vote i.e 1KBps bandwidth always.

Please add the info that 1 KBps is what was shared by the HW team.

>
> When suspending, disable this path after register space access
> is done.
>
> Reviewed-by: Bryan O'Donoghue <[email protected]>
> Signed-off-by: Krishna chaitanya chundru <[email protected]>
> ---
> drivers/pci/controller/dwc/pcie-qcom.c | 38 ++++++++++++++++++++++++++++++++--
> 1 file changed, 36 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
> index 10f2d0bb86be..a0266bfe71f1 100644
> --- a/drivers/pci/controller/dwc/pcie-qcom.c
> +++ b/drivers/pci/controller/dwc/pcie-qcom.c
> @@ -240,6 +240,7 @@ struct qcom_pcie {
> struct phy *phy;
> struct gpio_desc *reset;
> struct icc_path *icc_mem;
> + struct icc_path *icc_cpu;
> const struct qcom_pcie_cfg *cfg;
> struct dentry *debugfs;
> bool suspended;
> @@ -1372,6 +1373,9 @@ static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
> if (IS_ERR(pcie->icc_mem))
> return PTR_ERR(pcie->icc_mem);
>
> + pcie->icc_cpu = devm_of_icc_get(pci->dev, "cpu-pcie");
> + if (IS_ERR(pcie->icc_cpu))
> + return PTR_ERR(pcie->icc_cpu);
> /*
> * Some Qualcomm platforms require interconnect bandwidth constraints
> * to be set before enabling interconnect clocks.
> @@ -1381,7 +1385,19 @@ static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
> */
> ret = icc_set_bw(pcie->icc_mem, 0, QCOM_PCIE_LINK_SPEED_TO_BW(1));
> if (ret) {
> - dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n",
> + dev_err(pci->dev, "failed to set interconnect bandwidth for pcie-mem: %d\n",

"PCIe-MEM"

> + ret);
> + return ret;
> + }
> +
> + /*
> + * The config space, BAR space and registers goes through cpu-pcie path
> + * Set peak bandwidth to 1KBps as recommended by HW team for this path
> + * all the time.

How about,

"Since the CPU-PCIe path is only used for activities like register
access, Config/BAR space access, HW team has recommended to use a
minimal bandwidth of 1KBps just to keep the link active."

> + */
> + ret = icc_set_bw(pcie->icc_cpu, 0, kBps_to_icc(1));
> + if (ret) {
> + dev_err(pci->dev, "failed to set interconnect bandwidth for cpu-pcie: %d\n",
> ret);
> return ret;
> }
> @@ -1573,7 +1589,7 @@ static int qcom_pcie_suspend_noirq(struct device *dev)
> */
> ret = icc_set_bw(pcie->icc_mem, 0, kBps_to_icc(1));
> if (ret) {
> - dev_err(dev, "Failed to set interconnect bandwidth: %d\n", ret);
> + dev_err(dev, "Failed to set interconnect bandwidth for pcie-mem: %d\n", ret);

"PCIe-MEM"

> return ret;
> }
>
> @@ -1597,6 +1613,18 @@ static int qcom_pcie_suspend_noirq(struct device *dev)
> pcie->suspended = true;
> }
>
> + /* Remove CPU path vote after all the register access is done */

"Remove the vote for CPU-PCIe path now, since at this point onwards, no register
access will be done."

> + ret = icc_disable(pcie->icc_cpu);
> + if (ret) {
> + dev_err(dev, "failed to disable icc path of cpu-pcie: %d\n", ret);

"CPU-PCIe"

> + if (pcie->suspended) {
> + qcom_pcie_host_init(&pcie->pci->pp);

Interesting. So if icc_disable() fails, can the IP continue to function?

> + pcie->suspended = false;
> + }
> + qcom_pcie_icc_update(pcie);
> + return ret;
> + }
> +
> return 0;
> }
>
> @@ -1605,6 +1633,12 @@ static int qcom_pcie_resume_noirq(struct device *dev)
> struct qcom_pcie *pcie = dev_get_drvdata(dev);
> int ret;
>
> + ret = icc_enable(pcie->icc_cpu);
> + if (ret) {
> + dev_err(dev, "failed to enable icc path of cpu-pcie: %d\n", ret);

"CPU-PCIe"

- Mani

--
மணிவண்ணன் சதாசிவம்

2024-03-04 17:49:44

by Manivannan Sadhasivam

Subject: Re: [PATCH v8 5/7] arm64: dts: qcom: sm8450: Add opp table support to PCIe

On Sat, Mar 02, 2024 at 09:29:59AM +0530, Krishna chaitanya chundru wrote:
> PCIe needs to choose the appropriate performance state of RPMH power
> domain and interconnect bandwidth based up on the PCIe gen speed.
>
> Add the OPP table support to specify RPMH performance states and
> interconnect peak bandwidth.
>
> Signed-off-by: Krishna chaitanya chundru <[email protected]>
> ---
> arch/arm64/boot/dts/qcom/sm8450.dtsi | 74 ++++++++++++++++++++++++++++++++++++
> 1 file changed, 74 insertions(+)
>
> diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
> index 6b1d2e0d9d14..662f2129f20d 100644
> --- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
> +++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
> @@ -1827,7 +1827,32 @@ pcie0: pcie@1c00000 {
> pinctrl-names = "default";
> pinctrl-0 = <&pcie0_default_state>;
>
> + operating-points-v2 = <&pcie0_opp_table>;
> +
> status = "disabled";
> +
> + pcie0_opp_table: opp-table {
> + compatible = "operating-points-v2";
> +
> + opp-2500000 {

Add the comments that you added below.

> + opp-hz = /bits/ 64 <2500000>;
> + required-opps = <&rpmhpd_opp_low_svs>;
> + opp-peak-kBps = <250000 1>;

Shouldn't the peak BW be greater than the avg BW? At least in upstream we
follow that pattern.

- Mani

> + };
> +
> + opp-5000000 {
> + opp-hz = /bits/ 64 <5000000>;
> + required-opps = <&rpmhpd_opp_low_svs>;
> + opp-peak-kBps = <500000 1>;
> + };
> +
> + opp-8000000 {
> + opp-hz = /bits/ 64 <8000000>;
> + required-opps = <&rpmhpd_opp_nom>;
> + opp-peak-kBps = <984500 1>;
> + };
> + };
> +
> };
>
> pcie0_phy: phy@1c06000 {
> @@ -1938,7 +1963,56 @@ pcie1: pcie@1c08000 {
> pinctrl-names = "default";
> pinctrl-0 = <&pcie1_default_state>;
>
> + operating-points-v2 = <&pcie1_opp_table>;
> +
> status = "disabled";
> +
> + pcie1_opp_table: opp-table {
> + compatible = "operating-points-v2";
> +
> + /* GEN 1x1 */
> + opp-2500000 {
> + opp-hz = /bits/ 64 <2500000>;
> + required-opps = <&rpmhpd_opp_low_svs>;
> + opp-peak-kBps = <250000 1>;
> + };
> +
> + /* GEN 1x2 GEN 2x1 */
> + opp-5000000 {
> + opp-hz = /bits/ 64 <5000000>;
> + required-opps = <&rpmhpd_opp_low_svs>;
> + opp-peak-kBps = <500000 1>;
> + };
> +
> + /* GEN 2x2 */
> + opp-10000000 {
> + opp-hz = /bits/ 64 <10000000>;
> + required-opps = <&rpmhpd_opp_low_svs>;
> + opp-peak-kBps = <1000000 1>;
> + };
> +
> + /* GEN 3x1 */
> + opp-8000000 {
> + opp-hz = /bits/ 64 <8000000>;
> + required-opps = <&rpmhpd_opp_nom>;
> + opp-peak-kBps = <984500 1>;
> + };
> +
> + /* GEN 3x2 GEN 4x1 */
> + opp-16000000 {
> + opp-hz = /bits/ 64 <16000000>;
> + required-opps = <&rpmhpd_opp_nom>;
> + opp-peak-kBps = <1969000 1>;
> + };
> +
> + /* GEN 4x2 */
> + opp-32000000 {
> + opp-hz = /bits/ 64 <32000000>;
> + required-opps = <&rpmhpd_opp_nom>;
> + opp-peak-kBps = <3938000 1>;
> + };
> + };
> +
> };
>
> pcie1_phy: phy@1c0e000 {
>
> --
> 2.42.0
>

--
மணிவண்ணன் சதாசிவம்

2024-03-04 17:52:00

by Manivannan Sadhasivam

Subject: Re: [PATCH v8 6/7] PCI: Bring the PCIe speed to MBps logic to new pcie_link_speed_to_mbps()

On Sat, Mar 02, 2024 at 09:30:00AM +0530, Krishna chaitanya chundru wrote:
> Bring the switch case in pcie_link_speed_mbps() to new function to
> the header file so that it can be used in other places like
> in controller driver.
>

Suggested-by: Bjorn Helgaas <[email protected]>

> Signed-off-by: Krishna chaitanya chundru <[email protected]>

Reviewed-by: Manivannan Sadhasivam <[email protected]>

- Mani

> ---
> drivers/pci/pci.c | 19 +------------------
> drivers/pci/pci.h | 22 ++++++++++++++++++++++
> 2 files changed, 23 insertions(+), 18 deletions(-)
>
> diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> index d8f11a078924..b441ab862a8d 100644
> --- a/drivers/pci/pci.c
> +++ b/drivers/pci/pci.c
> @@ -6309,24 +6309,7 @@ int pcie_link_speed_mbps(struct pci_dev *pdev)
> if (err)
> return err;
>
> - switch (to_pcie_link_speed(lnksta)) {
> - case PCIE_SPEED_2_5GT:
> - return 2500;
> - case PCIE_SPEED_5_0GT:
> - return 5000;
> - case PCIE_SPEED_8_0GT:
> - return 8000;
> - case PCIE_SPEED_16_0GT:
> - return 16000;
> - case PCIE_SPEED_32_0GT:
> - return 32000;
> - case PCIE_SPEED_64_0GT:
> - return 64000;
> - default:
> - break;
> - }
> -
> - return -EINVAL;
> + return pcie_link_speed_to_mbps(to_pcie_link_speed(lnksta));
> }
> EXPORT_SYMBOL(pcie_link_speed_mbps);
>
> diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
> index 2336a8d1edab..40403783229f 100644
> --- a/drivers/pci/pci.h
> +++ b/drivers/pci/pci.h
> @@ -282,6 +282,28 @@ void pci_bus_put(struct pci_bus *bus);
> (speed) == PCIE_SPEED_2_5GT ? 2500*8/10 : \
> 0)
>
> +static inline int pcie_link_speed_to_mbps(enum pci_bus_speed speed)
> +{
> + switch (speed) {
> + case PCIE_SPEED_2_5GT:
> + return 2500;
> + case PCIE_SPEED_5_0GT:
> + return 5000;
> + case PCIE_SPEED_8_0GT:
> + return 8000;
> + case PCIE_SPEED_16_0GT:
> + return 16000;
> + case PCIE_SPEED_32_0GT:
> + return 32000;
> + case PCIE_SPEED_64_0GT:
> + return 64000;
> + default:
> + break;
> + }
> +
> + return -EINVAL;
> +}
> +
> const char *pci_speed_string(enum pci_bus_speed speed);
> enum pci_bus_speed pcie_get_speed_cap(struct pci_dev *dev);
> enum pcie_link_width pcie_get_width_cap(struct pci_dev *dev);
>
> --
> 2.42.0
>

--
மணிவண்ணன் சதாசிவம்

2024-03-04 18:05:45

by Manivannan Sadhasivam

Subject: Re: [PATCH v8 7/7] PCI: qcom: Add OPP support to scale performance state of power domain

On Sat, Mar 02, 2024 at 09:30:01AM +0530, Krishna chaitanya chundru wrote:
> QCOM Resource Power Manager-hardened (RPMh) is a hardware block which
> maintains hardware state of a regulator by performing max aggregation of
> the requests made by all of the clients.
>
> PCIe controller can operate on different RPMh performance state of power
> domain based on the speed of the link. And this performance state varies
> from target to target, like some controllers support GEN3 in NOM (Nominal)
> voltage corner, while some other supports GEN3 in low SVS (static voltage
> scaling).
>
> The SoC can be more power efficient if we scale the performance state
> based on the aggregate PCIe link bandwidth.
>
> Add Operating Performance Points (OPP) support to vote for RPMh state based
> on the aggregate link bandwidth.
>
> OPP can handle ICC bw voting also, so move ICC bw voting through OPP
> framework if OPP entries are present.
>
> Different link configurations may share the same aggregate bandwidth,
> e.g., a 2.5 GT/s x2 link and a 5.0 GT/s x1 link have the same bandwidth
> and share the same OPP entry.
>
> As we are moving ICC voting as part of OPP, don't initialize ICC if OPP
> is supported.
>
> Signed-off-by: Krishna chaitanya chundru <[email protected]>
> ---
> drivers/pci/controller/dwc/pcie-qcom.c | 81 +++++++++++++++++++++++++++-------
> 1 file changed, 66 insertions(+), 15 deletions(-)
>
> diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
> index a0266bfe71f1..2ec14bfafcfc 100644
> --- a/drivers/pci/controller/dwc/pcie-qcom.c
> +++ b/drivers/pci/controller/dwc/pcie-qcom.c
> @@ -22,6 +22,7 @@
> #include <linux/of.h>
> #include <linux/of_gpio.h>
> #include <linux/pci.h>
> +#include <linux/pm_opp.h>
> #include <linux/pm_runtime.h>
> #include <linux/platform_device.h>
> #include <linux/phy/pcie.h>
> @@ -244,6 +245,7 @@ struct qcom_pcie {
> const struct qcom_pcie_cfg *cfg;
> struct dentry *debugfs;
> bool suspended;
> + bool opp_supported;

You can just use "pcie->icc_mem" to differentiate between OPP and ICC. No need
of a new flag.

> };
>
> #define to_qcom_pcie(x) dev_get_drvdata((x)->dev)
> @@ -1405,15 +1407,13 @@ static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
> return 0;
> }
>
> -static void qcom_pcie_icc_update(struct qcom_pcie *pcie)
> +static void qcom_pcie_icc_opp_update(struct qcom_pcie *pcie)
> {
> struct dw_pcie *pci = pcie->pci;
> - u32 offset, status;
> + u32 offset, status, freq;
> + struct dev_pm_opp *opp;
> int speed, width;
> - int ret;
> -
> - if (!pcie->icc_mem)
> - return;
> + int ret, mbps;
>
> offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
> status = readw(pci->dbi_base + offset + PCI_EXP_LNKSTA);
> @@ -1425,11 +1425,30 @@ static void qcom_pcie_icc_update(struct qcom_pcie *pcie)
> speed = FIELD_GET(PCI_EXP_LNKSTA_CLS, status);
> width = FIELD_GET(PCI_EXP_LNKSTA_NLW, status);
>
> - ret = icc_set_bw(pcie->icc_mem, 0, width * QCOM_PCIE_LINK_SPEED_TO_BW(speed));
> - if (ret) {
> - dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n",
> - ret);
> + if (pcie->opp_supported) {
> + mbps = pcie_link_speed_to_mbps(pcie_link_speed[speed]);
> + if (mbps < 0)
> + return;
> +
> + freq = mbps * 1000;
> + opp = dev_pm_opp_find_freq_exact(pci->dev, freq * width, true);
> + if (!IS_ERR(opp)) {
> + ret = dev_pm_opp_set_opp(pci->dev, opp);
> + if (ret)
> + dev_err(pci->dev, "Failed to set opp: freq %ld ret %d\n",
> + dev_pm_opp_get_freq(opp), ret);
> + dev_pm_opp_put(opp);
> + }
> + } else {
> + ret = icc_set_bw(pcie->icc_mem, 0,
> + width * QCOM_PCIE_LINK_SPEED_TO_BW(speed));
> + if (ret) {
> + dev_err(pci->dev,
> + "failed to set interconnect bandwidth for pcie-mem: %d\n", ret);

"PCIe-MEM"

> + }
> }
> +
> + return;
> }
>
> static int qcom_pcie_link_transition_count(struct seq_file *s, void *data)
> @@ -1472,8 +1491,10 @@ static void qcom_pcie_init_debugfs(struct qcom_pcie *pcie)
> static int qcom_pcie_probe(struct platform_device *pdev)
> {
> const struct qcom_pcie_cfg *pcie_cfg;
> + unsigned long max_freq = INT_MAX;
> struct device *dev = &pdev->dev;
> struct qcom_pcie *pcie;
> + struct dev_pm_opp *opp;
> struct dw_pcie_rp *pp;
> struct resource *res;
> struct dw_pcie *pci;
> @@ -1540,9 +1561,36 @@ static int qcom_pcie_probe(struct platform_device *pdev)
> goto err_pm_runtime_put;
> }
>
> - ret = qcom_pcie_icc_init(pcie);
> - if (ret)
> + /* OPP table is optional */
> + ret = devm_pm_opp_of_add_table(dev);
> + if (ret && ret != -ENODEV) {
> + dev_err_probe(dev, ret, "Failed to add OPP table\n");
> goto err_pm_runtime_put;
> + }
> +
> + /*
> + * Use highest OPP here if the OPP table is present. At the end of

Why highest opp? For ICC, we set minimal bandwidth before.

> + * the probe(), OPP will be updated using qcom_pcie_icc_opp_update().
> + */
> + if (ret != -ENODEV) {

if (!ret)

> + opp = dev_pm_opp_find_freq_floor(dev, &max_freq);
> + if (!IS_ERR(opp)) {
> + ret = dev_pm_opp_set_opp(dev, opp);
> + if (ret)
> + dev_err_probe(pci->dev, ret,
> + "Failed to set opp: freq %ld\n",

"Failed to set OPP for freq: %ld\n"

> + dev_pm_opp_get_freq(opp));
> + dev_pm_opp_put(opp);
> + }
> + pcie->opp_supported = true;
> + }
> +
> + /* Skip ICC init if OPP is supported as ICC bw is handled by OPP */
> + if (!pcie->opp_supported) {
> + ret = qcom_pcie_icc_init(pcie);

First check whether ICC is present or not and then check OPP as a fallback. This
avoids an extra flag.

- Mani

> + if (ret)
> + goto err_pm_runtime_put;
> + }
>
> ret = pcie->cfg->ops->get_resources(pcie);
> if (ret)
> @@ -1562,7 +1610,7 @@ static int qcom_pcie_probe(struct platform_device *pdev)
> goto err_phy_exit;
> }
>
> - qcom_pcie_icc_update(pcie);
> + qcom_pcie_icc_opp_update(pcie);
>
> if (pcie->mhi)
> qcom_pcie_init_debugfs(pcie);
> @@ -1621,10 +1669,13 @@ static int qcom_pcie_suspend_noirq(struct device *dev)
> qcom_pcie_host_init(&pcie->pci->pp);
> pcie->suspended = false;
> }
> - qcom_pcie_icc_update(pcie);
> + qcom_pcie_icc_opp_update(pcie);
> return ret;
> }
>
> + if (pcie->opp_supported)
> + dev_pm_opp_set_opp(pcie->pci->dev, NULL);
> +
> return 0;
> }
>
> @@ -1647,7 +1698,7 @@ static int qcom_pcie_resume_noirq(struct device *dev)
> pcie->suspended = false;
> }
>
> - qcom_pcie_icc_update(pcie);
> + qcom_pcie_icc_opp_update(pcie);
>
> return 0;
> }
>
> --
> 2.42.0
>

--
மணிவண்ணன் சதாசிவம்

Subject: Re: [PATCH v8 3/7] PCI: qcom: Add ICC bandwidth vote for CPU to PCIe path



On 3/4/2024 11:11 PM, Manivannan Sadhasivam wrote:
> On Sat, Mar 02, 2024 at 09:29:57AM +0530, Krishna chaitanya chundru wrote:
>> To access PCIe registers, PCIe BAR space, config space the CPU-PCIe
>> ICC (interconnect consumers) path should be voted otherwise it may
>> lead to NoC (Network on chip) timeout. We are surviving because of
>> other driver vote for this path.
>>
>> As there is less access on this path compared to PCIe to mem path
>> add minimum vote i.e 1KBps bandwidth always.
>
> Please add the info that 1KBps is what shared by the HW team.
>
Ack to all the comments
>>
>> When suspending, disable this path after register space access
>> is done.
>>
>> Reviewed-by: Bryan O'Donoghue <[email protected]>
>> Signed-off-by: Krishna chaitanya chundru <[email protected]>
>> ---
>> drivers/pci/controller/dwc/pcie-qcom.c | 38 ++++++++++++++++++++++++++++++++--
>> 1 file changed, 36 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
>> index 10f2d0bb86be..a0266bfe71f1 100644
>> --- a/drivers/pci/controller/dwc/pcie-qcom.c
>> +++ b/drivers/pci/controller/dwc/pcie-qcom.c
>> @@ -240,6 +240,7 @@ struct qcom_pcie {
>> struct phy *phy;
>> struct gpio_desc *reset;
>> struct icc_path *icc_mem;
>> + struct icc_path *icc_cpu;
>> const struct qcom_pcie_cfg *cfg;
>> struct dentry *debugfs;
>> bool suspended;
>> @@ -1372,6 +1373,9 @@ static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
>> if (IS_ERR(pcie->icc_mem))
>> return PTR_ERR(pcie->icc_mem);
>>
>> + pcie->icc_cpu = devm_of_icc_get(pci->dev, "cpu-pcie");
>> + if (IS_ERR(pcie->icc_cpu))
>> + return PTR_ERR(pcie->icc_cpu);
>> /*
>> * Some Qualcomm platforms require interconnect bandwidth constraints
>> * to be set before enabling interconnect clocks.
>> @@ -1381,7 +1385,19 @@ static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
>> */
>> ret = icc_set_bw(pcie->icc_mem, 0, QCOM_PCIE_LINK_SPEED_TO_BW(1));
>> if (ret) {
>> - dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n",
>> + dev_err(pci->dev, "failed to set interconnect bandwidth for pcie-mem: %d\n",
>
> "PCIe-MEM"
>
>> + ret);
>> + return ret;
>> + }
>> +
>> + /*
>> + * The config space, BAR space and registers goes through cpu-pcie path
>> + * Set peak bandwidth to 1KBps as recommended by HW team for this path
>> + * all the time.
>
> How about,
>
> "Since the CPU-PCIe path is only used for activities like register
> access, Config/BAR space access, HW team has recommended to use a
> minimal bandwidth of 1KBps just to keep the link active."
>
>> + */
>> + ret = icc_set_bw(pcie->icc_cpu, 0, kBps_to_icc(1));
>> + if (ret) {
>> + dev_err(pci->dev, "failed to set interconnect bandwidth for cpu-pcie: %d\n",
>> ret);
>> return ret;
>> }
>> @@ -1573,7 +1589,7 @@ static int qcom_pcie_suspend_noirq(struct device *dev)
>> */
>> ret = icc_set_bw(pcie->icc_mem, 0, kBps_to_icc(1));
>> if (ret) {
>> - dev_err(dev, "Failed to set interconnect bandwidth: %d\n", ret);
>> + dev_err(dev, "Failed to set interconnect bandwidth for pcie-mem: %d\n", ret);
>
> "PCIe-MEM"
>
>> return ret;
>> }
>>
>> @@ -1597,6 +1613,18 @@ static int qcom_pcie_suspend_noirq(struct device *dev)
>> pcie->suspended = true;
>> }
>>
>> + /* Remove CPU path vote after all the register access is done */
>
> "Remove the vote for CPU-PCIe path now, since at this point onwards, no register
> access will be done."
>
>> + ret = icc_disable(pcie->icc_cpu);
>> + if (ret) {
>> + dev_err(dev, "failed to disable icc path of cpu-pcie: %d\n", ret);
>
> "CPU-PCIe"
>
>> + if (pcie->suspended) {
>> + qcom_pcie_host_init(&pcie->pci->pp);
>
> Interesting. So if icc_disable() fails, can the IP continue to function?
>
As the ICC was already enabled before icc_disable() failed, the IP should work.
- Krishna Chaitanya.
>> + pcie->suspended = false;
>> + }
>> + qcom_pcie_icc_update(pcie);
>> + return ret;
>> + }
>> +
>> return 0;
>> }
>>
>> @@ -1605,6 +1633,12 @@ static int qcom_pcie_resume_noirq(struct device *dev)
>> struct qcom_pcie *pcie = dev_get_drvdata(dev);
>> int ret;
>>
>> + ret = icc_enable(pcie->icc_cpu);
>> + if (ret) {
>> + dev_err(dev, "failed to enable icc path of cpu-pcie: %d\n", ret);
>
> "CPU-PCIe"
>
> - Mani
>

Subject: Re: [PATCH v8 5/7] arm64: dts: qcom: sm8450: Add opp table support to PCIe



On 3/4/2024 11:19 PM, Manivannan Sadhasivam wrote:
> On Sat, Mar 02, 2024 at 09:29:59AM +0530, Krishna chaitanya chundru wrote:
>> PCIe needs to choose the appropriate performance state of RPMH power
>> domain and interconnect bandwidth based up on the PCIe gen speed.
>>
>> Add the OPP table support to specify RPMH performance states and
>> interconnect peak bandwidth.
>>
>> Signed-off-by: Krishna chaitanya chundru <[email protected]>
>> ---
>> arch/arm64/boot/dts/qcom/sm8450.dtsi | 74 ++++++++++++++++++++++++++++++++++++
>> 1 file changed, 74 insertions(+)
>>
>> diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
>> index 6b1d2e0d9d14..662f2129f20d 100644
>> --- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
>> +++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
>> @@ -1827,7 +1827,32 @@ pcie0: pcie@1c00000 {
>> pinctrl-names = "default";
>> pinctrl-0 = <&pcie0_default_state>;
>>
>> + operating-points-v2 = <&pcie0_opp_table>;
>> +
>> status = "disabled";
>> +
>> + pcie0_opp_table: opp-table {
>> + compatible = "operating-points-v2";
>> +
>> + opp-2500000 {
>
> Add the comments that you added below.
ACK.
>
>> + opp-hz = /bits/ 64 <2500000>;
>> + required-opps = <&rpmhpd_opp_low_svs>;
>> + opp-peak-kBps = <250000 1>;
>
> Isn't the peak bw should be greater that the avg bw? Atleast in upstream we
> follow that pattern.
>
> - Mani
The two values defined are both peak BW: one corresponds to the PCIe-MEM
path and the other to the CPU-PCIe path.
- Krishna Chaitanya.
>
>> + };
>> +
>> + opp-5000000 {
>> + opp-hz = /bits/ 64 <5000000>;
>> + required-opps = <&rpmhpd_opp_low_svs>;
>> + opp-peak-kBps = <500000 1>;
>> + };
>> +
>> + opp-8000000 {
>> + opp-hz = /bits/ 64 <8000000>;
>> + required-opps = <&rpmhpd_opp_nom>;
>> + opp-peak-kBps = <984500 1>;
>> + };
>> + };
>> +
>> };
>>
>> pcie0_phy: phy@1c06000 {
>> @@ -1938,7 +1963,56 @@ pcie1: pcie@1c08000 {
>> pinctrl-names = "default";
>> pinctrl-0 = <&pcie1_default_state>;
>>
>> + operating-points-v2 = <&pcie1_opp_table>;
>> +
>> status = "disabled";
>> +
>> + pcie1_opp_table: opp-table {
>> + compatible = "operating-points-v2";
>> +
>> + /* GEN 1x1 */
>> + opp-2500000 {
>> + opp-hz = /bits/ 64 <2500000>;
>> + required-opps = <&rpmhpd_opp_low_svs>;
>> + opp-peak-kBps = <250000 1>;
>> + };
>> +
>> + /* GEN 1x2 GEN 2x1 */
>> + opp-5000000 {
>> + opp-hz = /bits/ 64 <5000000>;
>> + required-opps = <&rpmhpd_opp_low_svs>;
>> + opp-peak-kBps = <500000 1>;
>> + };
>> +
>> + /* GEN 2x2 */
>> + opp-10000000 {
>> + opp-hz = /bits/ 64 <10000000>;
>> + required-opps = <&rpmhpd_opp_low_svs>;
>> + opp-peak-kBps = <1000000 1>;
>> + };
>> +
>> + /* GEN 3x1 */
>> + opp-8000000 {
>> + opp-hz = /bits/ 64 <8000000>;
>> + required-opps = <&rpmhpd_opp_nom>;
>> + opp-peak-kBps = <984500 1>;
>> + };
>> +
>> + /* GEN 3x2 GEN 4x1 */
>> + opp-16000000 {
>> + opp-hz = /bits/ 64 <16000000>;
>> + required-opps = <&rpmhpd_opp_nom>;
>> + opp-peak-kBps = <1969000 1>;
>> + };
>> +
>> + /* GEN 4x2 */
>> + opp-32000000 {
>> + opp-hz = /bits/ 64 <32000000>;
>> + required-opps = <&rpmhpd_opp_nom>;
>> + opp-peak-kBps = <3938000 1>;
>> + };
>> + };
>> +
>> };
>>
>> pcie1_phy: phy@1c0e000 {
>>
>> --
>> 2.42.0
>>
>

Subject: Re: [PATCH v8 7/7] PCI: qcom: Add OPP support to scale performance state of power domain



On 3/4/2024 11:35 PM, Manivannan Sadhasivam wrote:
> On Sat, Mar 02, 2024 at 09:30:01AM +0530, Krishna chaitanya chundru wrote:
>> QCOM Resource Power Manager-hardened (RPMh) is a hardware block which
>> maintains hardware state of a regulator by performing max aggregation of
>> the requests made by all of the clients.
>>
>> PCIe controller can operate on different RPMh performance state of power
>> domain based on the speed of the link. And this performance state varies
>> from target to target, like some controllers support GEN3 in NOM (Nominal)
>> voltage corner, while some other supports GEN3 in low SVS (static voltage
>> scaling).
>>
>> The SoC can be more power efficient if we scale the performance state
>> based on the aggregate PCIe link bandwidth.
>>
>> Add Operating Performance Points (OPP) support to vote for RPMh state based
>> on the aggregate link bandwidth.
>>
>> OPP can handle ICC bw voting also, so move ICC bw voting through OPP
>> framework if OPP entries are present.
>>
>> Different link configurations may share the same aggregate bandwidth,
>> e.g., a 2.5 GT/s x2 link and a 5.0 GT/s x1 link have the same bandwidth
>> and share the same OPP entry.
>>
>> As we are moving ICC voting as part of OPP, don't initialize ICC if OPP
>> is supported.
>>
>> Signed-off-by: Krishna chaitanya chundru <[email protected]>
>> ---
>> drivers/pci/controller/dwc/pcie-qcom.c | 81 +++++++++++++++++++++++++++-------
>> 1 file changed, 66 insertions(+), 15 deletions(-)
>>
>> diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
>> index a0266bfe71f1..2ec14bfafcfc 100644
>> --- a/drivers/pci/controller/dwc/pcie-qcom.c
>> +++ b/drivers/pci/controller/dwc/pcie-qcom.c
>> @@ -22,6 +22,7 @@
>> #include <linux/of.h>
>> #include <linux/of_gpio.h>
>> #include <linux/pci.h>
>> +#include <linux/pm_opp.h>
>> #include <linux/pm_runtime.h>
>> #include <linux/platform_device.h>
>> #include <linux/phy/pcie.h>
>> @@ -244,6 +245,7 @@ struct qcom_pcie {
>> const struct qcom_pcie_cfg *cfg;
>> struct dentry *debugfs;
>> bool suspended;
>> + bool opp_supported;
>
> You can just use "pcie->icc_mem" to differentiate between OPP and ICC. No need
> of a new flag.
>
Ack.

>> };
>>
>> #define to_qcom_pcie(x) dev_get_drvdata((x)->dev)
>> @@ -1405,15 +1407,13 @@ static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
>> return 0;
>> }
>>
>> -static void qcom_pcie_icc_update(struct qcom_pcie *pcie)
>> +static void qcom_pcie_icc_opp_update(struct qcom_pcie *pcie)
>> {
>> struct dw_pcie *pci = pcie->pci;
>> - u32 offset, status;
>> + u32 offset, status, freq;
>> + struct dev_pm_opp *opp;
>> int speed, width;
>> - int ret;
>> -
>> - if (!pcie->icc_mem)
>> - return;
>> + int ret, mbps;
>>
>> offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
>> status = readw(pci->dbi_base + offset + PCI_EXP_LNKSTA);
>> @@ -1425,11 +1425,30 @@ static void qcom_pcie_icc_update(struct qcom_pcie *pcie)
>> speed = FIELD_GET(PCI_EXP_LNKSTA_CLS, status);
>> width = FIELD_GET(PCI_EXP_LNKSTA_NLW, status);
>>
>> - ret = icc_set_bw(pcie->icc_mem, 0, width * QCOM_PCIE_LINK_SPEED_TO_BW(speed));
>> - if (ret) {
>> - dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n",
>> - ret);
>> + if (pcie->opp_supported) {
>> + mbps = pcie_link_speed_to_mbps(pcie_link_speed[speed]);
>> + if (mbps < 0)
>> + return;
>> +
>> + freq = mbps * 1000;
>> + opp = dev_pm_opp_find_freq_exact(pci->dev, freq * width, true);
>> + if (!IS_ERR(opp)) {
>> + ret = dev_pm_opp_set_opp(pci->dev, opp);
>> + if (ret)
>> + dev_err(pci->dev, "Failed to set opp: freq %ld ret %d\n",
>> + dev_pm_opp_get_freq(opp), ret);
>> + dev_pm_opp_put(opp);
>> + }
>> + } else {
>> + ret = icc_set_bw(pcie->icc_mem, 0,
>> + width * QCOM_PCIE_LINK_SPEED_TO_BW(speed));
>> + if (ret) {
>> + dev_err(pci->dev,
>> + "failed to set interconnect bandwidth for pcie-mem: %d\n", ret);
>
> "PCIe-MEM"
>
Ack.
>> + }
>> }
>> +
>> + return;
>> }
>>
>> static int qcom_pcie_link_transition_count(struct seq_file *s, void *data)
>> @@ -1472,8 +1491,10 @@ static void qcom_pcie_init_debugfs(struct qcom_pcie *pcie)
>> static int qcom_pcie_probe(struct platform_device *pdev)
>> {
>> const struct qcom_pcie_cfg *pcie_cfg;
>> + unsigned long max_freq = INT_MAX;
>> struct device *dev = &pdev->dev;
>> struct qcom_pcie *pcie;
>> + struct dev_pm_opp *opp;
>> struct dw_pcie_rp *pp;
>> struct resource *res;
>> struct dw_pcie *pci;
>> @@ -1540,9 +1561,36 @@ static int qcom_pcie_probe(struct platform_device *pdev)
>> goto err_pm_runtime_put;
>> }
>>
>> - ret = qcom_pcie_icc_init(pcie);
>> - if (ret)
>> + /* OPP table is optional */
>> + ret = devm_pm_opp_of_add_table(dev);
>> + if (ret && ret != -ENODEV) {
>> + dev_err_probe(dev, ret, "Failed to add OPP table\n");
>> goto err_pm_runtime_put;
>> + }
>> +
>> + /*
>> + * Use highest OPP here if the OPP table is present. At the end of
>
> Why highest opp? For ICC, we set minimal bandwidth before.
>
In OPP we are voting for both the ICC and the voltage corner; if we don't
vote for the maximum voltage corner, the PCIe link may not come up at the
maximum supported speed. Due to that we are voting for the maximum value.

Anyway, we update them based on the link speed and width, so this should
not create any issues.
>> + * the probe(), OPP will be updated using qcom_pcie_icc_opp_update().
>> + */
>> + if (ret != -ENODEV) {
>
> if (!ret)
>
>> + opp = dev_pm_opp_find_freq_floor(dev, &max_freq);
>> + if (!IS_ERR(opp)) {
>> + ret = dev_pm_opp_set_opp(dev, opp);
>> + if (ret)
>> + dev_err_probe(pci->dev, ret,
>> + "Failed to set opp: freq %ld\n",
>
> "Failed to set OPP for freq: %ld\n"
>
Ack
>> + dev_pm_opp_get_freq(opp));
>> + dev_pm_opp_put(opp);
>> + }
>> + pcie->opp_supported = true;
>> + }
>> +
>> + /* Skip ICC init if OPP is supported as ICC bw is handled by OPP */
>> + if (!pcie->opp_supported) {
>> + ret = qcom_pcie_icc_init(pcie);
>
> First check whether ICC is present or not and then check OPP as a fallback. This
> avoids an extra flag.
>
> - Mani
Ack.

- Krishna Chaitanya.
>
>> + if (ret)
>> + goto err_pm_runtime_put;
>> + }
>>
>> ret = pcie->cfg->ops->get_resources(pcie);
>> if (ret)
>> @@ -1562,7 +1610,7 @@ static int qcom_pcie_probe(struct platform_device *pdev)
>> goto err_phy_exit;
>> }
>>
>> - qcom_pcie_icc_update(pcie);
>> + qcom_pcie_icc_opp_update(pcie);
>>
>> if (pcie->mhi)
>> qcom_pcie_init_debugfs(pcie);
>> @@ -1621,10 +1669,13 @@ static int qcom_pcie_suspend_noirq(struct device *dev)
>> qcom_pcie_host_init(&pcie->pci->pp);
>> pcie->suspended = false;
>> }
>> - qcom_pcie_icc_update(pcie);
>> + qcom_pcie_icc_opp_update(pcie);
>> return ret;
>> }
>>
>> + if (pcie->opp_supported)
>> + dev_pm_opp_set_opp(pcie->pci->dev, NULL);
>> +
>> return 0;
>> }
>>
>> @@ -1647,7 +1698,7 @@ static int qcom_pcie_resume_noirq(struct device *dev)
>> pcie->suspended = false;
>> }
>>
>> - qcom_pcie_icc_update(pcie);
>> + qcom_pcie_icc_opp_update(pcie);
>>
>> return 0;
>> }
>>
>> --
>> 2.42.0
>>
>

2024-03-06 16:07:09

by Konrad Dybcio

Subject: Re: [PATCH v8 2/7] arm64: dts: qcom: sm8450: Add interconnect path to PCIe node



On 3/2/24 04:59, Krishna chaitanya chundru wrote:
> Add pcie-mem & cpu-pcie interconnect path to the PCIe nodes.
>
> Reviewed-by: Manivannan Sadhasivam <[email protected]>
> Signed-off-by: Krishna chaitanya chundru <[email protected]>
> ---
> arch/arm64/boot/dts/qcom/sm8450.dtsi | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
> index 01e4dfc4babd..6b1d2e0d9d14 100644
> --- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
> +++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
> @@ -1781,6 +1781,10 @@ pcie0: pcie@1c00000 {
> <0 0 0 3 &intc 0 0 0 151 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
> <0 0 0 4 &intc 0 0 0 152 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
>
> + interconnects = <&pcie_noc MASTER_PCIE_0 0 &mc_virt SLAVE_EBI1 0>,

Please use QCOM_ICC_TAG_ALWAYS.

> + <&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_PCIE_0 0>;

And this path could presumably be demoted to QCOM_ICC_TAG_ACTIVE_ONLY?

Konrad

2024-04-05 07:47:16

by Manivannan Sadhasivam

Subject: Re: [PATCH v8 2/7] arm64: dts: qcom: sm8450: Add interconnect path to PCIe node

On Wed, Mar 06, 2024 at 05:04:54PM +0100, Konrad Dybcio wrote:
>
>
> On 3/2/24 04:59, Krishna chaitanya chundru wrote:
> > Add pcie-mem & cpu-pcie interconnect path to the PCIe nodes.
> >
> > Reviewed-by: Manivannan Sadhasivam <[email protected]>
> > Signed-off-by: Krishna chaitanya chundru <[email protected]>
> > ---
> > arch/arm64/boot/dts/qcom/sm8450.dtsi | 8 ++++++++
> > 1 file changed, 8 insertions(+)
> >
> > diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
> > index 01e4dfc4babd..6b1d2e0d9d14 100644
> > --- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
> > +++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
> > @@ -1781,6 +1781,10 @@ pcie0: pcie@1c00000 {
> > <0 0 0 3 &intc 0 0 0 151 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
> > <0 0 0 4 &intc 0 0 0 152 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
> > + interconnects = <&pcie_noc MASTER_PCIE_0 0 &mc_virt SLAVE_EBI1 0>,
>
> Please use QCOM_ICC_TAG_ALWAYS.
>
> > + <&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_PCIE_0 0>;
>
> And this path could presumably be demoted to QCOM_ICC_TAG_ACTIVE_ONLY?
>

I think it should be fine since there would be no register access done while the
RPMh is put into sleep state. Krishna, can you confirm that by executing the CX
shutdown with QCOM_ICC_TAG_ACTIVE_ONLY vote for cpu-pcie path on any supported
platform?

But if we do such change, then it should also be applied to other SoCs.

- Mani

--
மணிவண்ணன் சதாசிவம்

2024-04-05 08:23:33

by Manivannan Sadhasivam

Subject: Re: [PATCH v8 7/7] PCI: qcom: Add OPP support to scale performance state of power domain

On Tue, Mar 05, 2024 at 04:44:20PM +0530, Krishna Chaitanya Chundru wrote:
>
>
> On 3/4/2024 11:35 PM, Manivannan Sadhasivam wrote:
> > On Sat, Mar 02, 2024 at 09:30:01AM +0530, Krishna chaitanya chundru wrote:
> > > QCOM Resource Power Manager-hardened (RPMh) is a hardware block which
> > > maintains hardware state of a regulator by performing max aggregation of
> > > the requests made by all of the clients.
> > >
> > > PCIe controller can operate on different RPMh performance state of power
> > > domain based on the speed of the link. And this performance state varies
> > > from target to target, like some controllers support GEN3 in NOM (Nominal)
> > > voltage corner, while some other supports GEN3 in low SVS (static voltage
> > > scaling).
> > >
> > > The SoC can be more power efficient if we scale the performance state
> > > based on the aggregate PCIe link bandwidth.
> > >
> > > Add Operating Performance Points (OPP) support to vote for RPMh state based
> > > on the aggregate link bandwidth.
> > >
> > > OPP can handle ICC bw voting also, so move ICC bw voting through OPP
> > > framework if OPP entries are present.
> > >
> > > Different link configurations may share the same aggregate bandwidth,
> > > e.g., a 2.5 GT/s x2 link and a 5.0 GT/s x1 link have the same bandwidth
> > > and share the same OPP entry.
> > >
> > > As we are moving ICC voting as part of OPP, don't initialize ICC if OPP
> > > is supported.
> > >
> > > Signed-off-by: Krishna chaitanya chundru <[email protected]>
> > > ---
> > > drivers/pci/controller/dwc/pcie-qcom.c | 81 +++++++++++++++++++++++++++-------
> > > 1 file changed, 66 insertions(+), 15 deletions(-)
> > >
> > > diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
> > > index a0266bfe71f1..2ec14bfafcfc 100644
> > > --- a/drivers/pci/controller/dwc/pcie-qcom.c
> > > +++ b/drivers/pci/controller/dwc/pcie-qcom.c

[...]

> > > static int qcom_pcie_link_transition_count(struct seq_file *s, void *data)
> > > @@ -1472,8 +1491,10 @@ static void qcom_pcie_init_debugfs(struct qcom_pcie *pcie)
> > > static int qcom_pcie_probe(struct platform_device *pdev)
> > > {
> > > const struct qcom_pcie_cfg *pcie_cfg;
> > > + unsigned long max_freq = INT_MAX;
> > > struct device *dev = &pdev->dev;
> > > struct qcom_pcie *pcie;
> > > + struct dev_pm_opp *opp;
> > > struct dw_pcie_rp *pp;
> > > struct resource *res;
> > > struct dw_pcie *pci;
> > > @@ -1540,9 +1561,36 @@ static int qcom_pcie_probe(struct platform_device *pdev)
> > > goto err_pm_runtime_put;
> > > }
> > > - ret = qcom_pcie_icc_init(pcie);
> > > - if (ret)
> > > + /* OPP table is optional */
> > > + ret = devm_pm_opp_of_add_table(dev);
> > > + if (ret && ret != -ENODEV) {
> > > + dev_err_probe(dev, ret, "Failed to add OPP table\n");
> > > goto err_pm_runtime_put;
> > > + }
> > > +
> > > + /*
> > > + * Use highest OPP here if the OPP table is present. At the end of
> >
> > Why highest opp? For ICC, we set minimal bandwidth before.
> >
> In OPP we are voting for both ICC and voltage corner also, if we didn't vote
> for maximum voltage core the PCIe link may not come in maximum supported
> speed. Due to that we are voting for Maximum value.
>

Okay, then this information should be part of the comment.

- Mani

--
மணிவண்ணன் சதாசிவம்

2024-04-05 08:30:09

by Manivannan Sadhasivam

Subject: Re: [PATCH v8 3/7] PCI: qcom: Add ICC bandwidth vote for CPU to PCIe path

On Tue, Mar 05, 2024 at 04:23:21PM +0530, Krishna Chaitanya Chundru wrote:
>
>
> On 3/4/2024 11:11 PM, Manivannan Sadhasivam wrote:
> > On Sat, Mar 02, 2024 at 09:29:57AM +0530, Krishna chaitanya chundru wrote:
> > > To access PCIe registers, PCIe BAR space, config space the CPU-PCIe
> > > ICC (interconnect consumers) path should be voted otherwise it may
> > > lead to NoC (Network on chip) timeout. We are surviving because of
> > > other driver vote for this path.
> > >
> > > As there is less access on this path compared to PCIe to mem path
> > > add minimum vote i.e 1KBps bandwidth always.
> >
> > Please add the info that 1KBps is what shared by the HW team.
> >
> Ack to all the comments
> > >
> > > When suspending, disable this path after register space access
> > > is done.
> > >
> > > Reviewed-by: Bryan O'Donoghue <[email protected]>
> > > Signed-off-by: Krishna chaitanya chundru <[email protected]>
> > > ---
> > > drivers/pci/controller/dwc/pcie-qcom.c | 38 ++++++++++++++++++++++++++++++++--
> > > 1 file changed, 36 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
> > > index 10f2d0bb86be..a0266bfe71f1 100644
> > > --- a/drivers/pci/controller/dwc/pcie-qcom.c
> > > +++ b/drivers/pci/controller/dwc/pcie-qcom.c

[...]

> > > + ret = icc_disable(pcie->icc_cpu);
> > > + if (ret) {
> > > + dev_err(dev, "failed to disable icc path of cpu-pcie: %d\n", ret);
> >
> > "CPU-PCIe"
> >
> > > + if (pcie->suspended) {
> > > + qcom_pcie_host_init(&pcie->pci->pp);
> >
> > Interesting. So if icc_disable() fails, can the IP continue to function?
> >
> As the ICC already enable before icc_disable() fails, the IP should work.

If icc_disable() fails, then most likely something is wrong with RPMh. How can
the IP continue to work in that case?

- Mani

--
மணிவண்ணன் சதாசிவம்

Subject: Re: [PATCH v8 3/7] PCI: qcom: Add ICC bandwidth vote for CPU to PCIe path



On 4/5/2024 1:59 PM, Manivannan Sadhasivam wrote:
> On Tue, Mar 05, 2024 at 04:23:21PM +0530, Krishna Chaitanya Chundru wrote:
>>
>>
>> On 3/4/2024 11:11 PM, Manivannan Sadhasivam wrote:
>>> On Sat, Mar 02, 2024 at 09:29:57AM +0530, Krishna chaitanya chundru wrote:
>>>> To access PCIe registers, PCIe BAR space, config space the CPU-PCIe
>>>> ICC (interconnect consumers) path should be voted otherwise it may
>>>> lead to NoC (Network on chip) timeout. We are surviving because of
>>>> other driver vote for this path.
>>>>
>>>> As there is less access on this path compared to PCIe to mem path
>>>> add minimum vote i.e 1KBps bandwidth always.
>>>
>>> Please add the info that 1KBps is what shared by the HW team.
>>>
>> Ack to all the comments
>>>>
>>>> When suspending, disable this path after register space access
>>>> is done.
>>>>
>>>> Reviewed-by: Bryan O'Donoghue <[email protected]>
>>>> Signed-off-by: Krishna chaitanya chundru <[email protected]>
>>>> ---
>>>> drivers/pci/controller/dwc/pcie-qcom.c | 38 ++++++++++++++++++++++++++++++++--
>>>> 1 file changed, 36 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
>>>> index 10f2d0bb86be..a0266bfe71f1 100644
>>>> --- a/drivers/pci/controller/dwc/pcie-qcom.c
>>>> +++ b/drivers/pci/controller/dwc/pcie-qcom.c
>
> [...]
>
>>>> + ret = icc_disable(pcie->icc_cpu);
>>>> + if (ret) {
>>>> + dev_err(dev, "failed to disable icc path of cpu-pcie: %d\n", ret);
>>>
>>> "CPU-PCIe"
>>>
>>>> + if (pcie->suspended) {
>>>> + qcom_pcie_host_init(&pcie->pci->pp);
>>>
>>> Interesting. So if icc_disable() fails, can the IP continue to function?
>>>
>> As the ICC already enable before icc_disable() fails, the IP should work.
>
> If icc_disable() fails, then most likely something is wrong with RPMh. How can
> the IP continue to work in that case?
>
OK, then I will log the error and return here.

- Krishna Chaitanya.

> - Mani
>

Subject: Re: [PATCH v8 2/7] arm64: dts: qcom: sm8450: Add interconnect path to PCIe node



On 4/5/2024 1:10 PM, Manivannan Sadhasivam wrote:
> On Wed, Mar 06, 2024 at 05:04:54PM +0100, Konrad Dybcio wrote:
>>
>>
>> On 3/2/24 04:59, Krishna chaitanya chundru wrote:
>>> Add pcie-mem & cpu-pcie interconnect path to the PCIe nodes.
>>>
>>> Reviewed-by: Manivannan Sadhasivam <[email protected]>
>>> Signed-off-by: Krishna chaitanya chundru <[email protected]>
>>> ---
>>> arch/arm64/boot/dts/qcom/sm8450.dtsi | 8 ++++++++
>>> 1 file changed, 8 insertions(+)
>>>
>>> diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
>>> index 01e4dfc4babd..6b1d2e0d9d14 100644
>>> --- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
>>> +++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
>>> @@ -1781,6 +1781,10 @@ pcie0: pcie@1c00000 {
>>> <0 0 0 3 &intc 0 0 0 151 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
>>> <0 0 0 4 &intc 0 0 0 152 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
>>> + interconnects = <&pcie_noc MASTER_PCIE_0 0 &mc_virt SLAVE_EBI1 0>,
>>
>> Please use QCOM_ICC_TAG_ALWAYS.
>>
>>> + <&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_PCIE_0 0>;
>>
>> And this path could presumably be demoted to QCOM_ICC_TAG_ACTIVE_ONLY?
>>
>
> I think it should be fine since there would be no register access done while the
> RPMh is put into sleep state. Krishna, can you confirm that by executing the CX
> shutdown with QCOM_ICC_TAG_ACTIVE_ONLY vote for cpu-pcie path on any supported
> platform?
>
> But if we do such change, then it should also be applied to other SoCs.
>
> - Mani
>
We don't have a platform to test this now; we will keep
QCOM_ICC_TAG_ALWAYS for now.

- Krishna Chaitanya.