This series adds OPP support to vote for the performance state of the RPMh
power domain based on the speed at which the PCIe link got enumerated.
QCOM Resource Power Manager-hardened (RPMh) is a hardware block which
maintains the hardware state of a regulator by performing max aggregation of
the requests made by all of the processors.
The PCIe controller can operate at different RPMh performance states of the
power domain based on the speed of the link, and this performance state
varies from target to target.
The SoC can run under optimal power conditions when the performance state is
scaled based on the speed at which the PCIe link operates.
Add Operating Performance Points (OPP) support to vote for the RPMh state
based on the GEN speed at which the link is operating.
Before link up, the PCIe driver votes for the maximum performance state.
Now that the ICC bandwidth vote is being added to the OPP table, and the ICC
bandwidth depends on both the GEN speed and the link width, using opp-level
to index the OPP entries would be difficult.
Certain PCIe link configurations, such as GEN1x2 & GEN2x1 or GEN3x2 &
GEN4x1, share the same aggregate bandwidth and hence the same ICC vote, so
using frequency in the OPP table to represent the aggregate link bandwidth
reduces the number of OPP entries.
So this series goes back to using frequency in the OPP table instead of
level, as sketched below.
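For illustration, a minimal sketch of the frequency-based OPP lookup this
series implements (the helpers and the lookup itself come from patches 6
and 7; error handling is trimmed):

	u32 status = readw(pci->dbi_base + offset + PCI_EXP_LNKSTA);
	int speed = FIELD_GET(PCI_EXP_LNKSTA_CLS, status);
	int width = FIELD_GET(PCI_EXP_LNKSTA_NLW, status);
	struct dev_pm_opp *opp;
	unsigned long freq;

	/* Per-lane frequency, e.g. 8000000 for a GEN3 (8.0 GT/s) link */
	freq = PCIE_MBS2FREQ(pcie_link_speed[speed]);

	/* Scaling by width makes GEN1x2 and GEN2x1 hit the same entry */
	opp = dev_pm_opp_find_freq_exact(pci->dev, freq * width, true);
	if (!IS_ERR(opp)) {
		dev_pm_opp_set_opp(pci->dev, opp); /* RPMh state + ICC bw */
		dev_pm_opp_put(opp);
	}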
Signed-off-by: Krishna chaitanya chundru <[email protected]>
---
Changes from v6:
- Changed the CPU-PCIe bandwidth to 1 KBps as suggested by the HW team.
- Created a new API to get the frequency based on the PCIe speed, as
  suggested by Mani.
- Updated a few commit texts and comments.
- Set the OPP to NULL in suspend to remove any votes.
- Link for v6: https://lore.kernel.org/linux-arm-msm/[email protected]/
Changes from v5:
- Added the ICC BW vote as part of OPP and rebased onto the latest kernel.
  Since only one of OPP or ICC BW voting is supported now, removed the
  patch that returned an error from the ICC OPP update.
- As the ICC BW vote was added to the OPP table, the Reviewed-by tags given
  on the previous version are not carried over.
- Used the OPP frequency to find OPP entries, as the PCIe link width now
  also needs to be taken into consideration.
- Added the CPU-PCIe BW vote, which was not present until now.
- Dropped "PCI: qcom: Return error from 'qcom_pcie_icc_update'", as only
  one of OPP or ICC BW voting executes and there is no need to fail if the
  update fails.
- Link for v5: https://lore.kernel.org/linux-arm-msm/20231101063323.GH2897@thinkpad/T/
Changes from v4:
- Added a separate patch for returning an error from qcom_pcie_icc_update()
  and moved the OPP update logic into the ICC update, using a bool variable
  to update the OPP.
- Addressed comments made by Pavan.
Changes from v3:
- Removed the OPP vote on suspend when the link is not up, and added debug
  prints as suggested by Pavan.
- Added the dev_pm_opp_find_level_floor() API to find the highest OPP to
  vote for.
Changes from v2:
- Switched from a frequency-based OPP search to a level-based one, as
  suggested by Dmitry Baryshkov.
Changes from v1:
- Addressed comments from Krzysztof Kozlowski.
- Added the rpmhpd_opp_xxx phandle as suggested by Pavan.
- Added the dev_pm_opp_set_opp() API call which was missed in the previous
  patch.
---
Krishna chaitanya chundru (7):
dt-bindings: PCI: qcom: Add interconnects path as required property
arm64: dts: qcom: sm8450: Add interconnect path to PCIe node
PCI: qcom: Add ICC bandwidth vote for CPU to PCIe path
dt-bindings: pci: qcom: Add opp table
arm64: dts: qcom: sm8450: Add opp table support to PCIe
PCI: Bring out the pcie link speed to MBps logic to new function
PCI: qcom: Add OPP support to scale performance state of power domain
.../devicetree/bindings/pci/qcom,pcie.yaml | 6 ++
arch/arm64/boot/dts/qcom/sm8450.dtsi | 82 +++++++++++++++
drivers/pci/controller/dwc/pcie-qcom.c | 110 ++++++++++++++++++---
drivers/pci/pci.c | 19 +---
drivers/pci/pci.h | 24 +++++
5 files changed, 208 insertions(+), 33 deletions(-)
---
base-commit: 6613476e225e090cc9aad49be7fa504e290dd33d
change-id: 20240222-opp_support-19a0c53be1f4
Best regards,
--
Krishna chaitanya chundru <[email protected]>
Add the interconnects path as a required property for the sm8450 platform.
Reviewed-by: Manivannan Sadhasivam <[email protected]>
Acked-by: Rob Herring <[email protected]>
Signed-off-by: Krishna chaitanya chundru <[email protected]>
---
Documentation/devicetree/bindings/pci/qcom,pcie.yaml | 2 ++
1 file changed, 2 insertions(+)
diff --git a/Documentation/devicetree/bindings/pci/qcom,pcie.yaml b/Documentation/devicetree/bindings/pci/qcom,pcie.yaml
index a93ab3b54066..5ad5c4cfd2a8 100644
--- a/Documentation/devicetree/bindings/pci/qcom,pcie.yaml
+++ b/Documentation/devicetree/bindings/pci/qcom,pcie.yaml
@@ -834,6 +834,8 @@ allOf:
- qcom,pcie-sa8540p
- qcom,pcie-sa8775p
- qcom,pcie-sc8280xp
+ - qcom,pcie-sm8450-pcie0
+ - qcom,pcie-sm8450-pcie1
then:
required:
- interconnects
--
2.42.0
Add the pcie-mem & cpu-pcie interconnect paths to the PCIe nodes.
Reviewed-by: Manivannan Sadhasivam <[email protected]>
Signed-off-by: Krishna chaitanya chundru <[email protected]>
---
arch/arm64/boot/dts/qcom/sm8450.dtsi | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
index 01e4dfc4babd..6b1d2e0d9d14 100644
--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
@@ -1781,6 +1781,10 @@ pcie0: pcie@1c00000 {
<0 0 0 3 &intc 0 0 0 151 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
<0 0 0 4 &intc 0 0 0 152 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
+ interconnects = <&pcie_noc MASTER_PCIE_0 0 &mc_virt SLAVE_EBI1 0>,
+ <&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_PCIE_0 0>;
+ interconnect-names = "pcie-mem", "cpu-pcie";
+
clocks = <&gcc GCC_PCIE_0_PIPE_CLK>,
<&gcc GCC_PCIE_0_PIPE_CLK_SRC>,
<&pcie0_phy>,
@@ -1890,6 +1894,10 @@ pcie1: pcie@1c08000 {
<0 0 0 3 &intc 0 0 0 438 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
<0 0 0 4 &intc 0 0 0 439 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
+ interconnects = <&pcie_noc MASTER_PCIE_1 0 &mc_virt SLAVE_EBI1 0>,
+ <&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_PCIE_1 0>;
+ interconnect-names = "pcie-mem", "cpu-pcie";
+
clocks = <&gcc GCC_PCIE_1_PIPE_CLK>,
<&gcc GCC_PCIE_1_PIPE_CLK_SRC>,
<&pcie1_phy>,
--
2.42.0
To access PCIe registers, PCIe BAR space, and config space, the CPU-PCIe
ICC (interconnect consumers) path should be voted for; otherwise it may
lead to a NoC (Network on Chip) timeout. We are currently surviving only
because other drivers vote for this path.
As there is less traffic on this path compared to the PCIe-to-mem path,
add a minimum vote of 1 KBps bandwidth at all times.
When suspending, disable this path after the register space access is
done.
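For reference, a minimal sketch of the votes this patch adds (all taken
from the diff below; error handling is trimmed):

	/* probe: get the cpu-pcie path and keep a nominal 1 KBps vote */
	pcie->icc_cpu = devm_of_icc_get(pci->dev, "cpu-pcie");
	icc_set_bw(pcie->icc_cpu, 0, kBps_to_icc(1));

	/* suspend: drop the vote once all register access is finished */
	icc_disable(pcie->icc_cpu);

	/* resume: restore the vote before touching the controller again */
	icc_enable(pcie->icc_cpu);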
Reviewed-by: Bryan O'Donoghue <[email protected]>
Signed-off-by: Krishna chaitanya chundru <[email protected]>
---
drivers/pci/controller/dwc/pcie-qcom.c | 37 ++++++++++++++++++++++++++++++++--
1 file changed, 35 insertions(+), 2 deletions(-)
diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
index 10f2d0bb86be..088ebd2e5865 100644
--- a/drivers/pci/controller/dwc/pcie-qcom.c
+++ b/drivers/pci/controller/dwc/pcie-qcom.c
@@ -240,6 +240,7 @@ struct qcom_pcie {
struct phy *phy;
struct gpio_desc *reset;
struct icc_path *icc_mem;
+ struct icc_path *icc_cpu;
const struct qcom_pcie_cfg *cfg;
struct dentry *debugfs;
bool suspended;
@@ -1372,6 +1373,9 @@ static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
if (IS_ERR(pcie->icc_mem))
return PTR_ERR(pcie->icc_mem);
+ pcie->icc_cpu = devm_of_icc_get(pci->dev, "cpu-pcie");
+ if (IS_ERR(pcie->icc_cpu))
+ return PTR_ERR(pcie->icc_cpu);
/*
* Some Qualcomm platforms require interconnect bandwidth constraints
* to be set before enabling interconnect clocks.
@@ -1381,7 +1385,18 @@ static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
*/
ret = icc_set_bw(pcie->icc_mem, 0, QCOM_PCIE_LINK_SPEED_TO_BW(1));
if (ret) {
- dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n",
+ dev_err(pci->dev, "failed to set interconnect bandwidth for pcie-mem: %d\n",
+ ret);
+ return ret;
+ }
+
+ /*
+ * The config space, BAR space and registers goes through cpu-pcie path.
+ * Set peak bandwidth to 1KBps as recommended by HW team for this path all the time.
+ */
+ ret = icc_set_bw(pcie->icc_cpu, 0, kBps_to_icc(1));
+ if (ret) {
+ dev_err(pci->dev, "failed to set interconnect bandwidth for cpu-pcie: %d\n",
ret);
return ret;
}
@@ -1573,7 +1588,7 @@ static int qcom_pcie_suspend_noirq(struct device *dev)
*/
ret = icc_set_bw(pcie->icc_mem, 0, kBps_to_icc(1));
if (ret) {
- dev_err(dev, "Failed to set interconnect bandwidth: %d\n", ret);
+ dev_err(dev, "Failed to set interconnect bandwidth for pcie-mem: %d\n", ret);
return ret;
}
@@ -1597,6 +1612,18 @@ static int qcom_pcie_suspend_noirq(struct device *dev)
pcie->suspended = true;
}
+ /* Remove cpu path vote after all the register access is done */
+ ret = icc_disable(pcie->icc_cpu);
+ if (ret) {
+ dev_err(dev, "failed to disable icc path of cpu-pcie: %d\n", ret);
+ if (pcie->suspended) {
+ qcom_pcie_host_init(&pcie->pci->pp);
+ pcie->suspended = false;
+ }
+ qcom_pcie_icc_opp_update(pcie);
+ return ret;
+ }
+
return 0;
}
@@ -1605,6 +1632,12 @@ static int qcom_pcie_resume_noirq(struct device *dev)
struct qcom_pcie *pcie = dev_get_drvdata(dev);
int ret;
+ ret = icc_enable(pcie->icc_cpu);
+ if (ret) {
+ dev_err(dev, "failed to enable icc path of cpu-pcie: %d\n", ret);
+ return ret;
+ }
+
if (pcie->suspended) {
ret = qcom_pcie_host_init(&pcie->pci->pp);
if (ret)
--
2.42.0
PCIe needs to choose the appropriate performance state of the RPMh power
domain based on the PCIe GEN speed.
Adding the Operating Performance Points table allows adjusting the power
domain performance state and ICC peak bandwidth depending on the PCIe GEN
speed and width.
Acked-by: Manivannan Sadhasivam <[email protected]>
Reviewed-by: Krzysztof Kozlowski <[email protected]>
Signed-off-by: Krishna chaitanya chundru <[email protected]>
---
Documentation/devicetree/bindings/pci/qcom,pcie.yaml | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/Documentation/devicetree/bindings/pci/qcom,pcie.yaml b/Documentation/devicetree/bindings/pci/qcom,pcie.yaml
index 5ad5c4cfd2a8..e1d75cabb1a9 100644
--- a/Documentation/devicetree/bindings/pci/qcom,pcie.yaml
+++ b/Documentation/devicetree/bindings/pci/qcom,pcie.yaml
@@ -127,6 +127,10 @@ properties:
description: GPIO controlled connection to WAKE# signal
maxItems: 1
+ operating-points-v2: true
+ opp-table:
+ type: object
+
required:
- compatible
- reg
--
2.42.0
PCIe needs to choose the appropriate performance state of the RPMh power
domain and interconnect bandwidth based on the PCIe GEN speed.
Add the OPP table to specify the RPMh performance states and interconnect
peak bandwidths.
Signed-off-by: Krishna chaitanya chundru <[email protected]>
---
arch/arm64/boot/dts/qcom/sm8450.dtsi | 74 ++++++++++++++++++++++++++++++++++++
1 file changed, 74 insertions(+)
diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
index 6b1d2e0d9d14..662f2129f20d 100644
--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
@@ -1827,7 +1827,32 @@ pcie0: pcie@1c00000 {
pinctrl-names = "default";
pinctrl-0 = <&pcie0_default_state>;
+ operating-points-v2 = <&pcie0_opp_table>;
+
status = "disabled";
+
+ pcie0_opp_table: opp-table {
+ compatible = "operating-points-v2";
+
+ opp-2500000 {
+ opp-hz = /bits/ 64 <2500000>;
+ required-opps = <&rpmhpd_opp_low_svs>;
+ opp-peak-kBps = <250000 1>;
+ };
+
+ opp-5000000 {
+ opp-hz = /bits/ 64 <5000000>;
+ required-opps = <&rpmhpd_opp_low_svs>;
+ opp-peak-kBps = <500000 1>;
+ };
+
+ opp-8000000 {
+ opp-hz = /bits/ 64 <8000000>;
+ required-opps = <&rpmhpd_opp_nom>;
+ opp-peak-kBps = <984500 1>;
+ };
+ };
+
};
pcie0_phy: phy@1c06000 {
@@ -1938,7 +1963,56 @@ pcie1: pcie@1c08000 {
pinctrl-names = "default";
pinctrl-0 = <&pcie1_default_state>;
+ operating-points-v2 = <&pcie1_opp_table>;
+
status = "disabled";
+
+ pcie1_opp_table: opp-table {
+ compatible = "operating-points-v2";
+
+ /* GEN 1x1 */
+ opp-2500000 {
+ opp-hz = /bits/ 64 <2500000>;
+ required-opps = <&rpmhpd_opp_low_svs>;
+ opp-peak-kBps = <250000 1>;
+ };
+
+ /* GEN 1x2 GEN 2x1 */
+ opp-5000000 {
+ opp-hz = /bits/ 64 <5000000>;
+ required-opps = <&rpmhpd_opp_low_svs>;
+ opp-peak-kBps = <500000 1>;
+ };
+
+ /* GEN 2x2 */
+ opp-10000000 {
+ opp-hz = /bits/ 64 <10000000>;
+ required-opps = <&rpmhpd_opp_low_svs>;
+ opp-peak-kBps = <1000000 1>;
+ };
+
+ /* GEN 3x1 */
+ opp-8000000 {
+ opp-hz = /bits/ 64 <8000000>;
+ required-opps = <&rpmhpd_opp_nom>;
+ opp-peak-kBps = <984500 1>;
+ };
+
+ /* GEN 3x2 GEN 4x1 */
+ opp-16000000 {
+ opp-hz = /bits/ 64 <16000000>;
+ required-opps = <&rpmhpd_opp_nom>;
+ opp-peak-kBps = <1969000 1>;
+ };
+
+ /* GEN 4x2 */
+ opp-32000000 {
+ opp-hz = /bits/ 64 <32000000>;
+ required-opps = <&rpmhpd_opp_nom>;
+ opp-peak-kBps = <3938000 1>;
+ };
+ };
+
};
pcie1_phy: phy@1c0e000 {
--
2.42.0
Move the switch case in pcie_link_speed_mbps() to a new function,
pcie_link_speed_to_mbps(), in the header file so that it can be used in
other places, such as controller drivers.
Create a new macro, PCIE_MBS2FREQ, to convert a link speed to a frequency.
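A usage sketch of the new helper and macro (both defined in the diff
below; pcie_link_speed[] is the existing LNKSTA-to-speed lookup table in
the PCI core, and lnksta is assumed to hold a Link Status register value):

	enum pci_bus_speed speed =
		pcie_link_speed[FIELD_GET(PCI_EXP_LNKSTA_CLS, lnksta)];

	int mbps = pcie_link_speed_to_mbps(speed); /* 8000 for 8.0 GT/s */
	long freq = PCIE_MBS2FREQ(speed);	   /* 8000 * 1000 = 8000000 */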
Signed-off-by: Krishna chaitanya chundru <[email protected]>
---
drivers/pci/pci.c | 19 +------------------
drivers/pci/pci.h | 24 ++++++++++++++++++++++++
2 files changed, 25 insertions(+), 18 deletions(-)
diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index d8f11a078924..b441ab862a8d 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -6309,24 +6309,7 @@ int pcie_link_speed_mbps(struct pci_dev *pdev)
if (err)
return err;
- switch (to_pcie_link_speed(lnksta)) {
- case PCIE_SPEED_2_5GT:
- return 2500;
- case PCIE_SPEED_5_0GT:
- return 5000;
- case PCIE_SPEED_8_0GT:
- return 8000;
- case PCIE_SPEED_16_0GT:
- return 16000;
- case PCIE_SPEED_32_0GT:
- return 32000;
- case PCIE_SPEED_64_0GT:
- return 64000;
- default:
- break;
- }
-
- return -EINVAL;
+ return pcie_link_speed_to_mbps(to_pcie_link_speed(lnksta));
}
EXPORT_SYMBOL(pcie_link_speed_mbps);
diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
index 2336a8d1edab..82e715ebe383 100644
--- a/drivers/pci/pci.h
+++ b/drivers/pci/pci.h
@@ -282,6 +282,30 @@ void pci_bus_put(struct pci_bus *bus);
(speed) == PCIE_SPEED_2_5GT ? 2500*8/10 : \
0)
+static inline int pcie_link_speed_to_mbps(enum pci_bus_speed speed)
+{
+ switch (speed) {
+ case PCIE_SPEED_2_5GT:
+ return 2500;
+ case PCIE_SPEED_5_0GT:
+ return 5000;
+ case PCIE_SPEED_8_0GT:
+ return 8000;
+ case PCIE_SPEED_16_0GT:
+ return 16000;
+ case PCIE_SPEED_32_0GT:
+ return 32000;
+ case PCIE_SPEED_64_0GT:
+ return 64000;
+ default:
+ break;
+ }
+
+ return -EINVAL;
+}
+
+#define PCIE_MBS2FREQ(speed) (pcie_link_speed_to_mbps(speed) * 1000)
+
const char *pci_speed_string(enum pci_bus_speed speed);
enum pci_bus_speed pcie_get_speed_cap(struct pci_dev *dev);
enum pcie_link_width pcie_get_width_cap(struct pci_dev *dev);
--
2.42.0
QCOM Resource Power Manager-hardened (RPMh) is a hardware block which
maintains the hardware state of a regulator by performing max aggregation of
the requests made by all of the clients.
The PCIe controller can operate at different RPMh performance states of the
power domain based on the speed of the link, and this performance state
varies from target to target.
The SoC can be more power efficient if the performance state is scaled
based on the aggregate PCIe link bandwidth.
Add Operating Performance Points (OPP) support to vote for the RPMh state
based on the aggregate bandwidth at which the link is operating.
OPP can also handle the ICC bandwidth vote, so move the ICC bandwidth
voting to the OPP framework when OPP entries are present.
Different link configurations may share the same aggregate bandwidth,
e.g., a 2.5 GT/s x2 link and a 5.0 GT/s x1 link have the same bandwidth
and share the same OPP entry, so use a frequency-based search to reduce
the number of entries in the OPP table.
Don't initialize ICC if OPP is supported, as the ICC bandwidth vote is
then handled by the OPP framework.
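For illustration, a minimal sketch of the probe-time and suspend-time OPP
handling added here (taken from the diff below; error handling is
trimmed):

	/* probe: vote for the highest available OPP until the link trains */
	unsigned long max_freq = INT_MAX;
	struct dev_pm_opp *opp = dev_pm_opp_find_freq_floor(dev, &max_freq);

	if (!IS_ERR(opp)) {
		dev_pm_opp_set_opp(dev, opp);
		dev_pm_opp_put(opp);
	}

	/* suspend: setting a NULL OPP drops both the RPMh and ICC votes */
	dev_pm_opp_set_opp(dev, NULL);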
Signed-off-by: Krishna chaitanya chundru <[email protected]>
---
drivers/pci/controller/dwc/pcie-qcom.c | 75 +++++++++++++++++++++++++++-------
1 file changed, 61 insertions(+), 14 deletions(-)
diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
index 088ebd2e5865..c608bec8b9cb 100644
--- a/drivers/pci/controller/dwc/pcie-qcom.c
+++ b/drivers/pci/controller/dwc/pcie-qcom.c
@@ -22,6 +22,7 @@
#include <linux/of.h>
#include <linux/of_gpio.h>
#include <linux/pci.h>
+#include <linux/pm_opp.h>
#include <linux/pm_runtime.h>
#include <linux/platform_device.h>
#include <linux/phy/pcie.h>
@@ -244,6 +245,7 @@ struct qcom_pcie {
const struct qcom_pcie_cfg *cfg;
struct dentry *debugfs;
bool suspended;
+ bool opp_supported;
};
#define to_qcom_pcie(x) dev_get_drvdata((x)->dev)
@@ -1404,16 +1406,14 @@ static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
return 0;
}
-static void qcom_pcie_icc_update(struct qcom_pcie *pcie)
+static void qcom_pcie_icc_opp_update(struct qcom_pcie *pcie)
{
struct dw_pcie *pci = pcie->pci;
- u32 offset, status;
+ u32 offset, status, freq;
+ struct dev_pm_opp *opp;
int speed, width;
int ret;
- if (!pcie->icc_mem)
- return;
-
offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
status = readw(pci->dbi_base + offset + PCI_EXP_LNKSTA);
@@ -1424,11 +1424,26 @@ static void qcom_pcie_icc_update(struct qcom_pcie *pcie)
speed = FIELD_GET(PCI_EXP_LNKSTA_CLS, status);
width = FIELD_GET(PCI_EXP_LNKSTA_NLW, status);
- ret = icc_set_bw(pcie->icc_mem, 0, width * QCOM_PCIE_LINK_SPEED_TO_BW(speed));
- if (ret) {
- dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n",
- ret);
+ if (pcie->opp_supported) {
+ freq = PCIE_MBS2FREQ(pcie_link_speed[speed]);
+
+ opp = dev_pm_opp_find_freq_exact(pci->dev, freq * width, true);
+ if (!IS_ERR(opp)) {
+ ret = dev_pm_opp_set_opp(pci->dev, opp);
+ if (ret)
+ dev_err(pci->dev, "Failed to set opp: freq %ld ret %d\n",
+ dev_pm_opp_get_freq(opp), ret);
+ dev_pm_opp_put(opp);
+ }
+ } else {
+ ret = icc_set_bw(pcie->icc_mem, 0, width * QCOM_PCIE_LINK_SPEED_TO_BW(speed));
+ if (ret) {
+ dev_err(pci->dev, "failed to set interconnect bandwidth for pcie-mem: %d\n",
+ ret);
+ }
}
+
+ return;
}
static int qcom_pcie_link_transition_count(struct seq_file *s, void *data)
@@ -1471,8 +1486,10 @@ static void qcom_pcie_init_debugfs(struct qcom_pcie *pcie)
static int qcom_pcie_probe(struct platform_device *pdev)
{
const struct qcom_pcie_cfg *pcie_cfg;
+ unsigned long max_freq = INT_MAX;
struct device *dev = &pdev->dev;
struct qcom_pcie *pcie;
+ struct dev_pm_opp *opp;
struct dw_pcie_rp *pp;
struct resource *res;
struct dw_pcie *pci;
@@ -1539,9 +1556,36 @@ static int qcom_pcie_probe(struct platform_device *pdev)
goto err_pm_runtime_put;
}
- ret = qcom_pcie_icc_init(pcie);
- if (ret)
+ /* OPP table is optional */
+ ret = devm_pm_opp_of_add_table(dev);
+ if (ret && ret != -ENODEV) {
+ dev_err_probe(dev, ret, "Failed to add OPP table\n");
goto err_pm_runtime_put;
+ }
+
+ /*
+ * Use highest OPP here if the OPP table is present. At the end of the probe(),
+ * OPP will be updated using qcom_pcie_icc_opp_update().
+ */
+ if (ret != -ENODEV) {
+ opp = dev_pm_opp_find_freq_floor(dev, &max_freq);
+ if (!IS_ERR(opp)) {
+ ret = dev_pm_opp_set_opp(dev, opp);
+ if (ret)
+ dev_err_probe(pci->dev, ret,
+ "Failed to set opp: freq %ld\n",
+ dev_pm_opp_get_freq(opp));
+ dev_pm_opp_put(opp);
+ }
+ pcie->opp_supported = true;
+ }
+
+ /* Skip ICC init if OPP is supported as ICC bw vote is handled by OPP framework */
+ if (!pcie->opp_supported) {
+ ret = qcom_pcie_icc_init(pcie);
+ if (ret)
+ goto err_pm_runtime_put;
+ }
ret = pcie->cfg->ops->get_resources(pcie);
if (ret)
@@ -1561,7 +1605,7 @@ static int qcom_pcie_probe(struct platform_device *pdev)
goto err_phy_exit;
}
- qcom_pcie_icc_update(pcie);
+ qcom_pcie_icc_opp_update(pcie);
if (pcie->mhi)
qcom_pcie_init_debugfs(pcie);
@@ -1612,7 +1656,7 @@ static int qcom_pcie_suspend_noirq(struct device *dev)
pcie->suspended = true;
}
- /* Remove cpu path vote after all the register access is done */
+ /* Remove CPU path vote after all the register access is done */
ret = icc_disable(pcie->icc_cpu);
if (ret) {
dev_err(dev, "failed to disable icc path of cpu-pcie: %d\n", ret);
@@ -1624,6 +1668,9 @@ static int qcom_pcie_suspend_noirq(struct device *dev)
return ret;
}
+ if (pcie->opp_supported)
+ dev_pm_opp_set_opp(pcie->pci->dev, NULL);
+
return 0;
}
@@ -1646,7 +1693,7 @@ static int qcom_pcie_resume_noirq(struct device *dev)
pcie->suspended = false;
}
- qcom_pcie_icc_update(pcie);
+ qcom_pcie_icc_opp_update(pcie);
return 0;
}
--
2.42.0
On 23.02.2024 15:48, Krishna chaitanya chundru wrote:
> To access PCIe registers, PCIe BAR space, config space the CPU-PCIe
> ICC(interconnect consumers) path should be voted otherwise it may
> lead to NoC(Network on chip) timeout. We are surviving because of
> other driver vote for this path.
> As there is less access on this path compared to PCIe to mem path
> add minimum vote i.e 1KBps bandwidth always.
>
> In suspend remove the disable this path after register space access
> is done.
>
> Reviewed-by: Bryan O'Donoghue <[email protected]>
> Signed-off-by: Krishna chaitanya chundru <[email protected]>
> ---
[...]
>
> + /* Remove cpu path vote after all the register access is done */
> + ret = icc_disable(pcie->icc_cpu);
> + if (ret) {
> + dev_err(dev, "failed to disable icc path of cpu-pcie: %d\n", ret);
> + if (pcie->suspended) {
> + qcom_pcie_host_init(&pcie->pci->pp);
> + pcie->suspended = false;
> + }
> + qcom_pcie_icc_opp_update(pcie);
This doesn't compile (you rename it in patch 6, this is patch 3)
Konrad
On 2/24/2024 5:32 AM, Konrad Dybcio wrote:
> On 23.02.2024 15:48, Krishna chaitanya chundru wrote:
>> To access PCIe registers, PCIe BAR space, config space the CPU-PCIe
>> ICC(interconnect consumers) path should be voted otherwise it may
>> lead to NoC(Network on chip) timeout. We are surviving because of
>> other driver vote for this path.
>> As there is less access on this path compared to PCIe to mem path
>> add minimum vote i.e 1KBps bandwidth always.
>>
>> In suspend remove the disable this path after register space access
>> is done.
>>
>> Reviewed-by: Bryan O'Donoghue <[email protected]>
>> Signed-off-by: Krishna chaitanya chundru <[email protected]>
>> ---
>
> [...]
>
>>
>> + /* Remove cpu path vote after all the register access is done */
>> + ret = icc_disable(pcie->icc_cpu);
>> + if (ret) {
>> + dev_err(dev, "failed to disable icc path of cpu-pcie: %d\n", ret);
>> + if (pcie->suspended) {
>> + qcom_pcie_host_init(&pcie->pci->pp);
>> + pcie->suspended = false;
>> + }
>> + qcom_pcie_icc_opp_update(pcie);
>
> This doesn't compile (you rename it in patch 6, this is patch 3)
>
> Konrad
>
I will fix this in my next series.
- Krishna Chaitanya.
On Fri, Feb 23, 2024 at 08:18:00PM +0530, Krishna chaitanya chundru wrote:
> To access PCIe registers, PCIe BAR space, config space the CPU-PCIe
> ICC(interconnect consumers) path should be voted otherwise it may
> lead to NoC(Network on chip) timeout. We are surviving because of
> other driver vote for this path.
> As there is less access on this path compared to PCIe to mem path
> add minimum vote i.e 1KBps bandwidth always.
Add blank line between paragraphs or wrap into a single paragraph.
Add space before open paren, e.g., "ICC (interconnect consumers)",
"NoC (Network on Chip)".
> In suspend remove the disable this path after register space access
> is done.
"... remove the disable this path ..." has too many verbs :)
Maybe "When suspending, disable this path ..."?
> + * The config space, BAR space and registers goes through cpu-pcie path.
> + * Set peak bandwidth to 1KBps as recommended by HW team for this path all the time.
Wrap to fit in 80 columns.
> + /* Remove cpu path vote after all the register access is done */
One of the other patches has s/cpu/CPU/ in it. Please do the same
here.
Bjorn
On Fri, Feb 23, 2024 at 08:18:04PM +0530, Krishna chaitanya chundru wrote:
> QCOM Resource Power Manager-hardened (RPMh) is a hardware block which
> maintains hardware state of a regulator by performing max aggregation of
> the requests made by all of the clients.
>
> PCIe controller can operate on different RPMh performance state of power
> domain based up on the speed of the link. And this performance state varies
> from target to target.
s/up on/on/ (or "upon" if you prefer) (also below)
I understand changing the performance state based on the link speed,
but I don't understand the variation from target to target. Do you
mean just that the link speed may vary based on the rates supported by
the downstream device?
> It is manadate to scale the performance state based up on the PCIe speed
> link operates so that SoC can run under optimum power conditions.
It sounds like it's more power efficient, but not actually
*mandatory*. Maybe something like this?
The SoC can be more power efficient if we scale the performance
state based on the aggregate PCIe link speed.
> Add Operating Performance Points(OPP) support to vote for RPMh state based
> upon the speed link is operating.
Space before open paren, e.g., "Points (OPP)".
"... based on the link speed."
> OPP can handle ICC bw voting also, so move ICC bw voting through OPP
> framework if OPP entries are present.
>
> In PCIe certain speeds like GEN1x2 & GEN2x1 or GEN3x2 & GEN4x1 use
> same bw and frequency and thus the OPP entry, so use frequency based
> search to reduce number of entries in the OPP table.
GEN1x2, GEN2x1, etc are not "speeds". I would say:
Different link configurations may share the same aggregate speed,
e.g., a 2.5 GT/s x2 link and a 5.0 GT/s x1 link have the same speed
and share the same OPP entry.
> Don't initialize ICC if OPP is supported.
Because? Maybe this should say something about OPP including the ICC
voting?
> + ret = icc_set_bw(pcie->icc_mem, 0, width * QCOM_PCIE_LINK_SPEED_TO_BW(speed));
Wrap to fit in 80 columns.
> + * Use highest OPP here if the OPP table is present. At the end of the probe(),
> + * OPP will be updated using qcom_pcie_icc_opp_update().
Wrap to fit in 80 columns.
> + /* Skip ICC init if OPP is supported as ICC bw vote is handled by OPP framework */
Wrap to fit in 80 columns.
On Tue, Feb 27, 2024 at 05:36:38PM -0600, Bjorn Helgaas wrote:
> On Fri, Feb 23, 2024 at 08:18:04PM +0530, Krishna chaitanya chundru wrote:
> > QCOM Resource Power Manager-hardened (RPMh) is a hardware block which
> > maintains hardware state of a regulator by performing max aggregation of
> > the requests made by all of the clients.
> > It is manadate to scale the performance state based up on the PCIe speed
> > link operates so that SoC can run under optimum power conditions.
>
> It sounds like it's more power efficient, but not actually
> *mandatory*. Maybe something like this?
>
> The SoC can be more power efficient if we scale the performance
> state based on the aggregate PCIe link speed.
Actually, maybe it would be better to say "aggregate PCIe link
bandwidth", because we use "speed" elsewhere (PCIE_SPEED2MBS_ENC(),
etc) to refer specifically to the data rate independent of the width.
> > Add Operating Performance Points(OPP) support to vote for RPMh state based
> > upon the speed link is operating.
>
> "... based on the link speed."
"... based on the aggregate link bandwidth."
> > In PCIe certain speeds like GEN1x2 & GEN2x1 or GEN3x2 & GEN4x1 use
> > same bw and frequency and thus the OPP entry, so use frequency based
> > search to reduce number of entries in the OPP table.
>
> GEN1x2, GEN2x1, etc are not "speeds". I would say:
>
> Different link configurations may share the same aggregate speed,
> e.g., a 2.5 GT/s x2 link and a 5.0 GT/s x1 link have the same speed
> and share the same OPP entry.
Different link configurations may share the same aggregate
bandwidth, e.g., a 2.5 GT/s x2 link and a 5.0 GT/s x1 link
have the same bandwidth and share the same OPP entry.
Mention the new interface name in the subject and in the commit log.
s/pcie/PCIe/
The subject says "to MBps", but the commit log says "to frequency".
On Fri, Feb 23, 2024 at 08:18:03PM +0530, Krishna chaitanya chundru wrote:
> Bring the switch case in pcie_link_speed_mbps to new function to
> the header file so that it can be used in other places like
> in controller driver.
s/pcie_link_speed_mbps/pcie_link_speed_mbps()/ to identify it as a
function.
> Create a new macro to convert from MBps to frequency.
Include the new macro name here.
I think pcie_link_speed_mbps() returns Mb/s (mega*bits* per second),
not MB/s (mega*bytes* per second).
> Signed-off-by: Krishna chaitanya chundru <[email protected]>
> ---
> drivers/pci/pci.c | 19 +------------------
> drivers/pci/pci.h | 24 ++++++++++++++++++++++++
> 2 files changed, 25 insertions(+), 18 deletions(-)
>
> diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> index d8f11a078924..b441ab862a8d 100644
> --- a/drivers/pci/pci.c
> +++ b/drivers/pci/pci.c
> @@ -6309,24 +6309,7 @@ int pcie_link_speed_mbps(struct pci_dev *pdev)
> if (err)
> return err;
>
> - switch (to_pcie_link_speed(lnksta)) {
> - case PCIE_SPEED_2_5GT:
> - return 2500;
> - case PCIE_SPEED_5_0GT:
> - return 5000;
> - case PCIE_SPEED_8_0GT:
> - return 8000;
> - case PCIE_SPEED_16_0GT:
> - return 16000;
> - case PCIE_SPEED_32_0GT:
> - return 32000;
> - case PCIE_SPEED_64_0GT:
> - return 64000;
> - default:
> - break;
> - }
> -
> - return -EINVAL;
> + return pcie_link_speed_to_mbps(to_pcie_link_speed(lnksta));
> }
> EXPORT_SYMBOL(pcie_link_speed_mbps);
>
> diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
> index 2336a8d1edab..82e715ebe383 100644
> --- a/drivers/pci/pci.h
> +++ b/drivers/pci/pci.h
> @@ -282,6 +282,30 @@ void pci_bus_put(struct pci_bus *bus);
> (speed) == PCIE_SPEED_2_5GT ? 2500*8/10 : \
> 0)
>
> +static inline int pcie_link_speed_to_mbps(enum pci_bus_speed speed)
> +{
> + switch (speed) {
> + case PCIE_SPEED_2_5GT:
> + return 2500;
> + case PCIE_SPEED_5_0GT:
> + return 5000;
> + case PCIE_SPEED_8_0GT:
> + return 8000;
> + case PCIE_SPEED_16_0GT:
> + return 16000;
> + case PCIE_SPEED_32_0GT:
> + return 32000;
> + case PCIE_SPEED_64_0GT:
> + return 64000;
> + default:
> + break;
> + }
> +
> + return -EINVAL;
> +}
> +
> +#define PCIE_MBS2FREQ(speed) (pcie_link_speed_to_mbps(speed) * 1000)
I feel like I might have asked some of this before; if so, my
apologies and maybe a comment would be useful here to save answering
again.
The MBS2FREQ name suggests that "speed" is Mb/s, but it's not; it's an
enum pci_bus_speed just like PCIE_SPEED2MBS_ENC() takes.
When PCI SIG defines a new data rate, PCIE_MBS2FREQ() will do
something completely wrong when pcie_link_speed_to_mbps() returns
-EINVAL. I think it would be better to do this in a way that we can
warn about the unknown speed and fall back to some reasonable default
instead of whatever (-EINVAL * 1000) works out to.
PCIE_MBS2FREQ() looks an awful lot like PCIE_SPEED2MBS_ENC(), except
that it doesn't adjust for the encoding overhead and it multiplies by
1000. I don't know what that result means. The name suggests a
frequency?
pcie_link_speed_to_mbps(PCIE_SPEED_2_5GT) == 2500 Mbit/s (raw data rate)
PCIE_SPEED2MBS_ENC(PCIE_SPEED_2_5GT) == 2000 Mbit/s or 2 Gbit/s (effective data rate)
PCIE_MBS2FREQ(PCIE_SPEED_2_5GT) == 2500000 (? 2.5M of something)
I don't really know how OPP works, but it looks like maybe
PCIE_MBS2FREQ() is a shim that depends on how the OPP tables in DT are
encoded? I'm surprised that the DT OPP tables aren't encoded with
either the raw data rate or the effective data rate directly instead
of what looks like the raw data rate / 1000.
Is this a standard OPP encoding that will apply to other drivers? If
so, it would be helpful to point to where that encoding is defined.
If not, PCIE_MBS2FREQ() should probably be defined in pcie-qcom.c.
Bjorn
On 2/28/2024 4:52 AM, Bjorn Helgaas wrote:
> On Fri, Feb 23, 2024 at 08:18:00PM +0530, Krishna chaitanya chundru wrote:
>> To access PCIe registers, PCIe BAR space, config space the CPU-PCIe
>> ICC(interconnect consumers) path should be voted otherwise it may
>> lead to NoC(Network on chip) timeout. We are surviving because of
>> other driver vote for this path.
>> As there is less access on this path compared to PCIe to mem path
>> add minimum vote i.e 1KBps bandwidth always.
>
> Add blank line between paragraphs or wrap into a single paragraph.
>
> Add space before open paren, e.g., "ICC (interconnect consumers)",
> "NoC (Network on Chip)".
>
>> In suspend remove the disable this path after register space access
>> is done.
>
> "... remove the disable this path ..." has too many verbs :)
> Maybe "When suspending, disable this path ..."?
>
>> + * The config space, BAR space and registers goes through cpu-pcie path.
>> + * Set peak bandwidth to 1KBps as recommended by HW team for this path all the time.
>
> Wrap to fit in 80 columns.
>
>> + /* Remove cpu path vote after all the register access is done */
>
> One of the other patches has s/cpu/CPU/ in it. Please do the same
> here.
>
> Bjorn
I will update the commit message as suggested in the next series.
We have a limit of up to 100 columns in the driver, right? I am OK with
changing to 80, but just checking in case I misunderstood something.
- Krishna Chaitanya.
On 2/28/2024 5:55 AM, Bjorn Helgaas wrote:
> Mention the new interface name in the subject and in the commit log.
>
> s/pcie/PCIe/
>
> The subject says "to MBps", but the commit log says "to frequency".
>
> On Fri, Feb 23, 2024 at 08:18:03PM +0530, Krishna chaitanya chundru wrote:
>> Bring the switch case in pcie_link_speed_mbps to new function to
>> the header file so that it can be used in other places like
>> in controller driver.
>
> s/pcie_link_speed_mbps/pcie_link_speed_mbps()/ to identify it as a
> function.
>
>> Create a new macro to convert from MBps to frequency.
>
> Include the new macro name here.
>
> I think pcie_link_speed_mbps() returns Mb/s (mega*bits* per second),
> not MB/s (mega*bytes* per second).
>
>> Signed-off-by: Krishna chaitanya chundru <[email protected]>
>> ---
>> drivers/pci/pci.c | 19 +------------------
>> drivers/pci/pci.h | 24 ++++++++++++++++++++++++
>> 2 files changed, 25 insertions(+), 18 deletions(-)
>>
>> diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
>> index d8f11a078924..b441ab862a8d 100644
>> --- a/drivers/pci/pci.c
>> +++ b/drivers/pci/pci.c
>> @@ -6309,24 +6309,7 @@ int pcie_link_speed_mbps(struct pci_dev *pdev)
>> if (err)
>> return err;
>>
>> - switch (to_pcie_link_speed(lnksta)) {
>> - case PCIE_SPEED_2_5GT:
>> - return 2500;
>> - case PCIE_SPEED_5_0GT:
>> - return 5000;
>> - case PCIE_SPEED_8_0GT:
>> - return 8000;
>> - case PCIE_SPEED_16_0GT:
>> - return 16000;
>> - case PCIE_SPEED_32_0GT:
>> - return 32000;
>> - case PCIE_SPEED_64_0GT:
>> - return 64000;
>> - default:
>> - break;
>> - }
>> -
>> - return -EINVAL;
>> + return pcie_link_speed_to_mbps(to_pcie_link_speed(lnksta));
>> }
>> EXPORT_SYMBOL(pcie_link_speed_mbps);
>>
>> diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
>> index 2336a8d1edab..82e715ebe383 100644
>> --- a/drivers/pci/pci.h
>> +++ b/drivers/pci/pci.h
>> @@ -282,6 +282,30 @@ void pci_bus_put(struct pci_bus *bus);
>> (speed) == PCIE_SPEED_2_5GT ? 2500*8/10 : \
>> 0)
>>
>> +static inline int pcie_link_speed_to_mbps(enum pci_bus_speed speed)
>> +{
>> + switch (speed) {
>> + case PCIE_SPEED_2_5GT:
>> + return 2500;
>> + case PCIE_SPEED_5_0GT:
>> + return 5000;
>> + case PCIE_SPEED_8_0GT:
>> + return 8000;
>> + case PCIE_SPEED_16_0GT:
>> + return 16000;
>> + case PCIE_SPEED_32_0GT:
>> + return 32000;
>> + case PCIE_SPEED_64_0GT:
>> + return 64000;
>> + default:
>> + break;
>> + }
>> +
>> + return -EINVAL;
>> +}
>> +
>> +#define PCIE_MBS2FREQ(speed) (pcie_link_speed_to_mbps(speed) * 1000)
>
> I feel like I might have asked some of this before; if so, my
> apologies and maybe a comment would be useful here to save answering
> again.
>
> The MBS2FREQ name suggests that "speed" is Mb/s, but it's not; it's an
> enum pci_bus_speed just like PCIE_SPEED2MBS_ENC() takes.
>
> When PCI SIG defines a new data rate, PCIE_MBS2FREQ() will do
> something completely wrong when pcie_link_speed_to_mbps() returns
> -EINVAL. I think it would be better to do this in a way that we can
> warn about the unknown speed and fall back to some reasonable default
> instead of whatever (-EINVAL * 1000) works out to.
>
As commented below, I will move PCIE_MBS2FREQ to the qcom driver and
handle the -EINVAL case in the qcom driver itself.
> PCIE_MBS2FREQ() looks an awful lot like PCIE_SPEED2MBS_ENC(), except
> that it doesn't adjust for the encoding overhead and it multiplies by
> 1000. I don't know what that result means. The name suggests a
> frequency?
>
> pcie_link_speed_to_mbps(PCIE_SPEED_2_5GT) == 2500 Mbit/s (raw data rate)
> PCIE_SPEED2MBS_ENC(PCIE_SPEED_2_5GT) == 2000 Mbit/s or 2 Gbit/s (effective data rate)
> PCIE_MBS2FREQ(PCIE_SPEED_2_5GT) == 2500000 (? 2.5M of something)
>
> I don't really know how OPP works, but it looks like maybe
> PCIE_MBS2FREQ() is a shim that depends on how the OPP tables in DT are
> encoded? I'm surprised that the DT OPP tables aren't encoded with
> either the raw data rate or the effective data rate directly instead
> of what looks like the raw data rate / 1000.
>
> Is this a standard OPP encoding that will apply to other drivers? If
> so, it would be helpful to point to where that encoding is defined.
> If not, PCIE_MBS2FREQ() should probably be defined in pcie-qcom.c.
>
It depends on how the driver uses OPP. As you suggested, PCIE_MBS2FREQ
should belong in pcie-qcom.c since no other driver is using it for now.
I will move it to pcie-qcom.c in my next series.
- Krishna Chaitanya.
> Bjorn
>
On 2/28/2024 5:15 AM, Bjorn Helgaas wrote:
> On Tue, Feb 27, 2024 at 05:36:38PM -0600, Bjorn Helgaas wrote:
>> On Fri, Feb 23, 2024 at 08:18:04PM +0530, Krishna chaitanya chundru wrote:
>>> QCOM Resource Power Manager-hardened (RPMh) is a hardware block which
>>> maintains hardware state of a regulator by performing max aggregation of
>>> the requests made by all of the clients.
>
>>> It is manadate to scale the performance state based up on the PCIe speed
>>> link operates so that SoC can run under optimum power conditions.
>>
>> It sounds like it's more power efficient, but not actually
>> *mandatory*. Maybe something like this?
>>
>> The SoC can be more power efficient if we scale the performance
>> state based on the aggregate PCIe link speed.
>
> Actually, maybe it would be better to say "aggregate PCIe link
> bandwidth", because we use "speed" elsewhere (PCIE_SPEED2MBS_ENC(),
> etc) to refer specifically to the data rate independent of the width.
>
>>> Add Operating Performance Points(OPP) support to vote for RPMh state based
>>> upon the speed link is operating.
>>
>> "... based on the link speed."
>
> "... based on the aggregate link bandwidth."
>
>>> In PCIe certain speeds like GEN1x2 & GEN2x1 or GEN3x2 & GEN4x1 use
>>> same bw and frequency and thus the OPP entry, so use frequency based
>>> search to reduce number of entries in the OPP table.
>>
>> GEN1x2, GEN2x1, etc are not "speeds". I would say:
>>
>> Different link configurations may share the same aggregate speed,
>> e.g., a 2.5 GT/s x2 link and a 5.0 GT/s x1 link have the same speed
>> and share the same OPP entry.
>
> Different link configurations may share the same aggregate
> bandwidth, e.g., a 2.5 GT/s x2 link and a 5.0 GT/s x1 link
> have the same bandwidth and share the same OPP entry.
I will update the commit message as suggested in my next series.
- Krishna Chaitanya.
On Wed, Feb 28, 2024 at 12:08:37PM +0530, Krishna Chaitanya Chundru wrote:
> We have limit up to 100 columns in the driver right, I am ok to change
> to 80 but just checking if I misunderstood something.
Please take a look at Documentation/process/coding-style.rst, which
clearly states:
The preferred limit on the length of a single line is 80
columns.
Statements longer than 80 columns should be broken into sensible
chunks, unless exceeding 80 columns significantly increases
readability and does not hide information.
So generally you should stay within 80 columns, unless not doing so
*significantly* increases readability. (And note that making such
decisions requires human judgement, which is why checkpatch now only
warns about lines longer than 100 chars.)
Johan
On Wed, Feb 28, 2024 at 12:08:37PM +0530, Krishna Chaitanya Chundru wrote:
> On 2/28/2024 4:52 AM, Bjorn Helgaas wrote:
> > On Fri, Feb 23, 2024 at 08:18:00PM +0530, Krishna chaitanya chundru wrote:
> > > To access PCIe registers, PCIe BAR space, config space the CPU-PCIe
> > > ICC(interconnect consumers) path should be voted otherwise it may
> > > lead to NoC(Network on chip) timeout. We are surviving because of
> > > other driver vote for this path.
> > > As there is less access on this path compared to PCIe to mem path
> > > add minimum vote i.e 1KBps bandwidth always.
> > > + * The config space, BAR space and registers goes through cpu-pcie path.
> > > + * Set peak bandwidth to 1KBps as recommended by HW team for this path all the time.
> >
> > Wrap to fit in 80 columns.
> We have limit up to 100 columns in the driver right, I am ok to change to 80
> but just checking if I misunderstood something.
I should have said "wrap to fit in 80 columns to match the rest of the
file." I looked at pcie-qcom.c, and with a few minor exceptions, it
fits in 80 columns, and maintaining that consistency makes it easier
to browse. Sometimes exceptions make sense for code, but for
comments, having some that fit in 80 columns and some that require 100
just makes life harder.
Bjorn
On 2/28/2024 8:20 PM, Bjorn Helgaas wrote:
> On Wed, Feb 28, 2024 at 12:08:37PM +0530, Krishna Chaitanya Chundru wrote:
>> On 2/28/2024 4:52 AM, Bjorn Helgaas wrote:
>>> On Fri, Feb 23, 2024 at 08:18:00PM +0530, Krishna chaitanya chundru wrote:
>>>> To access PCIe registers, PCIe BAR space, config space the CPU-PCIe
>>>> ICC(interconnect consumers) path should be voted otherwise it may
>>>> lead to NoC(Network on chip) timeout. We are surviving because of
>>>> other driver vote for this path.
>>>> As there is less access on this path compared to PCIe to mem path
>>>> add minimum vote i.e 1KBps bandwidth always.
>
>>>> + * The config space, BAR space and registers goes through cpu-pcie path.
>>>> + * Set peak bandwidth to 1KBps as recommended by HW team for this path all the time.
>>>
>>> Wrap to fit in 80 columns.
>
>> We have limit up to 100 columns in the driver right, I am ok to change to 80
>> but just checking if I misunderstood something.
>
> I should have said "wrap to fit in 80 columns to match the rest of the
> file." I looked at pcie-qcom.c, and with a few minor exceptions, it
> fits in 80 columns, and maintaining that consistency makes it easier
> to browse. Sometimes exceptions make sense for code, but for
> comments, having some that fit in 80 columns and some that require 100
> just makes life harder.
>
> Bjorn
>
Sure, I will wrap to 80 columns in my next patch series.
- Krishna Chaitanya.
On Wed, Feb 28, 2024 at 08:43:53PM +0530, Krishna Chaitanya Chundru wrote:
> On 2/28/2024 7:09 PM, Johan Hovold wrote:
> > On Wed, Feb 28, 2024 at 12:08:37PM +0530, Krishna Chaitanya Chundru wrote:
> >
> > > We have limit up to 100 columns in the driver right, I am ok to change
> > > to 80 but just checking if I misunderstood something.
> >
> > Please take a look at Documentation/process/coding-style.rst, which
> > clearly states:
> >
> > The preferred limit on the length of a single line is 80
> > columns.
> >
> > Statements longer than 80 columns should be broken into sensible
> > chunks, unless exceeding 80 columns significantly increases
> > readability and does not hide information.
> >
> > So generally you should stay within 80 columns, unless not doing so
> > *significantly* increases readability. (And note that making such
> > decisions requires human judgement, which is why checkpatch now only
> > warns about lines longer than 100 chars.)
>
> ok got it Johan, As checkpatch is not reporting any warnings or errors
> for I misunderstood this. I will correct the comments to fit in 80 columns
> in my next series.
Yeah, checkpatch is great and useful, but the bottom line is that it's
a tool that helps keep things relatively consistent, and a lot of that
consistency just comes down to paying attention to all the surrounding
code so the result looks coherent instead of a hodgepodge.
Bjorn
On 2/28/2024 7:09 PM, Johan Hovold wrote:
> On Wed, Feb 28, 2024 at 12:08:37PM +0530, Krishna Chaitanya Chundru wrote:
>
>> We have limit up to 100 columns in the driver right, I am ok to change
>> to 80 but just checking if I misunderstood something.
>
> Please take a look at Documentation/process/coding-style.rst, which
> clearly states:
>
> The preferred limit on the length of a single line is 80
> columns.
>
> Statements longer than 80 columns should be broken into sensible
> chunks, unless exceeding 80 columns significantly increases
> readability and does not hide information.
>
> So generally you should stay within 80 columns, unless not doing so
> *significantly* increases readability. (And note that making such
> decisions requires human judgement, which is why checkpatch now only
> warns about lines longer than 100 chars.)
>
> Johan
OK, got it, Johan. As checkpatch was not reporting any warnings or errors,
I misunderstood this. I will correct the comments to fit in 80 columns in
my next series.
- Krishna Chaitanya.