This patchset is the third one in a series created in the framework of
my Synopsys DW uMCTL2 DDRC-related work:
[1: In-progress v4] EDAC/mc/synopsys: Various fixes and cleanups
Link: https://lore.kernel.org/linux-edac/[email protected]
[2: In-progress v4] EDAC/synopsys: Add generic DDRC info and address mapping
Link: https://lore.kernel.org/linux-edac/[email protected]
[3: In-progress v4] EDAC/synopsys: Add generic resources and Scrub support
Link: ---you are looking at it---
Note the patchsets above must be merged in the listed order to prevent
conflicts. Nothing prevents them from being reviewed in parallel though.
Any tests are very welcome.
Thanks in advance.
This is the final patchset in the framework of my Synopsys DW uMCTL2 DDRC
work, which completes the driver updates with the new functionality.
The series starts by extending the Synopsys DW uMCTL2 DDRC DT-schema
with the controller-specific IRQs, clocks and resets properties. In
addition the Baikal-T1 DDRC is added to the DT-bindings since it's
based on the DW uMCTL2 DDRC v2.61a.
After that the driver is finally altered to inform the MCI core of the
detected SDRAM ranks and to make sure the detected errors are reported
against the corresponding rank. Then the DDRC capabilities are extended
with the optional Scrub functionality. It's indeed possible to have a
DW uMCTL2 controller with no HW-accelerated Scrub support (no RMW
engine). In that case the MCI core is supposed to perform the erroneous
location ECC update by means of the platform-specific scrub method.
Then the error-injection functionality is fixed a bit. First, since the
driver now has the Sys<->SDRAM address translation infrastructure, it can
be utilized to convert the supplied poisoned system address to the SDRAM
one. Thus there is no longer any need to preserve the address in the
device private data. Second, a DebugFS node-based command to disable the
error-injection feature is added (no idea why it wasn't done in the
first place).
Afterwards comes a series of IRQ-related patches. First, the individual
DDRC event IRQs support is introduced in accordance with what has been
added to the DT-bindings and what the native DW uMCTL2 DDR controller
actually provides. Then, aside from the ECC CE/UE errors detection, the
DFI/SDRAM CRC/Parity errors report is added. It is specifically useful
for DDR4 memory, which has a dedicated ALARM_n signal, but it can still
be utilized with the older protocols if the device DFI-PHY calculates the
HIF-interface signals parity. Third, after adding the platform clocks/resets
request procedure, the HW-accelerated Scrubber support is introduced. Its
performance can be tuned by means of the sdram_scrub_rate SysFS node and
the Core clock rate. Note it is possible to one-time-run the Scrubber in
the back-to-back mode so as to perform a burst-like scan of the entire
SDRAM memory.
At the patchset closure the DW uMCTL2 DDRC kernel config is finally fixed
to be available not only on the Xilinx, Intel and MXC platforms, but on
any platform, including the Baikal-T1 SoC, which has the DW uMCTL2 DDRC
v2.61a on board.
Changelog v2:
- Replace "snps,ddrc-3.80a" compatible string with "snps,dw-umctl2-ddrc"
in the example.
- Move unrelated changes into dedicated patches. (@Krzysztof)
- Use the IRQ macros in the example. (@Krzysztof)
- Add a new patch:
[PATCH v2 01/15] dt-bindings: memory: snps: Replace opencoded numbers with macros
(@Krzysztof)
- Add a new patch:
[PATCH v2 03/15] dt-bindings: memory: snps: Convert the schema to being generic
(@Krzysztof)
- Drop the PHY CSR region. (@Rob)
- Move the Baikal-T1 DDRC bindings to the separate DT-schema.
Changelog v3:
- Create common DT-schema instead of using the generic device DT-bindings.
(@Rob)
- Drop the merged in patches:
[PATCH v2 01/15] dt-bindings: memory: snps: Replace opencoded numbers with macros
[PATCH v2 02/15] dt-bindings: memory: snps: Extend schema with IRQs/resets/clocks props
(@Krzysztof)
Changelog v4:
- Explicitly set snps_ddrc_info.dq_width for Baikal-T1 DDRC for better
maintainability.
- Explicitly set sys_app_map.minsize to SZ_256M instead of using a helper
macro DDR_MIN_SARSIZE for Baikal-T1 DDRC.
- Use div_u64() instead of do_div().
- Use FIELD_MAX() instead of open-coding the bitwise shift to find
the max field value.
- Fix inject_data_error string printing "Rank" word where "Col" is
supposed to be.
- Rebase onto the kernel v6.6-rcX.
Signed-off-by: Serge Semin <[email protected]>
Cc: Punnaiah Choudary Kalluri <[email protected]>
Cc: Dinh Nguyen <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Rob Herring <[email protected]>
Cc: Krzysztof Kozlowski <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Serge Semin (13):
dt-bindings: memory: snps: Convert the schema to being generic
dt-bindings: memory: Add BT1 DDRC DT-schema
EDAC/synopsys: Add multi-ranked memory support
EDAC/synopsys: Add optional ECC Scrub support
EDAC/synopsys: Drop ECC poison address from private data
EDAC/synopsys: Add data poisoning disable support
EDAC/synopsys: Split up ECC UE/CE IRQs handler
EDAC/synopsys: Add individual named ECC IRQs support
EDAC/synopsys: Add DFI alert_n IRQ support
EDAC/synopsys: Add reference clocks support
EDAC/synopsys: Add ECC Scrubber support
EDAC/synopsys: Drop vendor-specific arch dependency
EDAC/synopsys: Add BT1 DDRC support
.../memory-controllers/baikal,bt1-ddrc.yaml | 91 ++
.../snps,dw-umctl2-common.yaml | 75 ++
.../snps,dw-umctl2-ddrc.yaml | 57 +-
drivers/edac/Kconfig | 1 -
drivers/edac/synopsys_edac.c | 950 ++++++++++++++----
5 files changed, 933 insertions(+), 241 deletions(-)
create mode 100644 Documentation/devicetree/bindings/memory-controllers/baikal,bt1-ddrc.yaml
create mode 100644 Documentation/devicetree/bindings/memory-controllers/snps,dw-umctl2-common.yaml
--
2.41.0
The DW uMCTL2 DDR controller IP-core can be synthesized with an embedded
Scrubber engine. The ECC Scrubber (SBR) is a block which initiates
periodic background burst read commands to the DDRC and further towards
the DDR memory in an attempt to trigger Correctable or Uncorrectable
errors. If a Correctable error is detected, the ECC Scrub feature will
execute the Read-Modify-Write (RMW) procedure in order to fix the ECC. In
case of an Uncorrectable error it will just be reported as the
corresponding IRQ event. So it's definitely a very useful feature. Let's
add it to the driver then, especially since the MCI core already has some
infrastructure for it.
First of all the Scrubber clock needs to be enabled if one is supplied,
otherwise the engine won't work.
Secondly the Scrubber engine support needs to be detected. Alas there is
no special CSR indicating whether the DW uMCTL2 DDRC IP-core has been
synthesized with one embedded. Instead implement the detection procedure
based on the Scrubber-specific CSRs writability. So if the SBRWDATA0 CSR
is writable then the CSR exists, which means the Scrubber is available,
otherwise the capability will be considered absent.
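The writability probe can be illustrated with a tiny user-space model
(a sketch only: struct csr and sbr_detect() are hypothetical stand-ins
for the MMIO accessors readl()/writel() the driver actually uses):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Model of a CSR that may or may not be backed by real hardware:
 * writes to an absent (read-only) register are silently dropped.
 */
struct csr {
	uint32_t val;
	bool writable;
};

static uint32_t csr_read(const struct csr *r)
{
	return r->val;
}

static void csr_write(struct csr *r, uint32_t v)
{
	if (r->writable)
		r->val = v;
}

/*
 * Detect the Scrubber the same way the driver probes SBRWDATA0:
 * flip all the bits, check whether the change sticks, then restore
 * the original value if it did.
 */
static bool sbr_detect(struct csr *sbrwdata0)
{
	uint32_t old = csr_read(sbrwdata0);
	bool present;

	csr_write(sbrwdata0, ~old);
	present = csr_read(sbrwdata0) != old;
	if (present)
		csr_write(sbrwdata0, old);

	return present;
}
```

Note the restore step matters: the probe must leave the CSR content
unchanged on the platforms where the Scrubber is present.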
Thirdly the MCI core provides two callbacks utilized for the Scrubber
tuning: set the Scrubber bandwidth in bytes, which can also be used to
disable the periodic scrubbing; get the Scrubber bandwidth (zero if
disabled). Both of them can be implemented by using the Scrubber CSRs the
controller provides. In particular, aside from the back-to-back periodic
reads, the Scrubber provides a way to delay the next read command for a
predefined multiple of 512 Core/Scrubber clock cycles. It can be used to
lower the Scrubber bandwidth from the maximal DDR bandwidth (no delay)
down to a delay of up to (0x1FFF * 512) Core/Scrubber clock cycles between
the reads (see the inline comments for details and the utilized formulae).
Note the Scrubber clock must be synchronous to the Core clock by the
controller design, so the Core clock rate is used for the calculations.
Please also note that if no Core clock is specified, the Scrubber will
still be supported, but the bandwidth value will be used directly to
calculate the Scrubber reads interval. The back-to-back reads mode in this
case will be indicated by the INT_MAX bandwidth.
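The bandwidth<->interval conversion described above can be sanity-checked
with a small user-space model of the same formulae (illustrative parameter
values only; struct ddrc and the function names below are hypothetical
stand-ins for the driver's snps_get_scrub_bw()/snps_get_scrub_interval()
helpers):

```c
#include <assert.h>
#include <stdint.h>

#define INTERVAL_STEP	512U	/* SBRCTL.scrub_interval unit, clock cycles */

/* Minimal set of the DDRC parameters used by the formulae */
struct ddrc {
	uint64_t core_rate;	/* Core clock rate, Hz */
	unsigned int freq_ratio;/* Core/SDRAM clock frequency ratio: 1 or 2 */
	unsigned int dq_width;	/* log2 of the DQ-bus width in bytes */
	unsigned int dq_mode;	/* 0 - full, 1 - half, 2 - quarter DQ-bus */
	unsigned int hif_burst_len;
};

/* Bw[RAM]: x2 for DDR, scaled down by the partial DQ-bus mode */
static uint64_t sdram_bw(const struct ddrc *d)
{
	return (2ULL << (d->dq_width - d->dq_mode)) *
	       d->core_rate * d->freq_ratio;
}

/* F = 2 * 512 * Fr / (Bl * 2^dq_mode) - interval scale factor */
static uint64_t scrub_fac(const struct ddrc *d)
{
	return (2ULL * INTERVAL_STEP * d->freq_ratio) /
	       (d->hif_burst_len * (1U << d->dq_mode));
}

/* Bw[Sbr] = Bw[RAM] / (1 + F * interval) */
static uint64_t scrub_bw(const struct ddrc *d, uint32_t interval)
{
	return sdram_bw(d) / (1 + scrub_fac(d) * interval);
}

/* interval = (Bw[RAM] - Bw[Sbr]) / (F * Bw[Sbr]), divided in two steps */
static uint32_t scrub_interval(const struct ddrc *d, uint64_t bw)
{
	return (sdram_bw(d) - bw) / bw / scrub_fac(d);
}
```

With, say, an 800 MHz Core clock, 1:2 ratio, full 32-bit DQ-bus and a HIF
burst length of 8 the model yields the theoretical 12.8 GB/s SDRAM
bandwidth at interval 0, and the interval recovered from a calculated
bandwidth round-trips back to the original value.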
Fourthly the back-to-back scrubbing will most likely cause a significant
system performance drop. The manual says that it has been added to the
controller for the initial SDRAM initialization and for a fast SDRAM
scrubbing after getting out of a low-power state. In any case it is
supposed to be enabled only for a single SDRAM pass. So preserve that
semantics here to avoid the system lagging and disable the back-to-back
scrubbing in the Scrubber Done IRQ handler after the scrubbing work is
done.
Finally the denoted scrub-rate callbacks and the SCRUB_FLAG_HW_PROG and
SCRUB_FLAG_HW_TUN flags will be set to the MCI descriptor based on the
detected Scrubber capability. So no capability - no flags and no
callbacks.
Signed-off-by: Serge Semin <[email protected]>
---
Changelog v4:
- Use div_u64() instead of do_div().
- Use FIELD_MAX() instead of open-coding the bitwise shift to find
the max field value.
---
drivers/edac/synopsys_edac.c | 299 +++++++++++++++++++++++++++++++++++
1 file changed, 299 insertions(+)
diff --git a/drivers/edac/synopsys_edac.c b/drivers/edac/synopsys_edac.c
index ab4c7cc2daf5..e589aa9f7876 100644
--- a/drivers/edac/synopsys_edac.c
+++ b/drivers/edac/synopsys_edac.c
@@ -12,12 +12,14 @@
#include <linux/edac.h>
#include <linux/fs.h>
#include <linux/log2.h>
+#include <linux/math64.h>
#include <linux/module.h>
#include <linux/pfn.h>
#include <linux/platform_device.h>
#include <linux/seq_file.h>
#include <linux/sizes.h>
#include <linux/spinlock.h>
+#include <linux/units.h>
#include <linux/interrupt.h>
#include <linux/of.h>
#include <linux/of_device.h>
@@ -34,6 +36,7 @@
/* DDR capabilities */
#define SNPS_CAP_ECC_SCRUB BIT(0)
+#define SNPS_CAP_ECC_SCRUBBER BIT(1)
#define SNPS_CAP_ZYNQMP BIT(31)
/* Synopsys uMCTL2 DDR controller registers that are relevant to ECC */
@@ -102,6 +105,12 @@
#define DDR_SARBASE0_OFST 0xF04
#define DDR_SARSIZE0_OFST 0xF08
+/* ECC Scrubber Registers */
+#define ECC_SBRCTL_OFST 0xF24
+#define ECC_SBRSTAT_OFST 0xF28
+#define ECC_SBRWDATA0_OFST 0xF2C
+#define ECC_SBRWDATA1_OFST 0xF30
+
/* ZynqMP DDR QOS Registers */
#define ZYNQMP_DDR_QOS_IRQ_STAT_OFST 0x20200
#define ZYNQMP_DDR_QOS_IRQ_EN_OFST 0x20208
@@ -242,6 +251,18 @@
#define DDR_MAX_NSAR 4
#define DDR_MIN_SARSIZE SZ_256M
+/* ECC Scrubber registers definitions */
+#define ECC_SBRCTL_SCRUB_INTERVAL GENMASK(20, 8)
+#define ECC_SBRCTL_INTERVAL_STEP 512
+#define ECC_SBRCTL_INTERVAL_MIN 0
+#define ECC_SBRCTL_INTERVAL_SAFE 1
+#define ECC_SBRCTL_INTERVAL_MAX FIELD_MAX(ECC_SBRCTL_SCRUB_INTERVAL)
+#define ECC_SBRCTL_SCRUB_BURST GENMASK(6, 4)
+#define ECC_SBRCTL_SCRUB_MODE_WR BIT(2)
+#define ECC_SBRCTL_SCRUB_EN BIT(0)
+#define ECC_SBRSTAT_SCRUB_DONE BIT(1)
+#define ECC_SBRSTAT_SCRUB_BUSY BIT(0)
+
/* ZynqMP DDR QOS Interrupt register definitions */
#define ZYNQMP_DDR_QOS_UE_MASK BIT(2)
#define ZYNQMP_DDR_QOS_CE_MASK BIT(1)
@@ -912,6 +933,47 @@ static irqreturn_t snps_dfi_irq_handler(int irq, void *dev_id)
return IRQ_HANDLED;
}
+/**
+ * snps_sbr_irq_handler - Scrubber Done interrupt handler.
+ * @irq: IRQ number.
+ * @dev_id: Device ID.
+ *
+ * It just checks whether the IRQ has been caused by the Scrubber Done event
+ * and disables the back-to-back scrubbing by falling back to the smallest
+ * delay between the Scrubber read commands.
+ *
+ * Return: IRQ_NONE, if interrupt not set or IRQ_HANDLED otherwise.
+ */
+static irqreturn_t snps_sbr_irq_handler(int irq, void *dev_id)
+{
+ struct mem_ctl_info *mci = dev_id;
+ struct snps_edac_priv *priv = mci->pvt_info;
+ unsigned long flags;
+ u32 regval, en;
+
+ /* Make sure IRQ is caused by the Scrubber Done event */
+ regval = readl(priv->baseaddr + ECC_SBRSTAT_OFST);
+ if (!(regval & ECC_SBRSTAT_SCRUB_DONE))
+ return IRQ_NONE;
+
+ spin_lock_irqsave(&priv->reglock, flags);
+
+ regval = readl(priv->baseaddr + ECC_SBRCTL_OFST);
+ en = regval & ECC_SBRCTL_SCRUB_EN;
+ writel(regval & ~en, priv->baseaddr + ECC_SBRCTL_OFST);
+
+ regval = FIELD_PREP(ECC_SBRCTL_SCRUB_INTERVAL, ECC_SBRCTL_INTERVAL_SAFE);
+ writel(regval, priv->baseaddr + ECC_SBRCTL_OFST);
+
+ writel(regval | en, priv->baseaddr + ECC_SBRCTL_OFST);
+
+ spin_unlock_irqrestore(&priv->reglock, flags);
+
+ edac_mc_printk(mci, KERN_WARNING, "Back-to-back scrubbing disabled\n");
+
+ return IRQ_HANDLED;
+}
+
/**
* snps_com_irq_handler - Interrupt IRQ signal handler.
* @irq: IRQ number.
@@ -921,6 +983,8 @@ static irqreturn_t snps_dfi_irq_handler(int irq, void *dev_id)
*/
static irqreturn_t snps_com_irq_handler(int irq, void *dev_id)
{
+ struct mem_ctl_info *mci = dev_id;
+ struct snps_edac_priv *priv = mci->pvt_info;
irqreturn_t rc = IRQ_NONE;
rc |= snps_ce_irq_handler(irq, dev_id);
@@ -929,6 +993,9 @@ static irqreturn_t snps_com_irq_handler(int irq, void *dev_id)
rc |= snps_dfi_irq_handler(irq, dev_id);
+ if (priv->info.caps & SNPS_CAP_ECC_SCRUBBER)
+ rc |= snps_sbr_irq_handler(irq, dev_id);
+
return rc;
}
@@ -983,6 +1050,200 @@ static void snps_disable_irq(struct snps_edac_priv *priv)
spin_unlock_irqrestore(&priv->reglock, flags);
}
+/**
+ * snps_get_sdram_bw - Get SDRAM bandwidth.
+ * @priv: DDR memory controller private instance data.
+ *
+ * The SDRAM interface bandwidth is calculated based on the DDRC Core clock rate
+ * and the DW uMCTL2 IP-core parameters like DQ-bus width and mode and
+ * Core/SDRAM clocks frequency ratio. Note it returns the theoretical bandwidth
+ * which in reality is hardly possible to reach.
+ *
+ * Return: SDRAM bandwidth or zero if no Core clock specified.
+ */
+static u64 snps_get_sdram_bw(struct snps_edac_priv *priv)
+{
+ unsigned long rate;
+
+ /*
+ * Depending on the ratio mode the SDRAM clock either matches the Core
+ * clock or runs at twice its frequency.
+ */
+ rate = clk_get_rate(priv->clks[SNPS_CORE_CLK].clk);
+ rate *= priv->info.freq_ratio;
+
+ /*
+ * Scale up by 2 since it's DDR (Double Data Rate) and subtract the
+ * DQ-mode since in non-Full mode only a part of the DQ-bus is utilised
+ * on each SDRAM clock edge.
+ */
+ return (2U << (priv->info.dq_width - priv->info.dq_mode)) * (u64)rate;
+}
+
+/**
+ * snps_get_scrub_bw - Get Scrubber bandwidth.
+ * @priv: DDR memory controller private instance data.
+ * @interval: Scrub interval.
+ *
+ * DW uMCTL2 DDRC Scrubber performs periodical progressive burst reads (RMW if
+ * ECC CE is detected) commands from the whole memory space. The read commands
+ * can be delayed by means of the SBRCTL.scrub_interval field. The Scrubber
+ * cycles look as follows:
+ *
+ * |-HIF-burst-read-|-------delay-------|-HIF-burst-read-|------- etc
+ *
+ * Tb = Bl*[DQ]/Bw[RAM], Td = 512*interval/Fc - periods of the HIF-burst-read
+ * and delay stages, where
+ * Bl - HIF burst length, [DQ] - Full DQ-bus width, Bw[RAM] - SDRAM bandwidth,
+ * Fc - Core clock frequency (Scrubber and Core clocks are synchronous).
+ *
+ * After some simple calculations the expressions above can be used to get the
+ * next Scrubber bandwidth formulae:
+ *
+ * Bw[Sbr] = Bw[RAM] / (1 + F * interval), where
+ * F = 2 * 512 * Fr * Fc * [DQ]e - interval scale factor with
+ * Fr - HIF/SDRAM clock frequency ratio (1 or 2), [DQ]e - DQ-bus width mode.
+ *
+ * Return: Scrubber bandwidth or zero if no Core clock specified.
+ */
+static u64 snps_get_scrub_bw(struct snps_edac_priv *priv, u32 interval)
+{
+ unsigned long fac;
+ u64 bw_ram;
+
+ fac = (2 * ECC_SBRCTL_INTERVAL_STEP * priv->info.freq_ratio) /
+ (priv->info.hif_burst_len * (1UL << priv->info.dq_mode));
+
+ bw_ram = snps_get_sdram_bw(priv);
+
+ return div_u64(bw_ram, 1 + fac * interval);
+}
+
+/**
+ * snps_get_scrub_interval - Get Scrubber delay interval.
+ * @priv: DDR memory controller private instance data.
+ * @bw: Scrubber bandwidth.
+ *
+ * Similarly to the Scrubber bandwidth the interval formulae can be inferred
+ * from the same expressions:
+ *
+ * interval = (Bw[RAM] - Bw[Sbr]) / (F * Bw[Sbr])
+ *
+ * Return: Scrubber delay interval or zero if no Core clock specified.
+ */
+static u32 snps_get_scrub_interval(struct snps_edac_priv *priv, u32 bw)
+{
+ unsigned long fac;
+ u64 bw_ram;
+
+ fac = (2 * priv->info.freq_ratio * ECC_SBRCTL_INTERVAL_STEP) /
+ (priv->info.hif_burst_len * (1UL << priv->info.dq_mode));
+
+ bw_ram = snps_get_sdram_bw(priv);
+
+ /* Divide twice so as not to cause an integer overflow in (fac * bw) */
+ return div_u64(div_u64(bw_ram - bw, bw), fac);
+}
+
+/**
+ * snps_set_sdram_scrub_rate - Set the Scrubber bandwidth.
+ * @mci: EDAC memory controller instance.
+ * @bw: Bandwidth.
+ *
+ * It calculates the delay between the Scrubber read commands based on the
+ * specified bandwidth and the Core clock rate. If the Core clock is unavailable
+ * the passed bandwidth will be directly used as the interval value.
+ *
+ * Note the method warns about the back-to-back scrubbing since it may
+ * significantly degrade the system performance. This mode is supposed to be
+ * used for a single SDRAM scrubbing pass only. So it will be turned off in the
+ * Scrubber Done IRQ handler.
+ *
+ * Return: Actually set bandwidth (interval-based approximated bandwidth if the
+ * Core clock is unavailable) or zero if the Scrubber was disabled.
+ */
+static int snps_set_sdram_scrub_rate(struct mem_ctl_info *mci, u32 bw)
+{
+ struct snps_edac_priv *priv = mci->pvt_info;
+ u32 regval, interval;
+ unsigned long flags;
+ u64 bw_min, bw_max;
+
+ /* Don't bother with the calculations just disable and return. */
+ if (!bw) {
+ spin_lock_irqsave(&priv->reglock, flags);
+
+ regval = readl(priv->baseaddr + ECC_SBRCTL_OFST);
+ regval &= ~ECC_SBRCTL_SCRUB_EN;
+ writel(regval, priv->baseaddr + ECC_SBRCTL_OFST);
+
+ spin_unlock_irqrestore(&priv->reglock, flags);
+
+ return 0;
+ }
+
+ /* If no Core clock specified fallback to the direct interval setup. */
+ bw_max = snps_get_scrub_bw(priv, ECC_SBRCTL_INTERVAL_MIN);
+ if (bw_max) {
+ bw_min = snps_get_scrub_bw(priv, ECC_SBRCTL_INTERVAL_MAX);
+ bw = clamp_t(u64, bw, bw_min, bw_max);
+
+ interval = snps_get_scrub_interval(priv, bw);
+ } else {
+ bw = clamp_val(bw, ECC_SBRCTL_INTERVAL_MIN, ECC_SBRCTL_INTERVAL_MAX);
+
+ interval = ECC_SBRCTL_INTERVAL_MAX - bw;
+ }
+
+ /*
+ * SBRCTL.scrub_en bitfield must be accessed separately from the other
+ * CSR bitfields. It means the flag must be set/cleared with no updates
+ * to the rest of the fields.
+ */
+ spin_lock_irqsave(&priv->reglock, flags);
+
+ regval = FIELD_PREP(ECC_SBRCTL_SCRUB_INTERVAL, interval);
+ writel(regval, priv->baseaddr + ECC_SBRCTL_OFST);
+
+ writel(regval | ECC_SBRCTL_SCRUB_EN, priv->baseaddr + ECC_SBRCTL_OFST);
+
+ spin_unlock_irqrestore(&priv->reglock, flags);
+
+ if (!interval)
+ edac_mc_printk(mci, KERN_WARNING, "Back-to-back scrubbing enabled\n");
+
+ if (!bw_max)
+ return interval ? bw : INT_MAX;
+
+ return snps_get_scrub_bw(priv, interval);
+}
+
+/**
+ * snps_get_sdram_scrub_rate - Get the Scrubber bandwidth.
+ * @mci: EDAC memory controller instance.
+ *
+ * Return: Scrubber bandwidth (interval-based approximated bandwidth if the
+ * Core clock is unavailable) or zero if the Scrubber was disabled.
+ */
+static int snps_get_sdram_scrub_rate(struct mem_ctl_info *mci)
+{
+ struct snps_edac_priv *priv = mci->pvt_info;
+ u32 regval;
+ u64 bw;
+
+ regval = readl(priv->baseaddr + ECC_SBRCTL_OFST);
+ if (!(regval & ECC_SBRCTL_SCRUB_EN))
+ return 0;
+
+ regval = FIELD_GET(ECC_SBRCTL_SCRUB_INTERVAL, regval);
+
+ bw = snps_get_scrub_bw(priv, regval);
+ if (!bw)
+ return regval ? ECC_SBRCTL_INTERVAL_MAX - regval : INT_MAX;
+
+ return bw;
+}
+
/**
* snps_create_data - Create private data.
* @pdev: platform device.
@@ -1049,7 +1310,18 @@ static int snps_get_res(struct snps_edac_priv *priv)
return rc;
}
+ rc = clk_prepare_enable(priv->clks[SNPS_SBR_CLK].clk);
+ if (rc) {
+ edac_printk(KERN_INFO, EDAC_MC, "Couldn't enable Scrubber clock\n");
+ goto err_disable_pclk;
+ }
+
return 0;
+
+err_disable_pclk:
+ clk_disable_unprepare(priv->clks[SNPS_CSR_CLK].clk);
+
+ return rc;
}
/**
@@ -1058,6 +1330,8 @@ static int snps_get_res(struct snps_edac_priv *priv)
*/
static void snps_put_res(struct snps_edac_priv *priv)
{
+ clk_disable_unprepare(priv->clks[SNPS_SBR_CLK].clk);
+
clk_disable_unprepare(priv->clks[SNPS_CSR_CLK].clk);
}
@@ -1158,6 +1432,14 @@ static int snps_get_ddrc_info(struct snps_edac_priv *priv)
if (!(regval & ECC_CFG0_DIS_SCRUB))
priv->info.caps |= SNPS_CAP_ECC_SCRUB;
+ /* Auto-detect the scrubber by writing to the SBRWDATA0 CSR */
+ regval = readl(priv->baseaddr + ECC_SBRWDATA0_OFST);
+ writel(~regval, priv->baseaddr + ECC_SBRWDATA0_OFST);
+ if (regval != readl(priv->baseaddr + ECC_SBRWDATA0_OFST)) {
+ priv->info.caps |= SNPS_CAP_ECC_SCRUBBER;
+ writel(regval, priv->baseaddr + ECC_SBRWDATA0_OFST);
+ }
+
/* Auto-detect the basic HIF/SDRAM bus parameters */
regval = readl(priv->baseaddr + DDR_MSTR_OFST);
@@ -1644,6 +1926,12 @@ static struct mem_ctl_info *snps_mc_create(struct snps_edac_priv *priv)
mci->scrub_cap = SCRUB_FLAG_SW_SRC;
}
+ if (priv->info.caps & SNPS_CAP_ECC_SCRUBBER) {
+ mci->scrub_cap |= SCRUB_FLAG_HW_PROG | SCRUB_FLAG_HW_TUN;
+ mci->set_sdram_scrub_rate = snps_set_sdram_scrub_rate;
+ mci->get_sdram_scrub_rate = snps_get_sdram_scrub_rate;
+ }
+
mci->ctl_name = "snps_umctl2_ddrc";
mci->dev_name = SNPS_EDAC_MOD_STRING;
mci->mod_name = SNPS_EDAC_MOD_VER;
@@ -1718,6 +2006,15 @@ static int snps_request_ind_irq(struct mem_ctl_info *mci)
}
}
+ irq = platform_get_irq_byname_optional(priv->pdev, "ecc_sbr");
+ if (irq > 0) {
+ rc = devm_request_irq(dev, irq, snps_sbr_irq_handler, 0, "ecc_sbr", mci);
+ if (rc) {
+ edac_printk(KERN_ERR, EDAC_MC, "Failed to request Sbr IRQ\n");
+ return rc;
+ }
+ }
+
return 0;
}
@@ -1824,6 +2121,8 @@ static int snps_ddrc_info_show(struct seq_file *s, void *data)
if (priv->info.caps) {
if (priv->info.caps & SNPS_CAP_ECC_SCRUB)
seq_puts(s, " +Scrub");
+ if (priv->info.caps & SNPS_CAP_ECC_SCRUBBER)
+ seq_puts(s, " +Scrubber");
if (priv->info.caps & SNPS_CAP_ZYNQMP)
seq_puts(s, " +ZynqMP");
} else {
--
2.41.0
In its current state the DW uMCTL2 DDRC DT-schema can't be used as a
common one for all the IP-core-based devices due to the compatible string
property constraining the list of the supported device names. In order to
fix that, detach the common properties definition into a separate schema.
The latter will be used by the vendor-specific controller versions to
preserve the DT-bindings convention defined for the DW uMCTL2 DDR
controller. Thus the generic DW uMCTL2 DDRC DT-bindings will be left with
the compatible property definition only and will just refer to the
detached common DT-schema.
Signed-off-by: Serge Semin <[email protected]>
Reviewed-by: Rob Herring <[email protected]>
---
Changelog v2:
- This is a new patch created on v2 cycle of the patchset. (@Krzysztof)
Changelog v3:
- Create common DT-schema instead of using the generic device DT-bindings.
(@Rob)
---
.../snps,dw-umctl2-common.yaml | 75 +++++++++++++++++++
.../snps,dw-umctl2-ddrc.yaml | 57 ++------------
2 files changed, 81 insertions(+), 51 deletions(-)
create mode 100644 Documentation/devicetree/bindings/memory-controllers/snps,dw-umctl2-common.yaml
diff --git a/Documentation/devicetree/bindings/memory-controllers/snps,dw-umctl2-common.yaml b/Documentation/devicetree/bindings/memory-controllers/snps,dw-umctl2-common.yaml
new file mode 100644
index 000000000000..115fe5e8339a
--- /dev/null
+++ b/Documentation/devicetree/bindings/memory-controllers/snps,dw-umctl2-common.yaml
@@ -0,0 +1,75 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/memory-controllers/snps,dw-umctl2-common.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Synopsys DesignWare Universal Multi-Protocol Memory Controller
+
+maintainers:
+ - Krzysztof Kozlowski <[email protected]>
+ - Manish Narani <[email protected]>
+ - Michal Simek <[email protected]>
+
+description:
+ Synopsys DesignWare Enhanced uMCTL2 DDR Memory Controller is capable of
+ working with the memory devices supporting up to (LP)DDR4 protocol. It can
+ be equipped with SEC/DEC ECC feature if DRAM data bus width is either
+ 16-bits or 32-bits or 64-bits wide.
+
+select: false
+
+properties:
+ interrupts:
+ description:
+ DW uMCTL2 DDRC IP-core provides individual IRQ signal for each event":"
+ ECC Corrected Error, ECC Uncorrected Error, ECC Address Protection,
+ Scrubber-Done signal, DFI Parity/CRC Error. Some platforms may have the
+ signals merged before they reach the IRQ controller or have some of them
+ absent in case if the corresponding feature is unavailable/disabled.
+ minItems: 1
+ maxItems: 5
+
+ interrupt-names:
+ minItems: 1
+ maxItems: 5
+ oneOf:
+ - description: Common ECC CE/UE/Scrubber/DFI Errors IRQ
+ items:
+ - const: ecc
+ - description: Individual ECC CE/UE/Scrubber/DFI Errors IRQs
+ items:
+ enum: [ ecc_ce, ecc_ue, ecc_ap, ecc_sbr, dfi_e ]
+
+ reg:
+ maxItems: 1
+
+ clocks:
+ description:
+ A standard set of the clock sources contains CSRs bus clock, AXI-ports
+ reference clock, DDRC core clock, Scrubber standalone clock
+ (synchronous to the DDRC clock).
+ minItems: 1
+ maxItems: 4
+
+ clock-names:
+ minItems: 1
+ maxItems: 4
+ items:
+ enum: [ pclk, aclk, core, sbr ]
+
+ resets:
+ description:
+ Each clock domain can have separate reset signal.
+ minItems: 1
+ maxItems: 4
+
+ reset-names:
+ minItems: 1
+ maxItems: 4
+ items:
+ enum: [ prst, arst, core, sbr ]
+
+additionalProperties: true
+
+...
diff --git a/Documentation/devicetree/bindings/memory-controllers/snps,dw-umctl2-ddrc.yaml b/Documentation/devicetree/bindings/memory-controllers/snps,dw-umctl2-ddrc.yaml
index 87ff9ee098f5..80b25d2fa974 100644
--- a/Documentation/devicetree/bindings/memory-controllers/snps,dw-umctl2-ddrc.yaml
+++ b/Documentation/devicetree/bindings/memory-controllers/snps,dw-umctl2-ddrc.yaml
@@ -20,6 +20,11 @@ description: |
controller. It has an optional SEC/DEC ECC support in 64- and 32-bits
bus width configurations.
+allOf:
+ - $ref: /schemas/memory-controllers/snps,dw-umctl2-common.yaml#
+
+# Please create a separate DT-schema for your DW uMCTL2 DDR controller
+# with more detailed properties definition.
properties:
compatible:
oneOf:
@@ -31,62 +36,12 @@ properties:
- description: Xilinx ZynqMP DDR controller v2.40a
const: xlnx,zynqmp-ddrc-2.40a
- interrupts:
- description:
- DW uMCTL2 DDRC IP-core provides individual IRQ signal for each event":"
- ECC Corrected Error, ECC Uncorrected Error, ECC Address Protection,
- Scrubber-Done signal, DFI Parity/CRC Error. Some platforms may have the
- signals merged before they reach the IRQ controller or have some of them
- absent in case if the corresponding feature is unavailable/disabled.
- minItems: 1
- maxItems: 5
-
- interrupt-names:
- minItems: 1
- maxItems: 5
- oneOf:
- - description: Common ECC CE/UE/Scrubber/DFI Errors IRQ
- items:
- - const: ecc
- - description: Individual ECC CE/UE/Scrubber/DFI Errors IRQs
- items:
- enum: [ ecc_ce, ecc_ue, ecc_ap, ecc_sbr, dfi_e ]
-
- reg:
- maxItems: 1
-
- clocks:
- description:
- A standard set of the clock sources contains CSRs bus clock, AXI-ports
- reference clock, DDRC core clock, Scrubber standalone clock
- (synchronous to the DDRC clock).
- minItems: 1
- maxItems: 4
-
- clock-names:
- minItems: 1
- maxItems: 4
- items:
- enum: [ pclk, aclk, core, sbr ]
-
- resets:
- description:
- Each clock domain can have separate reset signal.
- minItems: 1
- maxItems: 4
-
- reset-names:
- minItems: 1
- maxItems: 4
- items:
- enum: [ prst, arst, core, sbr ]
-
required:
- compatible
- reg
- interrupts
-additionalProperties: false
+unevaluatedProperties: false
examples:
- |
--
2.41.0
Currently the driver doesn't support any clock-related resources request
and handling, fairly assuming that all of them are supposed to be enabled
anyway in order for the system to work correctly. It's true for the Core
and AXI Ports reference clocks, but the CSR (APB) and Scrubber clocks
might still be disabled in case the system firmware doesn't expect any
other software to touch the DDR controller internals. Since the DW uMCTL2
DDRC driver does access the controller registers, at the very least the
driver needs to make sure the APB clock is enabled.
So add the reference clocks support then. First the driver will request
all the clocks possibly defined for the controller (Core, AXI, APB and
Scrubber). Second, only the APB clock will be enabled/disabled since the
Scrubber is currently unsupported by the driver. Since the Core and AXI
clocks feed the critical system parts, they are left untouched to avoid
the risk of destabilizing the system memory. Please note the clocks
connection IDs have been chosen in accordance with the DT-bindings.
Signed-off-by: Serge Semin <[email protected]>
---
drivers/edac/synopsys_edac.c | 101 +++++++++++++++++++++++++++++++++--
1 file changed, 98 insertions(+), 3 deletions(-)
diff --git a/drivers/edac/synopsys_edac.c b/drivers/edac/synopsys_edac.c
index a91b048facb6..ab4c7cc2daf5 100644
--- a/drivers/edac/synopsys_edac.c
+++ b/drivers/edac/synopsys_edac.c
@@ -8,6 +8,7 @@
#include <linux/bitfield.h>
#include <linux/bits.h>
+#include <linux/clk.h>
#include <linux/edac.h>
#include <linux/fs.h>
#include <linux/log2.h>
@@ -303,6 +304,25 @@ enum snps_ecc_mode {
SNPS_ECC_ADVX4X8 = 5,
};
+/**
+ * enum snps_ref_clk - DW uMCTL2 DDR controller clocks.
+ * @SNPS_CSR_CLK: CSR/APB interface clock.
+ * @SNPS_AXI_CLK: AXI (AHB) Port reference clock.
+ * @SNPS_CORE_CLK: DDR controller (including DFI) clock. SDRAM clock
+ * runs at this freq in 1:1 ratio mode and at
+ * twice this freq in 1:2 ratio mode.
+ * @SNPS_SBR_CLK: Scrubber port reference clock (synchronous to
+ * the core clock).
+ * @SNPS_MAX_NCLK: Total number of clocks.
+ */
+enum snps_ref_clk {
+ SNPS_CSR_CLK,
+ SNPS_AXI_CLK,
+ SNPS_CORE_CLK,
+ SNPS_SBR_CLK,
+ SNPS_MAX_NCLK
+};
+
/**
* struct snps_ddrc_info - DDR controller platform parameters.
* @caps: DDR controller capabilities.
@@ -410,6 +430,7 @@ struct snps_ecc_error_info {
* @pdev: Platform device.
* @baseaddr: Base address of the DDR controller.
* @reglock: Concurrent CSRs access lock.
+ * @clks: Controller reference clocks.
* @message: Buffer for framing the event specific info.
*/
struct snps_edac_priv {
@@ -419,6 +440,7 @@ struct snps_edac_priv {
struct platform_device *pdev;
void __iomem *baseaddr;
spinlock_t reglock;
+ struct clk_bulk_data clks[SNPS_MAX_NCLK];
char message[SNPS_EDAC_MSG_SIZE];
};
@@ -985,6 +1007,60 @@ static struct snps_edac_priv *snps_create_data(struct platform_device *pdev)
return priv;
}
+/**
+ * snps_get_res - Get platform device resources.
+ * @priv: DDR memory controller private instance data.
+ *
+ * It's supposed to request all the controller resources available for the
+ * particular platform and enable all the required for the driver normal
+ * work. Note only the CSR and Scrubber clocks are supposed to be switched
+ * on/off by the driver.
+ *
+ * Return: negative errno if failed to get the resources, otherwise - zero.
+ */
+static int snps_get_res(struct snps_edac_priv *priv)
+{
+ const char * const ids[] = {
+ [SNPS_CSR_CLK] = "pclk",
+ [SNPS_AXI_CLK] = "aclk",
+ [SNPS_CORE_CLK] = "core",
+ [SNPS_SBR_CLK] = "sbr",
+ };
+ int i, rc;
+
+ for (i = 0; i < SNPS_MAX_NCLK; i++)
+ priv->clks[i].id = ids[i];
+
+ rc = devm_clk_bulk_get_optional(&priv->pdev->dev, SNPS_MAX_NCLK,
+ priv->clks);
+ if (rc) {
+ edac_printk(KERN_INFO, EDAC_MC, "Failed to get ref clocks\n");
+ return rc;
+ }
+
+ /*
+ * Don't touch the Core and AXI clocks since they are critical for the
+ * stable system functioning and are supposed to have been enabled
+ * anyway.
+ */
+ rc = clk_prepare_enable(priv->clks[SNPS_CSR_CLK].clk);
+ if (rc) {
+ edac_printk(KERN_INFO, EDAC_MC, "Couldn't enable CSR clock\n");
+ return rc;
+ }
+
+ return 0;
+}
+
+/**
+ * snps_put_res - Put platform device resources.
+ * @priv: DDR memory controller private instance data.
+ */
+static void snps_put_res(struct snps_edac_priv *priv)
+{
+ clk_disable_unprepare(priv->clks[SNPS_CSR_CLK].clk);
+}
+
/*
* zynqmp_init_plat - ZynqMP-specific platform initialization.
* @priv: DDR memory controller private data.
@@ -1718,9 +1794,17 @@ static int snps_ddrc_info_show(struct seq_file *s, void *data)
{
struct mem_ctl_info *mci = s->private;
struct snps_edac_priv *priv = mci->pvt_info;
+ unsigned long rate;
seq_printf(s, "SDRAM: %s\n", edac_mem_types[priv->info.sdram_mode]);
+ rate = clk_get_rate(priv->clks[SNPS_CORE_CLK].clk);
+ if (rate) {
+ rate = rate / HZ_PER_MHZ;
+ seq_printf(s, "Clock: Core %luMHz SDRAM %luMHz\n",
+ rate, priv->info.freq_ratio * rate);
+ }
+
seq_printf(s, "DQ bus: %u/%s\n", (BITS_PER_BYTE << priv->info.dq_width),
priv->info.dq_mode == SNPS_DQ_FULL ? "Full" :
priv->info.dq_mode == SNPS_DQ_HALF ? "Half" :
@@ -2029,15 +2113,21 @@ static int snps_mc_probe(struct platform_device *pdev)
if (IS_ERR(priv))
return PTR_ERR(priv);
- rc = snps_get_ddrc_info(priv);
+ rc = snps_get_res(priv);
if (rc)
return rc;
+ rc = snps_get_ddrc_info(priv);
+ if (rc)
+ goto put_res;
+
snps_get_addr_map(priv);
mci = snps_mc_create(priv);
- if (IS_ERR(mci))
- return PTR_ERR(mci);
+ if (IS_ERR(mci)) {
+ rc = PTR_ERR(mci);
+ goto put_res;
+ }
rc = snps_setup_irq(mci);
if (rc)
@@ -2057,6 +2147,9 @@ static int snps_mc_probe(struct platform_device *pdev)
free_edac_mc:
snps_mc_free(mci);
+put_res:
+ snps_put_res(priv);
+
return rc;
}
@@ -2077,6 +2170,8 @@ static int snps_mc_remove(struct platform_device *pdev)
snps_mc_free(mci);
+ snps_put_res(priv);
+
return 0;
}
--
2.41.0
The DW uMCTL2 DDRC has a so-called ECC Scrub feature, which is engaged
when a single-bit error is detected. The scrub is executed as a new RMW
operation to the location that caused the single-bit error, thus fixing
the ECC code preserved in the SDRAM. That feature is not only optional,
but also runtime-switchable, so there can be platforms with a DW uMCTL2
DDRC that doesn't support hardware-based scrubbing. On those platforms
single-bit errors will still be detected, but won't be fixed until the
next SDRAM write to the erroneous location.

Since the ECC Scrub feature availability is detectable by means of the
ECCCFG0.dis_scrub flag state, use it to tune the MCI core so it
automatically executes the platform-specific scrubbing of the affected
SDRAM location. This is now possible since the DW uMCTL2 DDRC driver
reports the actual system address to the MCI core. The only thing left
to do is to auto-detect the ECC Scrub feature availability and set the
mem_ctl_info.scrub_mode field to SCRUB_SW_SRC if the feature is
unavailable. The rest will be done by the MCI core when single-bit
errors happen.
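The scrub-mode selection described above boils down to a single bit
test. A minimal sketch of that decision, assuming a hypothetical
pick_scrub_mode() helper and the ECCCFG0.dis_scrub bit position used by
this patch (the real driver reads the register via readl() and stores
the result in priv->info.caps):

```c
#include <stdint.h>

/* ECCCFG0.dis_scrub: bit 4 set means the HW RMW scrub engine is off */
#define ECC_CFG0_DIS_SCRUB	(1u << 4)

enum scrub_mode { SCRUB_SW_SRC, SCRUB_HW_SRC };

/* Hypothetical helper mirroring the patch's capability detection:
 * HW-assisted source scrubbing unless the feature is disabled. */
static enum scrub_mode pick_scrub_mode(uint32_t ecccfg0)
{
	return (ecccfg0 & ECC_CFG0_DIS_SCRUB) ? SCRUB_SW_SRC : SCRUB_HW_SRC;
}
```

With dis_scrub clear the MCI core is told SCRUB_HW_SRC and stays out of
the way; with it set the core falls back to the platform scrub method.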
Signed-off-by: Serge Semin <[email protected]>
---
drivers/edac/synopsys_edac.c | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/drivers/edac/synopsys_edac.c b/drivers/edac/synopsys_edac.c
index 001553f3849a..4ee39d6809cc 100644
--- a/drivers/edac/synopsys_edac.c
+++ b/drivers/edac/synopsys_edac.c
@@ -32,6 +32,7 @@
#define SNPS_EDAC_MOD_VER "1"
/* DDR capabilities */
+#define SNPS_CAP_ECC_SCRUB BIT(0)
#define SNPS_CAP_ZYNQMP BIT(31)
/* Synopsys uMCTL2 DDR controller registers that are relevant to ECC */
@@ -119,6 +120,7 @@
#define DDR_MSTR_MEM_DDR2 0
/* ECC CFG0 register definitions */
+#define ECC_CFG0_DIS_SCRUB BIT(4)
#define ECC_CFG0_MODE_MASK GENMASK(2, 0)
/* ECC status register definitions */
@@ -1014,6 +1016,10 @@ static int snps_get_ddrc_info(struct snps_edac_priv *priv)
return -ENXIO;
}
+ /* Assume HW-src scrub is always available if it isn't disabled */
+ if (!(regval & ECC_CFG0_DIS_SCRUB))
+ priv->info.caps |= SNPS_CAP_ECC_SCRUB;
+
/* Auto-detect the basic HIF/SDRAM bus parameters */
regval = readl(priv->baseaddr + DDR_MSTR_OFST);
@@ -1490,8 +1496,14 @@ static struct mem_ctl_info *snps_mc_create(struct snps_edac_priv *priv)
MEM_FLAG_DDR3 | MEM_FLAG_LPDDR3 |
MEM_FLAG_DDR4 | MEM_FLAG_LPDDR4;
mci->edac_ctl_cap = EDAC_FLAG_NONE | EDAC_FLAG_SECDED;
- mci->scrub_cap = SCRUB_FLAG_HW_SRC;
- mci->scrub_mode = SCRUB_NONE;
+
+ if (priv->info.caps & SNPS_CAP_ECC_SCRUB) {
+ mci->scrub_mode = SCRUB_HW_SRC;
+ mci->scrub_cap = SCRUB_FLAG_HW_SRC;
+ } else {
+ mci->scrub_mode = SCRUB_SW_SRC;
+ mci->scrub_cap = SCRUB_FLAG_SW_SRC;
+ }
mci->edac_cap = EDAC_FLAG_SECDED;
mci->ctl_name = "snps_umctl2_ddrc";
@@ -1584,6 +1596,8 @@ static int snps_ddrc_info_show(struct seq_file *s, void *data)
seq_puts(s, "Caps:");
if (priv->info.caps) {
+ if (priv->info.caps & SNPS_CAP_ECC_SCRUB)
+ seq_puts(s, " +Scrub");
if (priv->info.caps & SNPS_CAP_ZYNQMP)
seq_puts(s, " +ZynqMP");
} else {
--
2.41.0
Since the driver now has the generic Sys/SDRAM address translation
interface, there is no need to preserve the poisonous address in the
driver private data, especially since it is only used by the DebugFS
node anyway. So drop the snps_edac_priv.poison_addr field and perform
the Sys/SDRAM back-and-forth address translation right in the
"inject_data_error" node accessors.

This causes a bit more modification than a simple field removal. Since
the poisonous address is no longer preserved, there is no point in
having the snps_data_poison_setup() method, so its content is moved
right into the "inject_data_error" write operation. For the same reason
there is no point in printing the raw ECCPOISONADDR{0,1} registers
content in the "inject_data_error" read operation. Since the CSRs
content is parsed now anyway, print the SDRAM address instead.
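The read accessor must undo exactly what the write accessor did, which
is why it has to use FIELD_GET() where the writer used FIELD_PREP(). A
small round-trip sketch with simplified stand-ins for the kernel's
<linux/bitfield.h> macros (the masks below are illustrative, not the
actual ECCPOISON0 layout):

```c
#include <stdint.h>

/* Illustrative poison-register field masks, not the real CSR layout */
#define POISON0_RANK_MASK	0xff000000u
#define POISON0_COL_MASK	0x00000fffu

/* Simplified FIELD_PREP(): shift a value up into its mask position */
static uint32_t field_prep(uint32_t mask, uint32_t val)
{
	return (val * (mask & (~mask + 1u))) & mask;
}

/* Simplified FIELD_GET(): mask a register and shift the field down */
static uint32_t field_get(uint32_t mask, uint32_t reg)
{
	return (reg & mask) / (mask & (~mask + 1u));
}
```

Packing a rank/column pair with field_prep() and unpacking it with
field_get() returns the original values; mixing the two up (as the
pre-fix read accessor did) breaks the round trip.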
Signed-off-by: Serge Semin <[email protected]>
---
Changelog v4:
- Fix inject_data_error string printing "Rank" word where "Col" is
supposed to be.
---
drivers/edac/synopsys_edac.c | 68 +++++++++++++++++-------------------
1 file changed, 32 insertions(+), 36 deletions(-)
diff --git a/drivers/edac/synopsys_edac.c b/drivers/edac/synopsys_edac.c
index 4ee39d6809cc..90640b2f877a 100644
--- a/drivers/edac/synopsys_edac.c
+++ b/drivers/edac/synopsys_edac.c
@@ -411,7 +411,6 @@ struct snps_ecc_status {
* @reglock: Concurrent CSRs access lock.
* @message: Buffer for framing the event specific info.
* @stat: ECC status information.
- * @poison_addr: Data poison address.
*/
struct snps_edac_priv {
struct snps_ddrc_info info;
@@ -422,9 +421,6 @@ struct snps_edac_priv {
spinlock_t reglock;
char message[SNPS_EDAC_MSG_SIZE];
struct snps_ecc_status stat;
-#ifdef CONFIG_EDAC_DEBUG
- ulong poison_addr;
-#endif
};
/**
@@ -1719,44 +1715,32 @@ static int snps_hif_sdram_map_show(struct seq_file *s, void *data)
DEFINE_SHOW_ATTRIBUTE(snps_hif_sdram_map);
-/**
- * snps_data_poison_setup - Update poison registers.
- * @priv: DDR memory controller private instance data.
- *
- * Update poison registers as per DDR mapping.
- * Return: none.
- */
-static void snps_data_poison_setup(struct snps_edac_priv *priv)
-{
- struct snps_sdram_addr sdram;
- u32 regval;
-
- snps_map_sys_to_sdram(priv, priv->poison_addr, &sdram);
-
- regval = FIELD_PREP(ECC_POISON0_RANK_MASK, sdram.rank) |
- FIELD_PREP(ECC_POISON0_COL_MASK, sdram.col);
- writel(regval, priv->baseaddr + ECC_POISON0_OFST);
-
- regval = FIELD_PREP(ECC_POISON1_BANKGRP_MASK, sdram.bankgrp) |
- FIELD_PREP(ECC_POISON1_BANK_MASK, sdram.bank) |
- FIELD_PREP(ECC_POISON1_ROW_MASK, sdram.row);
- writel(regval, priv->baseaddr + ECC_POISON1_OFST);
-}
-
static ssize_t snps_inject_data_error_read(struct file *filep, char __user *ubuf,
size_t size, loff_t *offp)
{
struct mem_ctl_info *mci = filep->private_data;
struct snps_edac_priv *priv = mci->pvt_info;
+ struct snps_sdram_addr sdram;
char buf[SNPS_DBGFS_BUF_LEN];
+ dma_addr_t sys;
+ u32 regval;
int pos;
- pos = scnprintf(buf, sizeof(buf), "Poison0 Addr: 0x%08x\n\r",
- readl(priv->baseaddr + ECC_POISON0_OFST));
- pos += scnprintf(buf + pos, sizeof(buf) - pos, "Poison1 Addr: 0x%08x\n\r",
- readl(priv->baseaddr + ECC_POISON1_OFST));
- pos += scnprintf(buf + pos, sizeof(buf) - pos, "Error injection Address: 0x%lx\n\r",
- priv->poison_addr);
+ regval = readl(priv->baseaddr + ECC_POISON0_OFST);
+ sdram.rank = FIELD_GET(ECC_POISON0_RANK_MASK, regval);
+ sdram.col = FIELD_GET(ECC_POISON0_COL_MASK, regval);
+
+ regval = readl(priv->baseaddr + ECC_POISON1_OFST);
+ sdram.bankgrp = FIELD_GET(ECC_POISON1_BANKGRP_MASK, regval);
+ sdram.bank = FIELD_GET(ECC_POISON1_BANK_MASK, regval);
+ sdram.row = FIELD_GET(ECC_POISON1_ROW_MASK, regval);
+
+ snps_map_sdram_to_sys(priv, &sdram, &sys);
+
+ pos = scnprintf(buf, sizeof(buf),
+ "%pad: Row %hu Col %hu Bank %hhu Bank Group %hhu Rank %hhu\n",
+ &sys, sdram.row, sdram.col, sdram.bank, sdram.bankgrp,
+ sdram.rank);
return simple_read_from_buffer(ubuf, size, offp, buf, pos);
}
@@ -1766,13 +1750,25 @@ static ssize_t snps_inject_data_error_write(struct file *filep, const char __use
{
struct mem_ctl_info *mci = filep->private_data;
struct snps_edac_priv *priv = mci->pvt_info;
+ struct snps_sdram_addr sdram;
+ u32 regval;
+ u64 sys;
int rc;
- rc = kstrtoul_from_user(ubuf, size, 0, &priv->poison_addr);
+ rc = kstrtou64_from_user(ubuf, size, 0, &sys);
if (rc)
return rc;
- snps_data_poison_setup(priv);
+ snps_map_sys_to_sdram(priv, sys, &sdram);
+
+ regval = FIELD_PREP(ECC_POISON0_RANK_MASK, sdram.rank) |
+ FIELD_PREP(ECC_POISON0_COL_MASK, sdram.col);
+ writel(regval, priv->baseaddr + ECC_POISON0_OFST);
+
+ regval = FIELD_PREP(ECC_POISON1_BANKGRP_MASK, sdram.bankgrp) |
+ FIELD_PREP(ECC_POISON1_BANK_MASK, sdram.bank) |
+ FIELD_PREP(ECC_POISON1_ROW_MASK, sdram.row);
+ writel(regval, priv->baseaddr + ECC_POISON1_OFST);
return size;
}
--
2.41.0
The DW uMCTL2 DDRC supports multi-rank memory attached to the
controller, in which case the MSTR.active_ranks field holds the
populated-ranks bitfield. It is permitted to have one, two or four
ranks activated at a time [1]. Since the driver now supports detecting
the number of ranks, use it to extend the MCI chip-select layer
accordingly. In case of ECC errors the affected rank is read from the
CE/UE address CSRs [2].

Note since the multi-rankness is now abstracted out on the EDAC-core
layers[0] level, drop the ranks from the total memory size calculation.
[1] DesignWare® Cores Enhanced Universal DDR Memory Controller (uMCTL2)
Databook, Version 3.91a, October 2020, p.739
[2] DesignWare® Cores Enhanced Universal DDR Memory Controller (uMCTL2)
Databook, Version 3.91a, October 2020, p.821, p.832
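Turning the MSTR.active_ranks bitfield into the rank count the
chip-select layer needs is just a population count. A sketch, assuming
the legal encodings from the databook (the kernel would simply use
hweight32() for this):

```c
#include <stdint.h>

/* MSTR.active_ranks is a populated-ranks bitfield: 0x1, 0x3 or 0xF
 * for one, two or four activated ranks respectively. Count the set
 * bits to get the number of ranks. */
static unsigned int active_ranks_to_count(uint32_t active_ranks)
{
	unsigned int n = 0;

	while (active_ranks) {
		n += active_ranks & 1;
		active_ranks >>= 1;
	}
	return n;
}
```

The result is what layers[0].size is set to in snps_mc_create(), so the
MCI core allocates one virtual csrow per populated rank.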
Signed-off-by: Serge Semin <[email protected]>
---
drivers/edac/synopsys_edac.c | 15 +++++----------
1 file changed, 5 insertions(+), 10 deletions(-)
diff --git a/drivers/edac/synopsys_edac.c b/drivers/edac/synopsys_edac.c
index 9a621b7a256d..001553f3849a 100644
--- a/drivers/edac/synopsys_edac.c
+++ b/drivers/edac/synopsys_edac.c
@@ -23,9 +23,6 @@
#include "edac_module.h"
-/* Number of cs_rows needed per memory controller */
-#define SNPS_EDAC_NR_CSROWS 1
-
/* Number of channels per memory controller */
#define SNPS_EDAC_NR_CHANS 1
@@ -799,7 +796,7 @@ static void snps_handle_error(struct mem_ctl_info *mci, struct snps_ecc_status *
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, p->ce_cnt,
PHYS_PFN(sys), offset_in_page(sys),
- pinf->syndrome, 0, 0, -1,
+ pinf->syndrome, pinf->sdram.rank, 0, -1,
priv->message, "");
}
@@ -816,7 +813,8 @@ static void snps_handle_error(struct mem_ctl_info *mci, struct snps_ecc_status *
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, p->ue_cnt,
PHYS_PFN(sys), offset_in_page(sys),
- 0, 0, 0, -1, priv->message, "");
+ 0, pinf->sdram.rank, 0, -1,
+ priv->message, "");
}
memset(p, 0, sizeof(*p));
@@ -1416,10 +1414,7 @@ static u64 snps_get_sdram_size(struct snps_edac_priv *priv)
size++;
}
- for (i = 0; i < DDR_MAX_RANK_WIDTH; i++) {
- if (map->rank[i] != DDR_ADDRMAP_UNUSED)
- size++;
- }
+ /* Skip the ranks since the multi-rankness is determined by layer[0] */
return 1ULL << (size + priv->info.dq_width);
}
@@ -1473,7 +1468,7 @@ static struct mem_ctl_info *snps_mc_create(struct snps_edac_priv *priv)
struct mem_ctl_info *mci;
layers[0].type = EDAC_MC_LAYER_CHIP_SELECT;
- layers[0].size = SNPS_EDAC_NR_CSROWS;
+ layers[0].size = priv->info.ranks;
layers[0].is_virt_csrow = true;
layers[1].type = EDAC_MC_LAYER_CHANNEL;
layers[1].size = SNPS_EDAC_NR_CHANS;
--
2.41.0