Hi Ulf,
we have a bug on some Octeon platforms, so I removed the Octeon driver for now
(but kept the DT bindings for it). We'll submit the Octeon driver later, once
we've fixed the issue.
Changes since v12:
- dts: use generic "mmc-slot" for slots
- dts: mention deprecated power gpio
- Rename driver files
- Use hardcoded voltage instead of mmc_of_parse_voltage()
- Phase out gpiod usage from cavium.c
- Change DT property scan order
- Clean up bus_width setting
- Depend on GPIOLIB for the ThunderX driver
- ThunderX: Remove TODO
- ThunderX: Move platform pointers to host struct
- Check slot node compatible string
- Remove gpio includes from ThunderX driver
Changes since v11:
- Fix build error and kill IS_ENABLED() by using a per-arch offset
- Add Rob's ACK for the DT bindings
- Remove obsolete voltage-ranges from DT example
- Replace pci_msix_enable() with pci_alloc_irq_vectors()
- Remove superfluous hardware comment
- Prefix probe/remove functions with of_
- Merge OF parsing code into one function, change property lookup
  order and simplify the code
- Remove slot->sclock, no need to store it there
- Substitute the now-invisible mmc_card_blockaddr()
- Use new 3.3V CAP for DDR
- Update copyright
- Allow set_ios to set the clock to zero
- Convert bitfields to shift-and-mask logic
- Improve error codes after receiving an error interrupt
- Add ifndef guards to header
- Add meaningful interrupt names
- Remove stale mmc_host_ops prototype
Changes since v10:
- Rename files to get a common prefix
- Select GPIO driver in Kconfig
- Support a fixed regulator
- dts: fix quotes and re-order example
- Use new MMC_CAP_3_3V_DDR instead of the 1_8V hack
- Use blksz instead of the now-internal mmc_card_blockaddr
- Add some maintainers
Previous versions:
v10: https://www.mail-archive.com/[email protected]/msg1295316.html
v9: http://marc.info/?l=linux-mmc&m=147431759215233&w=2
Cheers,
Jan
-------
Jan Glauber (6):
dt-bindings: mmc: Add Cavium SOCs MMC bindings
mmc: cavium: Add core MMC driver for Cavium SOCs
mmc: cavium: Add MMC PCI driver for ThunderX SOCs
mmc: cavium: Add scatter-gather DMA support
mmc: cavium: Support DDR mode for eMMC devices
MAINTAINERS: Add entry for Cavium MMC driver
.../devicetree/bindings/mmc/cavium-mmc.txt | 57 +
MAINTAINERS | 8 +
drivers/mmc/host/Kconfig | 10 +
drivers/mmc/host/Makefile | 2 +
drivers/mmc/host/cavium-thunderx.c | 198 ++++
drivers/mmc/host/cavium.c | 1090 ++++++++++++++++++++
drivers/mmc/host/cavium.h | 215 ++++
7 files changed, 1580 insertions(+)
create mode 100644 Documentation/devicetree/bindings/mmc/cavium-mmc.txt
create mode 100644 drivers/mmc/host/cavium-thunderx.c
create mode 100644 drivers/mmc/host/cavium.c
create mode 100644 drivers/mmc/host/cavium.h
--
2.9.0.rc0.21.g7777322
This core driver will be used by a MIPS platform driver or by an
ARM64 PCI driver. The core driver implements the mmc_host_ops and
the slot probe & remove functions. Callbacks are provided to allow
platform-specific interrupt enabling and bus locking.
The host controller supports:
- up to 4 slots that can hold SD cards or eMMC chips
- 1-, 4- and 8-bit bus width
- SDR and DDR
- clock speeds up to 52 MHz (may be lower when multiple slots are used)
- DMA read/write
- multi-block read/write (but not stream mode)
Voltage is limited to 3.3 V and shared by all slots (vmmc and vmmcq).
A global lock for all MMC devices is required because the host
controller is shared.
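The shared-bus rule above can be sketched in a few lines of plain C (a toy model only; the toy_* names are invented here and nothing below is part of the patch):

```c
#include <assert.h>

/* Toy model of the serialization described above: one lock guards
 * the whole controller, and every slot operation must take it.
 * In the real driver acquire_bus/release_bus are host callbacks
 * (semaphore- or spinlock-based); here a flag stands in for them. */
struct toy_host {
	int bus_locked;
	int current_slot;	/* mirrors host->last_slot */
};

static void toy_acquire_bus(struct toy_host *h)
{
	assert(!h->bus_locked);	/* only one user of the bus at a time */
	h->bus_locked = 1;
}

static void toy_release_bus(struct toy_host *h)
{
	h->bus_locked = 0;
}

/* Switching slots is only legal while the bus is held. */
static int toy_do_request(struct toy_host *h, int slot)
{
	toy_acquire_bus(h);
	if (h->current_slot != slot)
		h->current_slot = slot;	/* stands in for a slot switch */
	/* ... issue command; the real driver releases the lock from the
	 * completion interrupt handler, not inline like this ... */
	toy_release_bus(h);
	return h->current_slot;
}
```

Note that for real requests the lock is held across the completion interrupt, which is why the request path has no inline release on success.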
Signed-off-by: Jan Glauber <[email protected]>
Signed-off-by: David Daney <[email protected]>
Signed-off-by: Steven J. Hill <[email protected]>
---
drivers/mmc/host/cavium.c | 982 ++++++++++++++++++++++++++++++++++++++++++++++
drivers/mmc/host/cavium.h | 192 +++++++++
2 files changed, 1174 insertions(+)
create mode 100644 drivers/mmc/host/cavium.c
create mode 100644 drivers/mmc/host/cavium.h
diff --git a/drivers/mmc/host/cavium.c b/drivers/mmc/host/cavium.c
new file mode 100644
index 0000000..910e290
--- /dev/null
+++ b/drivers/mmc/host/cavium.c
@@ -0,0 +1,982 @@
+/*
+ * Shared part of driver for MMC/SDHC controller on Cavium OCTEON and
+ * ThunderX SOCs.
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2012-2017 Cavium Inc.
+ * Authors:
+ * David Daney <[email protected]>
+ * Peter Swain <[email protected]>
+ * Steven J. Hill <[email protected]>
+ * Jan Glauber <[email protected]>
+ */
+#include <linux/bitfield.h>
+#include <linux/delay.h>
+#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
+#include <linux/gpio/consumer.h>
+#include <linux/interrupt.h>
+#include <linux/mmc/mmc.h>
+#include <linux/mmc/slot-gpio.h>
+#include <linux/module.h>
+#include <linux/regulator/consumer.h>
+#include <linux/scatterlist.h>
+#include <linux/time.h>
+
+#include "cavium.h"
+
+const char *cvm_mmc_irq_names[] = {
+ "MMC Buffer",
+ "MMC Command",
+ "MMC DMA",
+ "MMC Command Error",
+ "MMC DMA Error",
+ "MMC Switch",
+ "MMC Switch Error",
+ "MMC DMA int Fifo",
+ "MMC DMA int",
+};
+
+/*
+ * The Cavium MMC host hardware assumes that all commands have fixed
+ * command and response types. These are correct if MMC devices are
+ * being used. However, non-MMC devices like SD use command and
+ * response types that are unexpected by the host hardware.
+ *
+ * The command and response types can be overridden by supplying an
+ * XOR value that is applied to the type. We calculate the XOR value
+ * from the values in this table and the flags passed from the MMC
+ * core.
+ */
+static struct cvm_mmc_cr_type cvm_mmc_cr_types[] = {
+ {0, 0}, /* CMD0 */
+ {0, 3}, /* CMD1 */
+ {0, 2}, /* CMD2 */
+ {0, 1}, /* CMD3 */
+ {0, 0}, /* CMD4 */
+ {0, 1}, /* CMD5 */
+ {0, 1}, /* CMD6 */
+ {0, 1}, /* CMD7 */
+ {1, 1}, /* CMD8 */
+ {0, 2}, /* CMD9 */
+ {0, 2}, /* CMD10 */
+ {1, 1}, /* CMD11 */
+ {0, 1}, /* CMD12 */
+ {0, 1}, /* CMD13 */
+ {1, 1}, /* CMD14 */
+ {0, 0}, /* CMD15 */
+ {0, 1}, /* CMD16 */
+ {1, 1}, /* CMD17 */
+ {1, 1}, /* CMD18 */
+ {3, 1}, /* CMD19 */
+ {2, 1}, /* CMD20 */
+ {0, 0}, /* CMD21 */
+ {0, 0}, /* CMD22 */
+ {0, 1}, /* CMD23 */
+ {2, 1}, /* CMD24 */
+ {2, 1}, /* CMD25 */
+ {2, 1}, /* CMD26 */
+ {2, 1}, /* CMD27 */
+ {0, 1}, /* CMD28 */
+ {0, 1}, /* CMD29 */
+ {1, 1}, /* CMD30 */
+ {1, 1}, /* CMD31 */
+ {0, 0}, /* CMD32 */
+ {0, 0}, /* CMD33 */
+ {0, 0}, /* CMD34 */
+ {0, 1}, /* CMD35 */
+ {0, 1}, /* CMD36 */
+ {0, 0}, /* CMD37 */
+ {0, 1}, /* CMD38 */
+ {0, 4}, /* CMD39 */
+ {0, 5}, /* CMD40 */
+ {0, 0}, /* CMD41 */
+ {2, 1}, /* CMD42 */
+ {0, 0}, /* CMD43 */
+ {0, 0}, /* CMD44 */
+ {0, 0}, /* CMD45 */
+ {0, 0}, /* CMD46 */
+ {0, 0}, /* CMD47 */
+ {0, 0}, /* CMD48 */
+ {0, 0}, /* CMD49 */
+ {0, 0}, /* CMD50 */
+ {0, 0}, /* CMD51 */
+ {0, 0}, /* CMD52 */
+ {0, 0}, /* CMD53 */
+ {0, 0}, /* CMD54 */
+ {0, 1}, /* CMD55 */
+ {0xff, 0xff}, /* CMD56 */
+ {0, 0}, /* CMD57 */
+ {0, 0}, /* CMD58 */
+ {0, 0}, /* CMD59 */
+ {0, 0}, /* CMD60 */
+ {0, 0}, /* CMD61 */
+ {0, 0}, /* CMD62 */
+ {0, 0} /* CMD63 */
+};
+
+static struct cvm_mmc_cr_mods cvm_mmc_get_cr_mods(struct mmc_command *cmd)
+{
+ struct cvm_mmc_cr_type *cr;
+ u8 hardware_ctype, hardware_rtype;
+ u8 desired_ctype = 0, desired_rtype = 0;
+ struct cvm_mmc_cr_mods r;
+
+ cr = cvm_mmc_cr_types + (cmd->opcode & 0x3f);
+ hardware_ctype = cr->ctype;
+ hardware_rtype = cr->rtype;
+ if (cmd->opcode == MMC_GEN_CMD)
+ hardware_ctype = (cmd->arg & 1) ? 1 : 2;
+
+ switch (mmc_cmd_type(cmd)) {
+ case MMC_CMD_ADTC:
+ desired_ctype = (cmd->data->flags & MMC_DATA_WRITE) ? 2 : 1;
+ break;
+ case MMC_CMD_AC:
+ case MMC_CMD_BC:
+ case MMC_CMD_BCR:
+ desired_ctype = 0;
+ break;
+ }
+
+ switch (mmc_resp_type(cmd)) {
+ case MMC_RSP_NONE:
+ desired_rtype = 0;
+ break;
+	case MMC_RSP_R1: /* MMC_RSP_R5, MMC_RSP_R6, MMC_RSP_R7 */
+ case MMC_RSP_R1B:
+ desired_rtype = 1;
+ break;
+ case MMC_RSP_R2:
+ desired_rtype = 2;
+ break;
+ case MMC_RSP_R3: /* MMC_RSP_R4 */
+ desired_rtype = 3;
+ break;
+ }
+ r.ctype_xor = desired_ctype ^ hardware_ctype;
+ r.rtype_xor = desired_rtype ^ hardware_rtype;
+ return r;
+}
+
+static void check_switch_errors(struct cvm_mmc_host *host)
+{
+ u64 emm_switch;
+
+ emm_switch = readq(host->base + MIO_EMM_SWITCH(host));
+ if (emm_switch & MIO_EMM_SWITCH_ERR0)
+ dev_err(host->dev, "Switch power class error\n");
+ if (emm_switch & MIO_EMM_SWITCH_ERR1)
+ dev_err(host->dev, "Switch hs timing error\n");
+ if (emm_switch & MIO_EMM_SWITCH_ERR2)
+ dev_err(host->dev, "Switch bus width error\n");
+}
+
+static void clear_bus_id(u64 *reg)
+{
+ u64 bus_id_mask = GENMASK_ULL(61, 60);
+
+ *reg &= ~bus_id_mask;
+}
+
+static void set_bus_id(u64 *reg, int bus_id)
+{
+ clear_bus_id(reg);
+	*reg |= FIELD_PREP(GENMASK_ULL(61, 60), bus_id);
+}
+
+static int get_bus_id(u64 reg)
+{
+ return FIELD_GET(GENMASK_ULL(61, 60), reg);
+}
+
+/*
+ * We never set the switch_exe bit since that would interfere
+ * with the commands sent by the MMC core.
+ */
+static void do_switch(struct cvm_mmc_host *host, u64 emm_switch)
+{
+ int retries = 100;
+ u64 rsp_sts;
+ int bus_id;
+
+ /*
+	 * The mode settings are only taken from slot 0. Work around this
+	 * hardware issue by switching to slot 0 first.
+ */
+ bus_id = get_bus_id(emm_switch);
+ clear_bus_id(&emm_switch);
+ writeq(emm_switch, host->base + MIO_EMM_SWITCH(host));
+
+ set_bus_id(&emm_switch, bus_id);
+ writeq(emm_switch, host->base + MIO_EMM_SWITCH(host));
+
+ /* wait for the switch to finish */
+ do {
+ rsp_sts = readq(host->base + MIO_EMM_RSP_STS(host));
+ if (!(rsp_sts & MIO_EMM_RSP_STS_SWITCH_VAL))
+ break;
+ udelay(10);
+ } while (--retries);
+
+ check_switch_errors(host);
+}
+
+static bool switch_val_changed(struct cvm_mmc_slot *slot, u64 new_val)
+{
+ /* Match BUS_ID, HS_TIMING, BUS_WIDTH, POWER_CLASS, CLK_HI, CLK_LO */
+ u64 match = 0x3001070fffffffffull;
+
+ return (slot->cached_switch & match) != (new_val & match);
+}
+
+static void set_wdog(struct cvm_mmc_slot *slot, unsigned int ns)
+{
+ u64 timeout;
+
+ if (!slot->clock)
+ return;
+
+ if (ns)
+ timeout = (slot->clock * ns) / NSEC_PER_SEC;
+ else
+ timeout = (slot->clock * 850ull) / 1000ull;
+ writeq(timeout, slot->host->base + MIO_EMM_WDOG(slot->host));
+}
+
+static void cvm_mmc_reset_bus(struct cvm_mmc_slot *slot)
+{
+ struct cvm_mmc_host *host = slot->host;
+ u64 emm_switch, wdog;
+
+ emm_switch = readq(slot->host->base + MIO_EMM_SWITCH(host));
+ emm_switch &= ~(MIO_EMM_SWITCH_EXE | MIO_EMM_SWITCH_ERR0 |
+ MIO_EMM_SWITCH_ERR1 | MIO_EMM_SWITCH_ERR2);
+ set_bus_id(&emm_switch, slot->bus_id);
+
+ wdog = readq(slot->host->base + MIO_EMM_WDOG(host));
+ do_switch(slot->host, emm_switch);
+
+ slot->cached_switch = emm_switch;
+
+ msleep(20);
+
+ writeq(wdog, slot->host->base + MIO_EMM_WDOG(host));
+}
+
+/* Switch to another slot if needed */
+static void cvm_mmc_switch_to(struct cvm_mmc_slot *slot)
+{
+ struct cvm_mmc_host *host = slot->host;
+ struct cvm_mmc_slot *old_slot;
+ u64 emm_sample, emm_switch;
+
+ if (slot->bus_id == host->last_slot)
+ return;
+
+ if (host->last_slot >= 0 && host->slot[host->last_slot]) {
+ old_slot = host->slot[host->last_slot];
+ old_slot->cached_switch = readq(host->base + MIO_EMM_SWITCH(host));
+ old_slot->cached_rca = readq(host->base + MIO_EMM_RCA(host));
+ }
+
+ writeq(slot->cached_rca, host->base + MIO_EMM_RCA(host));
+ emm_switch = slot->cached_switch;
+ set_bus_id(&emm_switch, slot->bus_id);
+ do_switch(host, emm_switch);
+
+ emm_sample = FIELD_PREP(MIO_EMM_SAMPLE_CMD_CNT, slot->cmd_cnt) |
+ FIELD_PREP(MIO_EMM_SAMPLE_DAT_CNT, slot->dat_cnt);
+ writeq(emm_sample, host->base + MIO_EMM_SAMPLE(host));
+
+ host->last_slot = slot->bus_id;
+}
+
+static void do_read(struct cvm_mmc_host *host, struct mmc_request *req,
+ u64 dbuf)
+{
+ struct sg_mapping_iter *smi = &host->smi;
+ int data_len = req->data->blocks * req->data->blksz;
+ int bytes_xfered, shift = -1;
+ u64 dat = 0;
+
+ /* Auto inc from offset zero */
+ writeq((0x10000 | (dbuf << 6)), host->base + MIO_EMM_BUF_IDX(host));
+
+ for (bytes_xfered = 0; bytes_xfered < data_len;) {
+ if (smi->consumed >= smi->length) {
+ if (!sg_miter_next(smi))
+ break;
+ smi->consumed = 0;
+ }
+
+ if (shift < 0) {
+ dat = readq(host->base + MIO_EMM_BUF_DAT(host));
+ shift = 56;
+ }
+
+ while (smi->consumed < smi->length && shift >= 0) {
+ ((u8 *)smi->addr)[smi->consumed] = (dat >> shift) & 0xff;
+ bytes_xfered++;
+ smi->consumed++;
+ shift -= 8;
+ }
+ }
+
+ sg_miter_stop(smi);
+ req->data->bytes_xfered = bytes_xfered;
+ req->data->error = 0;
+}
+
+static void do_write(struct mmc_request *req)
+{
+ req->data->bytes_xfered = req->data->blocks * req->data->blksz;
+ req->data->error = 0;
+}
+
+static void set_cmd_response(struct cvm_mmc_host *host, struct mmc_request *req,
+ u64 rsp_sts)
+{
+ u64 rsp_hi, rsp_lo;
+
+ if (!(rsp_sts & MIO_EMM_RSP_STS_RSP_VAL))
+ return;
+
+ rsp_lo = readq(host->base + MIO_EMM_RSP_LO(host));
+
+ switch (FIELD_GET(MIO_EMM_RSP_STS_RSP_TYPE, rsp_sts)) {
+ case 1:
+ case 3:
+ req->cmd->resp[0] = (rsp_lo >> 8) & 0xffffffff;
+ req->cmd->resp[1] = 0;
+ req->cmd->resp[2] = 0;
+ req->cmd->resp[3] = 0;
+ break;
+ case 2:
+ req->cmd->resp[3] = rsp_lo & 0xffffffff;
+ req->cmd->resp[2] = (rsp_lo >> 32) & 0xffffffff;
+ rsp_hi = readq(host->base + MIO_EMM_RSP_HI(host));
+ req->cmd->resp[1] = rsp_hi & 0xffffffff;
+ req->cmd->resp[0] = (rsp_hi >> 32) & 0xffffffff;
+ break;
+ }
+}
+
+static int get_dma_dir(struct mmc_data *data)
+{
+ return (data->flags & MMC_DATA_WRITE) ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
+}
+
+static int finish_dma_single(struct cvm_mmc_host *host, struct mmc_data *data)
+{
+ data->bytes_xfered = data->blocks * data->blksz;
+ data->error = 0;
+ return 1;
+}
+
+static int finish_dma(struct cvm_mmc_host *host, struct mmc_data *data)
+{
+ return finish_dma_single(host, data);
+}
+
+static int check_status(u64 rsp_sts)
+{
+ if (rsp_sts & MIO_EMM_RSP_STS_RSP_BAD_STS ||
+ rsp_sts & MIO_EMM_RSP_STS_RSP_CRC_ERR ||
+ rsp_sts & MIO_EMM_RSP_STS_BLK_CRC_ERR)
+ return -EILSEQ;
+ if (rsp_sts & MIO_EMM_RSP_STS_RSP_TIMEOUT ||
+ rsp_sts & MIO_EMM_RSP_STS_BLK_TIMEOUT)
+ return -ETIMEDOUT;
+ if (rsp_sts & MIO_EMM_RSP_STS_DBUF_ERR)
+ return -EIO;
+ return 0;
+}
+
+/* Try to clean up failed DMA. */
+static void cleanup_dma(struct cvm_mmc_host *host, u64 rsp_sts)
+{
+ u64 emm_dma;
+
+ emm_dma = readq(host->base + MIO_EMM_DMA(host));
+ emm_dma |= FIELD_PREP(MIO_EMM_DMA_VAL, 1) |
+ FIELD_PREP(MIO_EMM_DMA_DAT_NULL, 1);
+ set_bus_id(&emm_dma, get_bus_id(rsp_sts));
+ writeq(emm_dma, host->base + MIO_EMM_DMA(host));
+}
+
+irqreturn_t cvm_mmc_interrupt(int irq, void *dev_id)
+{
+ struct cvm_mmc_host *host = dev_id;
+ struct mmc_request *req;
+ unsigned long flags = 0;
+ u64 emm_int, rsp_sts;
+ bool host_done;
+
+ if (host->need_irq_handler_lock)
+ spin_lock_irqsave(&host->irq_handler_lock, flags);
+ else
+ __acquire(&host->irq_handler_lock);
+
+	/* Clear interrupt bits (write 1 to clear). */
+ emm_int = readq(host->base + MIO_EMM_INT(host));
+ writeq(emm_int, host->base + MIO_EMM_INT(host));
+
+ if (emm_int & MIO_EMM_INT_SWITCH_ERR)
+ check_switch_errors(host);
+
+ req = host->current_req;
+ if (!req)
+ goto out;
+
+ rsp_sts = readq(host->base + MIO_EMM_RSP_STS(host));
+ /*
+ * dma_val set means DMA is still in progress. Don't touch
+ * the request and wait for the interrupt indicating that
+ * the DMA is finished.
+ */
+ if ((rsp_sts & MIO_EMM_RSP_STS_DMA_VAL) && host->dma_active)
+ goto out;
+
+ if (!host->dma_active && req->data &&
+ (emm_int & MIO_EMM_INT_BUF_DONE)) {
+ unsigned int type = (rsp_sts >> 7) & 3;
+
+ if (type == 1)
+ do_read(host, req, rsp_sts & MIO_EMM_RSP_STS_DBUF);
+ else if (type == 2)
+ do_write(req);
+ }
+
+ host_done = emm_int & MIO_EMM_INT_CMD_DONE ||
+ emm_int & MIO_EMM_INT_DMA_DONE ||
+ emm_int & MIO_EMM_INT_CMD_ERR ||
+ emm_int & MIO_EMM_INT_DMA_ERR;
+
+ if (!(host_done && req->done))
+ goto no_req_done;
+
+ req->cmd->error = check_status(rsp_sts);
+
+ if (host->dma_active && req->data)
+ if (!finish_dma(host, req->data))
+ goto no_req_done;
+
+ set_cmd_response(host, req, rsp_sts);
+ if ((emm_int & MIO_EMM_INT_DMA_ERR) &&
+ (rsp_sts & MIO_EMM_RSP_STS_DMA_PEND))
+ cleanup_dma(host, rsp_sts);
+
+ host->current_req = NULL;
+ req->done(req);
+
+no_req_done:
+ if (host->dmar_fixup_done)
+ host->dmar_fixup_done(host);
+ if (host_done)
+ host->release_bus(host);
+out:
+ if (host->need_irq_handler_lock)
+ spin_unlock_irqrestore(&host->irq_handler_lock, flags);
+ else
+ __release(&host->irq_handler_lock);
+ return IRQ_RETVAL(emm_int != 0);
+}
+
+/*
+ * Program DMA_CFG and if needed DMA_ADR.
+ * Returns 0 on error, DMA address otherwise.
+ */
+static u64 prepare_dma_single(struct cvm_mmc_host *host, struct mmc_data *data)
+{
+ u64 dma_cfg, addr;
+ int count, rw;
+
+ count = dma_map_sg(host->dev, data->sg, data->sg_len,
+ get_dma_dir(data));
+ if (!count)
+ return 0;
+
+ rw = (data->flags & MMC_DATA_WRITE) ? 1 : 0;
+ dma_cfg = FIELD_PREP(MIO_EMM_DMA_CFG_EN, 1) |
+ FIELD_PREP(MIO_EMM_DMA_CFG_RW, rw);
+#ifdef __LITTLE_ENDIAN
+ dma_cfg |= FIELD_PREP(MIO_EMM_DMA_CFG_ENDIAN, 1);
+#endif
+ dma_cfg |= FIELD_PREP(MIO_EMM_DMA_CFG_SIZE,
+ (sg_dma_len(&data->sg[0]) / 8) - 1);
+
+ addr = sg_dma_address(&data->sg[0]);
+ if (!host->big_dma_addr)
+ dma_cfg |= FIELD_PREP(MIO_EMM_DMA_CFG_ADR, addr);
+ writeq(dma_cfg, host->dma_base + MIO_EMM_DMA_CFG(host));
+
+ pr_debug("[%s] sg_dma_len: %u total sg_elem: %d\n",
+ (rw) ? "W" : "R", sg_dma_len(&data->sg[0]), count);
+
+ if (host->big_dma_addr)
+ writeq(addr, host->dma_base + MIO_EMM_DMA_ADR(host));
+ return addr;
+}
+
+static u64 prepare_dma(struct cvm_mmc_host *host, struct mmc_data *data)
+{
+ return prepare_dma_single(host, data);
+}
+
+static u64 prepare_ext_dma(struct mmc_host *mmc, struct mmc_request *mrq)
+{
+ struct cvm_mmc_slot *slot = mmc_priv(mmc);
+ u64 emm_dma;
+
+ emm_dma = FIELD_PREP(MIO_EMM_DMA_VAL, 1) |
+ FIELD_PREP(MIO_EMM_DMA_SECTOR,
+ (mrq->data->blksz == 512) ? 1 : 0) |
+ FIELD_PREP(MIO_EMM_DMA_RW,
+ (mrq->data->flags & MMC_DATA_WRITE) ? 1 : 0) |
+ FIELD_PREP(MIO_EMM_DMA_BLOCK_CNT, mrq->data->blocks) |
+ FIELD_PREP(MIO_EMM_DMA_CARD_ADDR, mrq->cmd->arg);
+ set_bus_id(&emm_dma, slot->bus_id);
+
+ if (mmc_card_mmc(mmc->card) || (mmc_card_sd(mmc->card) &&
+ (mmc->card->scr.cmds & SD_SCR_CMD23_SUPPORT)))
+ emm_dma |= FIELD_PREP(MIO_EMM_DMA_MULTI, 1);
+
+ pr_debug("[%s] blocks: %u multi: %d\n",
+ (emm_dma & MIO_EMM_DMA_RW) ? "W" : "R",
+ mrq->data->blocks, (emm_dma & MIO_EMM_DMA_MULTI) ? 1 : 0);
+ return emm_dma;
+}
+
+static void cvm_mmc_dma_request(struct mmc_host *mmc,
+ struct mmc_request *mrq)
+{
+ struct cvm_mmc_slot *slot = mmc_priv(mmc);
+ struct cvm_mmc_host *host = slot->host;
+ struct mmc_data *data;
+ u64 emm_dma, addr;
+
+ if (!mrq->data || !mrq->data->sg || !mrq->data->sg_len ||
+ !mrq->stop || mrq->stop->opcode != MMC_STOP_TRANSMISSION) {
+ dev_err(&mmc->card->dev,
+			"Error: cvm_mmc_dma_request no data\n");
+ goto error;
+ }
+
+ cvm_mmc_switch_to(slot);
+
+ data = mrq->data;
+ pr_debug("DMA request blocks: %d block_size: %d total_size: %d\n",
+ data->blocks, data->blksz, data->blocks * data->blksz);
+ if (data->timeout_ns)
+ set_wdog(slot, data->timeout_ns);
+
+ WARN_ON(host->current_req);
+ host->current_req = mrq;
+
+ emm_dma = prepare_ext_dma(mmc, mrq);
+ addr = prepare_dma(host, data);
+ if (!addr) {
+ dev_err(host->dev, "prepare_dma failed\n");
+ goto error;
+ }
+
+ host->dma_active = true;
+ host->int_enable(host, MIO_EMM_INT_CMD_ERR | MIO_EMM_INT_DMA_DONE |
+ MIO_EMM_INT_DMA_ERR);
+
+ if (host->dmar_fixup)
+ host->dmar_fixup(host, mrq->cmd, data, addr);
+
+ /*
+ * If we have a valid SD card in the slot, we set the response
+ * bit mask to check for CRC errors and timeouts only.
+ * Otherwise, use the default power reset value.
+ */
+ if (mmc->card && mmc_card_sd(mmc->card))
+ writeq(0x00b00000ull, host->base + MIO_EMM_STS_MASK(host));
+ else
+ writeq(0xe4390080ull, host->base + MIO_EMM_STS_MASK(host));
+ writeq(emm_dma, host->base + MIO_EMM_DMA(host));
+ return;
+
+error:
+ mrq->cmd->error = -EINVAL;
+ if (mrq->done)
+ mrq->done(mrq);
+ host->release_bus(host);
+}
+
+static void do_read_request(struct cvm_mmc_host *host, struct mmc_request *mrq)
+{
+ sg_miter_start(&host->smi, mrq->data->sg, mrq->data->sg_len,
+ SG_MITER_ATOMIC | SG_MITER_TO_SG);
+}
+
+static void do_write_request(struct cvm_mmc_host *host, struct mmc_request *mrq)
+{
+ unsigned int data_len = mrq->data->blocks * mrq->data->blksz;
+ struct sg_mapping_iter *smi = &host->smi;
+ unsigned int bytes_xfered;
+ int shift = 56;
+ u64 dat = 0;
+
+ /* Copy data to the xmit buffer before issuing the command. */
+ sg_miter_start(smi, mrq->data->sg, mrq->data->sg_len, SG_MITER_FROM_SG);
+
+ /* Auto inc from offset zero, dbuf zero */
+ writeq(0x10000ull, host->base + MIO_EMM_BUF_IDX(host));
+
+ for (bytes_xfered = 0; bytes_xfered < data_len;) {
+ if (smi->consumed >= smi->length) {
+ if (!sg_miter_next(smi))
+ break;
+ smi->consumed = 0;
+ }
+
+ while (smi->consumed < smi->length && shift >= 0) {
+ dat |= ((u8 *)smi->addr)[smi->consumed] << shift;
+ bytes_xfered++;
+ smi->consumed++;
+ shift -= 8;
+ }
+
+ if (shift < 0) {
+ writeq(dat, host->base + MIO_EMM_BUF_DAT(host));
+ shift = 56;
+ dat = 0;
+ }
+ }
+ sg_miter_stop(smi);
+}
+
+static void cvm_mmc_request(struct mmc_host *mmc, struct mmc_request *mrq)
+{
+ struct cvm_mmc_slot *slot = mmc_priv(mmc);
+ struct cvm_mmc_host *host = slot->host;
+ struct mmc_command *cmd = mrq->cmd;
+ struct cvm_mmc_cr_mods mods;
+ u64 emm_cmd, rsp_sts;
+ int retries = 100;
+
+ /*
+ * Note about locking:
+ * All MMC devices share the same bus and controller. Allow only a
+ * single user of the bootbus/MMC bus at a time. The lock is acquired
+ * on all entry points from the MMC layer.
+ *
+ * For requests the lock is only released after the completion
+ * interrupt!
+ */
+ host->acquire_bus(host);
+
+ if (cmd->opcode == MMC_READ_MULTIPLE_BLOCK ||
+ cmd->opcode == MMC_WRITE_MULTIPLE_BLOCK)
+ return cvm_mmc_dma_request(mmc, mrq);
+
+ cvm_mmc_switch_to(slot);
+
+ mods = cvm_mmc_get_cr_mods(cmd);
+
+ WARN_ON(host->current_req);
+ host->current_req = mrq;
+
+ if (cmd->data) {
+ if (cmd->data->flags & MMC_DATA_READ)
+ do_read_request(host, mrq);
+ else
+ do_write_request(host, mrq);
+
+ if (cmd->data->timeout_ns)
+ set_wdog(slot, cmd->data->timeout_ns);
+ } else
+ set_wdog(slot, 0);
+
+ host->dma_active = false;
+ host->int_enable(host, MIO_EMM_INT_CMD_DONE | MIO_EMM_INT_CMD_ERR);
+
+ emm_cmd = FIELD_PREP(MIO_EMM_CMD_VAL, 1) |
+ FIELD_PREP(MIO_EMM_CMD_CTYPE_XOR, mods.ctype_xor) |
+ FIELD_PREP(MIO_EMM_CMD_RTYPE_XOR, mods.rtype_xor) |
+ FIELD_PREP(MIO_EMM_CMD_IDX, cmd->opcode) |
+ FIELD_PREP(MIO_EMM_CMD_ARG, cmd->arg);
+ set_bus_id(&emm_cmd, slot->bus_id);
+ if (mmc_cmd_type(cmd) == MMC_CMD_ADTC)
+ emm_cmd |= FIELD_PREP(MIO_EMM_CMD_OFFSET,
+ 64 - ((cmd->data->blocks * cmd->data->blksz) / 8));
+
+ writeq(0, host->base + MIO_EMM_STS_MASK(host));
+
+retry:
+ rsp_sts = readq(host->base + MIO_EMM_RSP_STS(host));
+ if (rsp_sts & MIO_EMM_RSP_STS_DMA_VAL ||
+ rsp_sts & MIO_EMM_RSP_STS_CMD_VAL ||
+ rsp_sts & MIO_EMM_RSP_STS_SWITCH_VAL ||
+ rsp_sts & MIO_EMM_RSP_STS_DMA_PEND) {
+ udelay(10);
+ if (--retries)
+ goto retry;
+ }
+ if (!retries)
+ dev_err(host->dev, "Bad status: %llx before command write\n", rsp_sts);
+ writeq(emm_cmd, host->base + MIO_EMM_CMD(host));
+}
+
+static void cvm_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+{
+ struct cvm_mmc_slot *slot = mmc_priv(mmc);
+ struct cvm_mmc_host *host = slot->host;
+ int clk_period = 0, power_class = 10, bus_width = 0;
+ u64 clock, emm_switch;
+
+ host->acquire_bus(host);
+ cvm_mmc_switch_to(slot);
+
+ /* Set the power state */
+ switch (ios->power_mode) {
+ case MMC_POWER_ON:
+ break;
+
+ case MMC_POWER_OFF:
+ cvm_mmc_reset_bus(slot);
+ if (host->global_pwr_gpiod)
+ host->set_shared_power(host, 0);
+ else
+ mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, 0);
+ break;
+
+ case MMC_POWER_UP:
+ if (host->global_pwr_gpiod)
+ host->set_shared_power(host, 1);
+ else
+ mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, ios->vdd);
+ break;
+ }
+
+ /* Convert bus width to HW definition */
+ switch (ios->bus_width) {
+ case MMC_BUS_WIDTH_8:
+ bus_width = 2;
+ break;
+ case MMC_BUS_WIDTH_4:
+ bus_width = 1;
+ break;
+ case MMC_BUS_WIDTH_1:
+ bus_width = 0;
+ break;
+ }
+
+ /* Change the clock frequency. */
+ clock = ios->clock;
+ if (clock > 52000000)
+ clock = 52000000;
+ slot->clock = clock;
+
+ if (clock)
+ clk_period = (host->sys_freq + clock - 1) / (2 * clock);
+
+ emm_switch = FIELD_PREP(MIO_EMM_SWITCH_HS_TIMING,
+ (ios->timing == MMC_TIMING_MMC_HS)) |
+ FIELD_PREP(MIO_EMM_SWITCH_BUS_WIDTH, bus_width) |
+ FIELD_PREP(MIO_EMM_SWITCH_POWER_CLASS, power_class) |
+ FIELD_PREP(MIO_EMM_SWITCH_CLK_HI, clk_period) |
+ FIELD_PREP(MIO_EMM_SWITCH_CLK_LO, clk_period);
+ set_bus_id(&emm_switch, slot->bus_id);
+
+ if (!switch_val_changed(slot, emm_switch))
+ goto out;
+
+ set_wdog(slot, 0);
+ do_switch(host, emm_switch);
+ slot->cached_switch = emm_switch;
+out:
+ host->release_bus(host);
+}
+
+static const struct mmc_host_ops cvm_mmc_ops = {
+ .request = cvm_mmc_request,
+ .set_ios = cvm_mmc_set_ios,
+ .get_ro = mmc_gpio_get_ro,
+ .get_cd = mmc_gpio_get_cd,
+};
+
+static void cvm_mmc_set_clock(struct cvm_mmc_slot *slot, unsigned int clock)
+{
+ struct mmc_host *mmc = slot->mmc;
+
+ clock = min(clock, mmc->f_max);
+ clock = max(clock, mmc->f_min);
+ slot->clock = clock;
+}
+
+static int cvm_mmc_init_lowlevel(struct cvm_mmc_slot *slot)
+{
+ struct cvm_mmc_host *host = slot->host;
+ u64 emm_switch;
+
+ /* Enable this bus slot. */
+ host->emm_cfg |= (1ull << slot->bus_id);
+ writeq(host->emm_cfg, slot->host->base + MIO_EMM_CFG(host));
+ udelay(10);
+
+ /* Program initial clock speed and power. */
+ cvm_mmc_set_clock(slot, slot->mmc->f_min);
+ emm_switch = FIELD_PREP(MIO_EMM_SWITCH_POWER_CLASS, 10);
+ emm_switch |= FIELD_PREP(MIO_EMM_SWITCH_CLK_HI,
+ (host->sys_freq / slot->clock) / 2);
+ emm_switch |= FIELD_PREP(MIO_EMM_SWITCH_CLK_LO,
+ (host->sys_freq / slot->clock) / 2);
+
+ /* Make the changes take effect on this bus slot. */
+ set_bus_id(&emm_switch, slot->bus_id);
+ do_switch(host, emm_switch);
+
+ slot->cached_switch = emm_switch;
+
+ /*
+ * Set watchdog timeout value and default reset value
+ * for the mask register. Finally, set the CARD_RCA
+ * bit so that we can get the card address relative
+ * to the CMD register for CMD7 transactions.
+ */
+ set_wdog(slot, 0);
+ writeq(0xe4390080ull, host->base + MIO_EMM_STS_MASK(host));
+ writeq(1, host->base + MIO_EMM_RCA(host));
+ return 0;
+}
+
+static int cvm_mmc_of_parse(struct device *dev, struct cvm_mmc_slot *slot)
+{
+ u32 id, cmd_skew = 0, dat_skew = 0, bus_width = 0;
+ struct device_node *node = dev->of_node;
+ struct mmc_host *mmc = slot->mmc;
+ u64 clock_period;
+ int ret;
+
+ ret = of_property_read_u32(node, "reg", &id);
+ if (ret) {
+ dev_err(dev, "Missing or invalid reg property on %s\n",
+ of_node_full_name(node));
+ return ret;
+ }
+
+ if (id >= CAVIUM_MAX_MMC || slot->host->slot[id]) {
+ dev_err(dev, "Invalid reg property on %s\n",
+ of_node_full_name(node));
+ return -EINVAL;
+ }
+
+ mmc->supply.vmmc = devm_regulator_get_optional(dev, "vmmc");
+ if (IS_ERR(mmc->supply.vmmc)) {
+ if (PTR_ERR(mmc->supply.vmmc) == -EPROBE_DEFER)
+ return -EPROBE_DEFER;
+ /*
+ * Legacy Octeon firmware has no regulator entry, fall-back to
+ * a hard-coded voltage to get a sane OCR.
+ */
+ mmc->ocr_avail = MMC_VDD_32_33 | MMC_VDD_33_34;
+ } else {
+ ret = mmc_regulator_get_ocrmask(mmc->supply.vmmc);
+ if (ret > 0)
+ mmc->ocr_avail = ret;
+ }
+
+ /* Common MMC bindings */
+ ret = mmc_of_parse(mmc);
+ if (ret)
+ return ret;
+
+ /* Set bus width */
+ if (!(mmc->caps & (MMC_CAP_8_BIT_DATA | MMC_CAP_4_BIT_DATA))) {
+ of_property_read_u32(node, "cavium,bus-max-width", &bus_width);
+ if (bus_width == 8)
+ mmc->caps |= MMC_CAP_8_BIT_DATA | MMC_CAP_4_BIT_DATA;
+ else if (bus_width == 4)
+ mmc->caps |= MMC_CAP_4_BIT_DATA;
+ }
+
+ /* Set maximum and minimum frequency */
+ if (!mmc->f_max)
+ of_property_read_u32(node, "spi-max-frequency", &mmc->f_max);
+ if (!mmc->f_max || mmc->f_max > 52000000)
+ mmc->f_max = 52000000;
+ mmc->f_min = 400000;
+
+ /* Sampling register settings, period in picoseconds */
+ clock_period = 1000000000000ull / slot->host->sys_freq;
+ of_property_read_u32(node, "cavium,cmd-clk-skew", &cmd_skew);
+ of_property_read_u32(node, "cavium,dat-clk-skew", &dat_skew);
+ slot->cmd_cnt = (cmd_skew + clock_period / 2) / clock_period;
+ slot->dat_cnt = (dat_skew + clock_period / 2) / clock_period;
+
+ return id;
+}
+
+int cvm_mmc_of_slot_probe(struct device *dev, struct cvm_mmc_host *host)
+{
+ struct cvm_mmc_slot *slot;
+ struct mmc_host *mmc;
+ int ret, id;
+
+ mmc = mmc_alloc_host(sizeof(struct cvm_mmc_slot), dev);
+ if (!mmc)
+ return -ENOMEM;
+
+ slot = mmc_priv(mmc);
+ slot->mmc = mmc;
+ slot->host = host;
+
+ ret = cvm_mmc_of_parse(dev, slot);
+ if (ret < 0)
+ goto error;
+ id = ret;
+
+ /* Set up host parameters */
+ mmc->ops = &cvm_mmc_ops;
+
+ mmc->caps |= MMC_CAP_MMC_HIGHSPEED | MMC_CAP_SD_HIGHSPEED |
+ MMC_CAP_ERASE | MMC_CAP_CMD23 | MMC_CAP_POWER_OFF_CARD;
+
+ mmc->max_segs = 1;
+
+ /* DMA size field can address up to 8 MB */
+ mmc->max_seg_size = 8 * 1024 * 1024;
+ mmc->max_req_size = mmc->max_seg_size;
+ /* External DMA is in 512 byte blocks */
+ mmc->max_blk_size = 512;
+ /* DMA block count field is 15 bits */
+ mmc->max_blk_count = 32767;
+
+ slot->clock = mmc->f_min;
+ slot->bus_id = id;
+ slot->cached_rca = 1;
+
+ host->acquire_bus(host);
+ host->slot[id] = slot;
+ cvm_mmc_switch_to(slot);
+ cvm_mmc_init_lowlevel(slot);
+ host->release_bus(host);
+
+ ret = mmc_add_host(mmc);
+ if (ret) {
+ dev_err(dev, "mmc_add_host() returned %d\n", ret);
+ slot->host->slot[id] = NULL;
+ goto error;
+ }
+ return 0;
+
+error:
+ mmc_free_host(slot->mmc);
+ return ret;
+}
+
+int cvm_mmc_of_slot_remove(struct cvm_mmc_slot *slot)
+{
+ mmc_remove_host(slot->mmc);
+ slot->host->slot[slot->bus_id] = NULL;
+ mmc_free_host(slot->mmc);
+ return 0;
+}
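Stepping outside the patch for a moment: the ctype/rtype XOR override implemented by cvm_mmc_get_cr_mods() above can be illustrated with a stripped-down userspace sketch (the toy_* names are invented, and the two table entries shown are just examples taken from the full table):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace sketch of the command/response type override: the
 * hardware assumes a fixed (ctype, rtype) per opcode, and the
 * driver XORs in a correction when the MMC core asks for
 * something else. Only CMD17/CMD24 are filled in here. */
struct toy_cr { uint8_t ctype, rtype; };

static const struct toy_cr toy_cr_types[64] = {
	[17] = {1, 1},	/* CMD17: read data, R1 response */
	[24] = {2, 1},	/* CMD24: write data, R1 response */
};

/* desired_* describe the request the MMC core actually built */
static void toy_get_mods(uint8_t opcode, uint8_t desired_ctype,
			 uint8_t desired_rtype,
			 uint8_t *ctype_xor, uint8_t *rtype_xor)
{
	const struct toy_cr *cr = &toy_cr_types[opcode & 0x3f];

	/* XOR of expected vs. desired yields the per-command fixup */
	*ctype_xor = desired_ctype ^ cr->ctype;
	*rtype_xor = desired_rtype ^ cr->rtype;
}
```

When the request matches the hardware's assumption the XOR is zero; any mismatch produces the correction bits that the driver writes into the command register.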
diff --git a/drivers/mmc/host/cavium.h b/drivers/mmc/host/cavium.h
new file mode 100644
index 0000000..f5d2b61
--- /dev/null
+++ b/drivers/mmc/host/cavium.h
@@ -0,0 +1,192 @@
+/*
+ * Driver for MMC and SSD cards for Cavium OCTEON and ThunderX SOCs.
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2012-2017 Cavium Inc.
+ */
+
+#ifndef _CAVIUM_MMC_H_
+#define _CAVIUM_MMC_H_
+
+#include <linux/bitops.h>
+#include <linux/clk.h>
+#include <linux/gpio/consumer.h>
+#include <linux/io.h>
+#include <linux/mmc/host.h>
+#include <linux/of.h>
+#include <linux/scatterlist.h>
+#include <linux/semaphore.h>
+
+#define CAVIUM_MAX_MMC 4
+
+/* DMA register addresses */
+#define MIO_EMM_DMA_CFG(x) (0x00 + x->reg_off_dma)
+
+/* register addresses */
+#define MIO_EMM_CFG(x) (0x00 + x->reg_off)
+#define MIO_EMM_SWITCH(x) (0x48 + x->reg_off)
+#define MIO_EMM_DMA(x) (0x50 + x->reg_off)
+#define MIO_EMM_CMD(x) (0x58 + x->reg_off)
+#define MIO_EMM_RSP_STS(x) (0x60 + x->reg_off)
+#define MIO_EMM_RSP_LO(x) (0x68 + x->reg_off)
+#define MIO_EMM_RSP_HI(x) (0x70 + x->reg_off)
+#define MIO_EMM_INT(x) (0x78 + x->reg_off)
+#define MIO_EMM_INT_EN(x) (0x80 + x->reg_off)
+#define MIO_EMM_WDOG(x) (0x88 + x->reg_off)
+#define MIO_EMM_SAMPLE(x) (0x90 + x->reg_off)
+#define MIO_EMM_STS_MASK(x) (0x98 + x->reg_off)
+#define MIO_EMM_RCA(x) (0xa0 + x->reg_off)
+#define MIO_EMM_BUF_IDX(x) (0xe0 + x->reg_off)
+#define MIO_EMM_BUF_DAT(x) (0xe8 + x->reg_off)
+
+struct cvm_mmc_host {
+ struct device *dev;
+ void __iomem *base;
+ void __iomem *dma_base;
+ int reg_off;
+ int reg_off_dma;
+ u64 emm_cfg;
+ u64 n_minus_one; /* OCTEON II workaround location */
+ int last_slot;
+ struct clk *clk;
+ int sys_freq;
+
+ struct mmc_request *current_req;
+ struct sg_mapping_iter smi;
+ bool dma_active;
+
+ bool has_ciu3;
+ bool big_dma_addr;
+ bool need_irq_handler_lock;
+ spinlock_t irq_handler_lock;
+ struct semaphore mmc_serializer;
+
+ struct gpio_desc *global_pwr_gpiod;
+ atomic_t shared_power_users;
+
+ struct cvm_mmc_slot *slot[CAVIUM_MAX_MMC];
+ struct platform_device *slot_pdev[CAVIUM_MAX_MMC];
+
+ void (*set_shared_power)(struct cvm_mmc_host *, int);
+ void (*acquire_bus)(struct cvm_mmc_host *);
+ void (*release_bus)(struct cvm_mmc_host *);
+ void (*int_enable)(struct cvm_mmc_host *, u64);
+ /* required on some MIPS models */
+ void (*dmar_fixup)(struct cvm_mmc_host *, struct mmc_command *,
+ struct mmc_data *, u64);
+ void (*dmar_fixup_done)(struct cvm_mmc_host *);
+};
+
+struct cvm_mmc_slot {
+ struct mmc_host *mmc; /* slot-level mmc_core object */
+ struct cvm_mmc_host *host; /* common hw for all slots */
+
+ u64 clock;
+
+ u64 cached_switch;
+ u64 cached_rca;
+
+ unsigned int cmd_cnt; /* sample delay */
+ unsigned int dat_cnt; /* sample delay */
+
+ int bus_id;
+};
+
+struct cvm_mmc_cr_type {
+ u8 ctype;
+ u8 rtype;
+};
+
+struct cvm_mmc_cr_mods {
+ u8 ctype_xor;
+ u8 rtype_xor;
+};
+
+/* Bitfield definitions */
+#define MIO_EMM_CMD_SKIP_BUSY BIT_ULL(62)
+#define MIO_EMM_CMD_BUS_ID GENMASK_ULL(61, 60)
+#define MIO_EMM_CMD_VAL BIT_ULL(59)
+#define MIO_EMM_CMD_DBUF BIT_ULL(55)
+#define MIO_EMM_CMD_OFFSET GENMASK_ULL(54, 49)
+#define MIO_EMM_CMD_CTYPE_XOR GENMASK_ULL(42, 41)
+#define MIO_EMM_CMD_RTYPE_XOR GENMASK_ULL(40, 38)
+#define MIO_EMM_CMD_IDX GENMASK_ULL(37, 32)
+#define MIO_EMM_CMD_ARG GENMASK_ULL(31, 0)
+
+#define MIO_EMM_DMA_SKIP_BUSY BIT_ULL(62)
+#define MIO_EMM_DMA_BUS_ID GENMASK_ULL(61, 60)
+#define MIO_EMM_DMA_VAL BIT_ULL(59)
+#define MIO_EMM_DMA_SECTOR BIT_ULL(58)
+#define MIO_EMM_DMA_DAT_NULL BIT_ULL(57)
+#define MIO_EMM_DMA_THRES GENMASK_ULL(56, 51)
+#define MIO_EMM_DMA_REL_WR BIT_ULL(50)
+#define MIO_EMM_DMA_RW BIT_ULL(49)
+#define MIO_EMM_DMA_MULTI BIT_ULL(48)
+#define MIO_EMM_DMA_BLOCK_CNT GENMASK_ULL(47, 32)
+#define MIO_EMM_DMA_CARD_ADDR GENMASK_ULL(31, 0)
+
+#define MIO_EMM_DMA_CFG_EN BIT_ULL(63)
+#define MIO_EMM_DMA_CFG_RW BIT_ULL(62)
+#define MIO_EMM_DMA_CFG_CLR BIT_ULL(61)
+#define MIO_EMM_DMA_CFG_SWAP32 BIT_ULL(59)
+#define MIO_EMM_DMA_CFG_SWAP16 BIT_ULL(58)
+#define MIO_EMM_DMA_CFG_SWAP8 BIT_ULL(57)
+#define MIO_EMM_DMA_CFG_ENDIAN BIT_ULL(56)
+#define MIO_EMM_DMA_CFG_SIZE GENMASK_ULL(55, 36)
+#define MIO_EMM_DMA_CFG_ADR GENMASK_ULL(35, 0)
+
+#define MIO_EMM_INT_SWITCH_ERR BIT_ULL(6)
+#define MIO_EMM_INT_SWITCH_DONE BIT_ULL(5)
+#define MIO_EMM_INT_DMA_ERR BIT_ULL(4)
+#define MIO_EMM_INT_CMD_ERR BIT_ULL(3)
+#define MIO_EMM_INT_DMA_DONE BIT_ULL(2)
+#define MIO_EMM_INT_CMD_DONE BIT_ULL(1)
+#define MIO_EMM_INT_BUF_DONE BIT_ULL(0)
+
+#define MIO_EMM_RSP_STS_BUS_ID GENMASK_ULL(61, 60)
+#define MIO_EMM_RSP_STS_CMD_VAL BIT_ULL(59)
+#define MIO_EMM_RSP_STS_SWITCH_VAL BIT_ULL(58)
+#define MIO_EMM_RSP_STS_DMA_VAL BIT_ULL(57)
+#define MIO_EMM_RSP_STS_DMA_PEND BIT_ULL(56)
+#define MIO_EMM_RSP_STS_DBUF_ERR BIT_ULL(28)
+#define MIO_EMM_RSP_STS_DBUF BIT_ULL(23)
+#define MIO_EMM_RSP_STS_BLK_TIMEOUT BIT_ULL(22)
+#define MIO_EMM_RSP_STS_BLK_CRC_ERR BIT_ULL(21)
+#define MIO_EMM_RSP_STS_RSP_BUSYBIT BIT_ULL(20)
+#define MIO_EMM_RSP_STS_STP_TIMEOUT BIT_ULL(19)
+#define MIO_EMM_RSP_STS_STP_CRC_ERR BIT_ULL(18)
+#define MIO_EMM_RSP_STS_STP_BAD_STS BIT_ULL(17)
+#define MIO_EMM_RSP_STS_STP_VAL BIT_ULL(16)
+#define MIO_EMM_RSP_STS_RSP_TIMEOUT BIT_ULL(15)
+#define MIO_EMM_RSP_STS_RSP_CRC_ERR BIT_ULL(14)
+#define MIO_EMM_RSP_STS_RSP_BAD_STS BIT_ULL(13)
+#define MIO_EMM_RSP_STS_RSP_VAL BIT_ULL(12)
+#define MIO_EMM_RSP_STS_RSP_TYPE GENMASK_ULL(11, 9)
+#define MIO_EMM_RSP_STS_CMD_TYPE GENMASK_ULL(8, 7)
+#define MIO_EMM_RSP_STS_CMD_IDX GENMASK_ULL(6, 1)
+#define MIO_EMM_RSP_STS_CMD_DONE BIT_ULL(0)
+
+#define MIO_EMM_SAMPLE_CMD_CNT GENMASK_ULL(25, 16)
+#define MIO_EMM_SAMPLE_DAT_CNT GENMASK_ULL(9, 0)
+
+#define MIO_EMM_SWITCH_BUS_ID GENMASK_ULL(61, 60)
+#define MIO_EMM_SWITCH_EXE BIT_ULL(59)
+#define MIO_EMM_SWITCH_ERR0 BIT_ULL(58)
+#define MIO_EMM_SWITCH_ERR1 BIT_ULL(57)
+#define MIO_EMM_SWITCH_ERR2 BIT_ULL(56)
+#define MIO_EMM_SWITCH_HS_TIMING BIT_ULL(48)
+#define MIO_EMM_SWITCH_BUS_WIDTH GENMASK_ULL(42, 40)
+#define MIO_EMM_SWITCH_POWER_CLASS GENMASK_ULL(35, 32)
+#define MIO_EMM_SWITCH_CLK_HI GENMASK_ULL(31, 16)
+#define MIO_EMM_SWITCH_CLK_LO GENMASK_ULL(15, 0)
+
+/* Prototypes */
+irqreturn_t cvm_mmc_interrupt(int irq, void *dev_id);
+int cvm_mmc_of_slot_probe(struct device *dev, struct cvm_mmc_host *host);
+int cvm_mmc_of_slot_remove(struct cvm_mmc_slot *slot);
+extern const char *cvm_mmc_irq_names[];
+
+#endif
--
2.9.0.rc0.21.g7777322
Add support for switching to DDR mode for eMMC devices.
Signed-off-by: Jan Glauber <[email protected]>
---
drivers/mmc/host/cavium.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/drivers/mmc/host/cavium.c b/drivers/mmc/host/cavium.c
index eebb387..d842b69 100644
--- a/drivers/mmc/host/cavium.c
+++ b/drivers/mmc/host/cavium.c
@@ -864,6 +864,10 @@ static void cvm_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
break;
}
+ /* DDR is available only for 4- and 8-bit bus widths */
+ if (ios->bus_width && ios->timing == MMC_TIMING_MMC_DDR52)
+ bus_width |= 4;
+
/* Change the clock frequency. */
clock = ios->clock;
if (clock > 52000000)
@@ -1032,8 +1036,14 @@ int cvm_mmc_of_slot_probe(struct device *dev, struct cvm_mmc_host *host)
/* Set up host parameters */
mmc->ops = &cvm_mmc_ops;
+ /*
+ * With only a 3.3V supply we cannot support any of the
+ * UHS modes. We do support the high-speed DDR modes up
+ * to 52MHz.
+ */
mmc->caps |= MMC_CAP_MMC_HIGHSPEED | MMC_CAP_SD_HIGHSPEED |
- MMC_CAP_ERASE | MMC_CAP_CMD23 | MMC_CAP_POWER_OFF_CARD;
+ MMC_CAP_ERASE | MMC_CAP_CMD23 | MMC_CAP_POWER_OFF_CARD |
+ MMC_CAP_3_3V_DDR;
if (host->use_sg)
mmc->max_segs = 16;
--
2.9.0.rc0.21.g7777322
Add Support for the scatter-gather DMA available in the
ThunderX MMC units. Up to 16 DMA requests can be processed
together.
Signed-off-by: Jan Glauber <[email protected]>
---
drivers/mmc/host/cavium-thunderx.c | 5 +-
drivers/mmc/host/cavium.c | 104 +++++++++++++++++++++++++++++++++++--
drivers/mmc/host/cavium.h | 28 +++++++---
3 files changed, 127 insertions(+), 10 deletions(-)
diff --git a/drivers/mmc/host/cavium-thunderx.c b/drivers/mmc/host/cavium-thunderx.c
index cba108b..65244e8 100644
--- a/drivers/mmc/host/cavium-thunderx.c
+++ b/drivers/mmc/host/cavium-thunderx.c
@@ -82,7 +82,7 @@ static int thunder_mmc_probe(struct pci_dev *pdev,
host->dma_base = host->base;
host->reg_off = 0x2000;
- host->reg_off_dma = 0x180;
+ host->reg_off_dma = 0x160;
host->clk = devm_clk_get(dev, NULL);
if (IS_ERR(host->clk))
@@ -101,6 +101,7 @@ static int thunder_mmc_probe(struct pci_dev *pdev,
host->release_bus = thunder_mmc_release_bus;
host->int_enable = thunder_mmc_int_enable;
+ host->use_sg = true;
host->big_dma_addr = true;
host->need_irq_handler_lock = true;
host->last_slot = -1;
@@ -115,6 +116,8 @@ static int thunder_mmc_probe(struct pci_dev *pdev,
*/
writeq(127, host->base + MIO_EMM_INT_EN(host));
writeq(3, host->base + MIO_EMM_DMA_INT_ENA_W1C(host));
+ /* Clear DMA FIFO */
+ writeq(BIT_ULL(16), host->base + MIO_EMM_DMA_FIFO_CFG(host));
ret = thunder_mmc_register_interrupts(host, pdev);
if (ret)
diff --git a/drivers/mmc/host/cavium.c b/drivers/mmc/host/cavium.c
index 910e290..eebb387 100644
--- a/drivers/mmc/host/cavium.c
+++ b/drivers/mmc/host/cavium.c
@@ -377,9 +377,32 @@ static int finish_dma_single(struct cvm_mmc_host *host, struct mmc_data *data)
return 1;
}
+static int finish_dma_sg(struct cvm_mmc_host *host, struct mmc_data *data)
+{
+ u64 fifo_cfg;
+ int count;
+
+ /* Check if there are any pending requests left */
+ fifo_cfg = readq(host->dma_base + MIO_EMM_DMA_FIFO_CFG(host));
+ count = FIELD_GET(MIO_EMM_DMA_FIFO_CFG_COUNT, fifo_cfg);
+ if (count)
+ dev_err(host->dev, "%u requests still pending\n", count);
+
+ data->bytes_xfered = data->blocks * data->blksz;
+ data->error = 0;
+
+ /* Clear and disable FIFO */
+ writeq(BIT_ULL(16), host->dma_base + MIO_EMM_DMA_FIFO_CFG(host));
+ dma_unmap_sg(host->dev, data->sg, data->sg_len, get_dma_dir(data));
+ return 1;
+}
+
static int finish_dma(struct cvm_mmc_host *host, struct mmc_data *data)
{
- return finish_dma_single(host, data);
+ if (host->use_sg && data->sg_len > 1)
+ return finish_dma_sg(host, data);
+ else
+ return finish_dma_single(host, data);
}
static int check_status(u64 rsp_sts)
@@ -522,9 +545,81 @@ static u64 prepare_dma_single(struct cvm_mmc_host *host, struct mmc_data *data)
return addr;
}
+/*
+ * Queue the complete scatter-gather list into the FIFO.
+ * Returns 0 on error, 1 otherwise.
+ */
+static u64 prepare_dma_sg(struct cvm_mmc_host *host, struct mmc_data *data)
+{
+ struct scatterlist *sg;
+ u64 fifo_cmd, addr;
+ int count, i, rw;
+
+ count = dma_map_sg(host->dev, data->sg, data->sg_len,
+ get_dma_dir(data));
+ if (!count)
+ return 0;
+ if (count > 16)
+ goto error;
+
+ /* Enable the FIFO by clearing the CLR bit */
+ writeq(0, host->dma_base + MIO_EMM_DMA_FIFO_CFG(host));
+
+ for_each_sg(data->sg, sg, count, i) {
+ /* Program DMA address */
+ addr = sg_dma_address(sg);
+ if (addr & 7)
+ goto error;
+ writeq(addr, host->dma_base + MIO_EMM_DMA_FIFO_ADR(host));
+
+ /*
+ * If we have scatter-gather support we also have an extra
+ * register for the DMA addr, so no need to check
+ * host->big_dma_addr here.
+ */
+ rw = (data->flags & MMC_DATA_WRITE) ? 1 : 0;
+ fifo_cmd = FIELD_PREP(MIO_EMM_DMA_FIFO_CMD_RW, rw);
+
+ /* enable interrupts on the last element */
+ fifo_cmd |= FIELD_PREP(MIO_EMM_DMA_FIFO_CMD_INTDIS,
+ (i + 1 == count) ? 0 : 1);
+
+#ifdef __LITTLE_ENDIAN
+ fifo_cmd |= FIELD_PREP(MIO_EMM_DMA_FIFO_CMD_ENDIAN, 1);
+#endif
+ fifo_cmd |= FIELD_PREP(MIO_EMM_DMA_FIFO_CMD_SIZE,
+ sg_dma_len(sg) / 8 - 1);
+ /*
+ * The write copies the address and the command to the FIFO
+ * and increments the FIFO's COUNT field.
+ */
+ writeq(fifo_cmd, host->dma_base + MIO_EMM_DMA_FIFO_CMD(host));
+ pr_debug("[%s] sg_dma_len: %u sg_elem: %d/%d\n",
+ (rw) ? "W" : "R", sg_dma_len(sg), i, count);
+ }
+
+ /*
+ * Unlike prepare_dma_single, we don't return the address here,
+ * as it would not make sense for scatter-gather. The DMA fixup
+ * is only required on models that don't support scatter-gather,
+ * so this is not a problem.
+ */
+ return 1;
+
+error:
+ WARN_ON_ONCE(1);
+ dma_unmap_sg(host->dev, data->sg, data->sg_len, get_dma_dir(data));
+ /* Disable FIFO */
+ writeq(BIT_ULL(16), host->dma_base + MIO_EMM_DMA_FIFO_CFG(host));
+ return 0;
+}
+
static u64 prepare_dma(struct cvm_mmc_host *host, struct mmc_data *data)
{
- return prepare_dma_single(host, data);
+ if (host->use_sg && data->sg_len > 1)
+ return prepare_dma_sg(host, data);
+ else
+ return prepare_dma_single(host, data);
}
static u64 prepare_ext_dma(struct mmc_host *mmc, struct mmc_request *mrq)
@@ -940,7 +1035,10 @@ int cvm_mmc_of_slot_probe(struct device *dev, struct cvm_mmc_host *host)
mmc->caps |= MMC_CAP_MMC_HIGHSPEED | MMC_CAP_SD_HIGHSPEED |
MMC_CAP_ERASE | MMC_CAP_CMD23 | MMC_CAP_POWER_OFF_CARD;
- mmc->max_segs = 1;
+ if (host->use_sg)
+ mmc->max_segs = 16;
+ else
+ mmc->max_segs = 1;
/* DMA size field can address up to 8 MB */
mmc->max_seg_size = 8 * 1024 * 1024;
diff --git a/drivers/mmc/host/cavium.h b/drivers/mmc/host/cavium.h
index 66eec21..f3eea5e 100644
--- a/drivers/mmc/host/cavium.h
+++ b/drivers/mmc/host/cavium.h
@@ -23,12 +23,15 @@
#define CAVIUM_MAX_MMC 4
/* DMA register addresses */
-#define MIO_EMM_DMA_CFG(x) (0x00 + x->reg_off_dma)
-#define MIO_EMM_DMA_ADR(x) (0x08 + x->reg_off_dma)
-#define MIO_EMM_DMA_INT(x) (0x10 + x->reg_off_dma)
-#define MIO_EMM_DMA_INT_W1S(x) (0x18 + x->reg_off_dma)
-#define MIO_EMM_DMA_INT_ENA_W1S(x) (0x20 + x->reg_off_dma)
-#define MIO_EMM_DMA_INT_ENA_W1C(x) (0x28 + x->reg_off_dma)
+#define MIO_EMM_DMA_FIFO_CFG(x) (0x00 + x->reg_off_dma)
+#define MIO_EMM_DMA_FIFO_ADR(x) (0x10 + x->reg_off_dma)
+#define MIO_EMM_DMA_FIFO_CMD(x) (0x18 + x->reg_off_dma)
+#define MIO_EMM_DMA_CFG(x) (0x20 + x->reg_off_dma)
+#define MIO_EMM_DMA_ADR(x) (0x28 + x->reg_off_dma)
+#define MIO_EMM_DMA_INT(x) (0x30 + x->reg_off_dma)
+#define MIO_EMM_DMA_INT_W1S(x) (0x38 + x->reg_off_dma)
+#define MIO_EMM_DMA_INT_ENA_W1S(x) (0x40 + x->reg_off_dma)
+#define MIO_EMM_DMA_INT_ENA_W1C(x) (0x48 + x->reg_off_dma)
/* register addresses */
#define MIO_EMM_CFG(x) (0x00 + x->reg_off)
@@ -64,6 +67,7 @@ struct cvm_mmc_host {
struct mmc_request *current_req;
struct sg_mapping_iter smi;
bool dma_active;
+ bool use_sg;
bool has_ciu3;
bool big_dma_addr;
@@ -113,6 +117,18 @@ struct cvm_mmc_cr_mods {
};
/* Bitfield definitions */
+#define MIO_EMM_DMA_FIFO_CFG_CLR BIT_ULL(16)
+#define MIO_EMM_DMA_FIFO_CFG_INT_LVL GENMASK_ULL(12, 8)
+#define MIO_EMM_DMA_FIFO_CFG_COUNT GENMASK_ULL(4, 0)
+
+#define MIO_EMM_DMA_FIFO_CMD_RW BIT_ULL(62)
+#define MIO_EMM_DMA_FIFO_CMD_INTDIS BIT_ULL(60)
+#define MIO_EMM_DMA_FIFO_CMD_SWAP32 BIT_ULL(59)
+#define MIO_EMM_DMA_FIFO_CMD_SWAP16 BIT_ULL(58)
+#define MIO_EMM_DMA_FIFO_CMD_SWAP8 BIT_ULL(57)
+#define MIO_EMM_DMA_FIFO_CMD_ENDIAN BIT_ULL(56)
+#define MIO_EMM_DMA_FIFO_CMD_SIZE GENMASK_ULL(55, 36)
+
#define MIO_EMM_CMD_SKIP_BUSY BIT_ULL(62)
#define MIO_EMM_CMD_BUS_ID GENMASK_ULL(61, 60)
#define MIO_EMM_CMD_VAL BIT_ULL(59)
--
2.9.0.rc0.21.g7777322
Signed-off-by: Jan Glauber <[email protected]>
Signed-off-by: David Daney <[email protected]>
Signed-off-by: Steven J. Hill <[email protected]>
---
MAINTAINERS | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index c776906..25c3009 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3041,6 +3041,14 @@ S: Supported
F: drivers/i2c/busses/i2c-octeon*
F: drivers/i2c/busses/i2c-thunderx*
+CAVIUM MMC DRIVER
+M: Jan Glauber <[email protected]>
+M: David Daney <[email protected]>
+M: Steven J. Hill <[email protected]>
+W: http://www.cavium.com
+S: Supported
+F: drivers/mmc/host/cavium*
+
CAVIUM LIQUIDIO NETWORK DRIVER
M: Derek Chickles <[email protected]>
M: Satanand Burla <[email protected]>
--
2.9.0.rc0.21.g7777322
Add a platform driver for ThunderX ARM SOCs.
Signed-off-by: Jan Glauber <[email protected]>
---
drivers/mmc/host/Kconfig | 10 ++
drivers/mmc/host/Makefile | 2 +
drivers/mmc/host/cavium-thunderx.c | 195 +++++++++++++++++++++++++++++++++++++
drivers/mmc/host/cavium.h | 7 ++
4 files changed, 214 insertions(+)
create mode 100644 drivers/mmc/host/cavium-thunderx.c
diff --git a/drivers/mmc/host/Kconfig b/drivers/mmc/host/Kconfig
index f08691a..c882795 100644
--- a/drivers/mmc/host/Kconfig
+++ b/drivers/mmc/host/Kconfig
@@ -622,6 +622,16 @@ config SDH_BFIN_MISSING_CMD_PULLUP_WORKAROUND
help
If you say yes here SD-Cards may work on the EZkit.
+config MMC_CAVIUM_THUNDERX
+ tristate "Cavium ThunderX SD/MMC Card Interface support"
+ depends on PCI && 64BIT && (ARM64 || COMPILE_TEST)
+ depends on GPIOLIB
+ help
+ This selects the Cavium ThunderX SD/MMC Card Interface.
+ If you have a Cavium ARM64 board with a Multimedia Card slot
+ or a built-in eMMC chip, say Y or M here. If built as a module,
+ the module will be called thunderx-mmc.ko.
+
config MMC_DW
tristate "Synopsys DesignWare Memory Card Interface"
depends on HAS_DMA
diff --git a/drivers/mmc/host/Makefile b/drivers/mmc/host/Makefile
index 6d548c4..f40c6b3 100644
--- a/drivers/mmc/host/Makefile
+++ b/drivers/mmc/host/Makefile
@@ -42,6 +42,8 @@ obj-$(CONFIG_MMC_SDHI) += sh_mobile_sdhi.o
obj-$(CONFIG_MMC_CB710) += cb710-mmc.o
obj-$(CONFIG_MMC_VIA_SDMMC) += via-sdmmc.o
obj-$(CONFIG_SDH_BFIN) += bfin_sdh.o
+thunderx-mmc-objs := cavium.o cavium-thunderx.o
+obj-$(CONFIG_MMC_CAVIUM_THUNDERX) += thunderx-mmc.o
obj-$(CONFIG_MMC_DW) += dw_mmc.o
obj-$(CONFIG_MMC_DW_PLTFM) += dw_mmc-pltfm.o
obj-$(CONFIG_MMC_DW_EXYNOS) += dw_mmc-exynos.o
diff --git a/drivers/mmc/host/cavium-thunderx.c b/drivers/mmc/host/cavium-thunderx.c
new file mode 100644
index 0000000..cba108b
--- /dev/null
+++ b/drivers/mmc/host/cavium-thunderx.c
@@ -0,0 +1,195 @@
+/*
+ * Driver for MMC and SSD cards for Cavium ThunderX SOCs.
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2016 Cavium Inc.
+ */
+#include <linux/dma-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/mmc/mmc.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_platform.h>
+#include <linux/pci.h>
+#include "cavium.h"
+
+static void thunder_mmc_acquire_bus(struct cvm_mmc_host *host)
+{
+ down(&host->mmc_serializer);
+}
+
+static void thunder_mmc_release_bus(struct cvm_mmc_host *host)
+{
+ up(&host->mmc_serializer);
+}
+
+static void thunder_mmc_int_enable(struct cvm_mmc_host *host, u64 val)
+{
+ writeq(val, host->base + MIO_EMM_INT(host));
+ writeq(val, host->base + MIO_EMM_INT_EN_SET(host));
+}
+
+static int thunder_mmc_register_interrupts(struct cvm_mmc_host *host,
+ struct pci_dev *pdev)
+{
+ int nvec, ret, i;
+
+ nvec = pci_alloc_irq_vectors(pdev, 1, 9, PCI_IRQ_MSIX);
+ if (nvec < 0)
+ return nvec;
+
+ /* register interrupts */
+ for (i = 0; i < nvec; i++) {
+ ret = devm_request_irq(&pdev->dev, pci_irq_vector(pdev, i),
+ cvm_mmc_interrupt,
+ 0, cvm_mmc_irq_names[i], host);
+ if (ret)
+ return ret;
+ }
+ return 0;
+}
+
+static int thunder_mmc_probe(struct pci_dev *pdev,
+ const struct pci_device_id *id)
+{
+ struct device_node *node = pdev->dev.of_node;
+ struct device *dev = &pdev->dev;
+ struct device_node *child_node;
+ struct cvm_mmc_host *host;
+ int ret, i = 0;
+
+ host = devm_kzalloc(dev, sizeof(*host), GFP_KERNEL);
+ if (!host)
+ return -ENOMEM;
+
+ pci_set_drvdata(pdev, host);
+ ret = pcim_enable_device(pdev);
+ if (ret)
+ return ret;
+
+ ret = pci_request_regions(pdev, KBUILD_MODNAME);
+ if (ret)
+ return ret;
+
+ host->base = pcim_iomap(pdev, 0, pci_resource_len(pdev, 0));
+ if (!host->base)
+ return -EINVAL;
+
+ /* On ThunderX these are identical */
+ host->dma_base = host->base;
+
+ host->reg_off = 0x2000;
+ host->reg_off_dma = 0x180;
+
+ host->clk = devm_clk_get(dev, NULL);
+ if (IS_ERR(host->clk))
+ return PTR_ERR(host->clk);
+
+ ret = clk_prepare_enable(host->clk);
+ if (ret)
+ return ret;
+ host->sys_freq = clk_get_rate(host->clk);
+
+ spin_lock_init(&host->irq_handler_lock);
+ sema_init(&host->mmc_serializer, 1);
+
+ host->dev = dev;
+ host->acquire_bus = thunder_mmc_acquire_bus;
+ host->release_bus = thunder_mmc_release_bus;
+ host->int_enable = thunder_mmc_int_enable;
+
+ host->big_dma_addr = true;
+ host->need_irq_handler_lock = true;
+ host->last_slot = -1;
+
+ ret = dma_set_mask(dev, DMA_BIT_MASK(48));
+ if (ret)
+ goto error;
+
+ /*
+ * Clear out any pending interrupts that may be left over from
+ * the bootloader. Writing 1 to the bits clears them.
+ */
+ writeq(127, host->base + MIO_EMM_INT_EN(host));
+ writeq(3, host->base + MIO_EMM_DMA_INT_ENA_W1C(host));
+
+ ret = thunder_mmc_register_interrupts(host, pdev);
+ if (ret)
+ goto error;
+
+ for_each_child_of_node(node, child_node) {
+ /*
+ * mmc_of_parse() and devm* require one device per slot.
+ * Create a dummy device per slot and set its node pointer to
+ * the slot's DT node. The easiest way to do this is with
+ * of_platform_device_create().
+ */
+ if (of_device_is_compatible(child_node, "mmc-slot")) {
+ host->slot_pdev[i] = of_platform_device_create(child_node, NULL,
+ &pdev->dev);
+ if (!host->slot_pdev[i])
+ continue;
+
+ ret = cvm_mmc_of_slot_probe(&host->slot_pdev[i]->dev, host);
+ if (ret)
+ goto error;
+ }
+ i++;
+ }
+ dev_info(dev, "probed\n");
+ return 0;
+
+error:
+ clk_disable_unprepare(host->clk);
+ return ret;
+}
+
+static void thunder_mmc_remove(struct pci_dev *pdev)
+{
+ struct cvm_mmc_host *host = pci_get_drvdata(pdev);
+ u64 dma_cfg;
+ int i;
+
+ for (i = 0; i < CAVIUM_MAX_MMC; i++)
+ if (host->slot[i])
+ cvm_mmc_of_slot_remove(host->slot[i]);
+
+ dma_cfg = readq(host->dma_base + MIO_EMM_DMA_CFG(host));
+ dma_cfg &= ~MIO_EMM_DMA_CFG_EN;
+ writeq(dma_cfg, host->dma_base + MIO_EMM_DMA_CFG(host));
+
+ clk_disable_unprepare(host->clk);
+}
+
+static const struct pci_device_id thunder_mmc_id_table[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, 0xa010) },
+ { 0, } /* end of table */
+};
+
+static struct pci_driver thunder_mmc_driver = {
+ .name = KBUILD_MODNAME,
+ .id_table = thunder_mmc_id_table,
+ .probe = thunder_mmc_probe,
+ .remove = thunder_mmc_remove,
+};
+
+static int __init thunder_mmc_init_module(void)
+{
+ return pci_register_driver(&thunder_mmc_driver);
+}
+
+static void __exit thunder_mmc_exit_module(void)
+{
+ pci_unregister_driver(&thunder_mmc_driver);
+}
+
+module_init(thunder_mmc_init_module);
+module_exit(thunder_mmc_exit_module);
+
+MODULE_AUTHOR("Cavium Inc.");
+MODULE_DESCRIPTION("Cavium ThunderX eMMC Driver");
+MODULE_LICENSE("GPL");
+MODULE_DEVICE_TABLE(pci, thunder_mmc_id_table);
diff --git a/drivers/mmc/host/cavium.h b/drivers/mmc/host/cavium.h
index f5d2b61..66eec21 100644
--- a/drivers/mmc/host/cavium.h
+++ b/drivers/mmc/host/cavium.h
@@ -24,6 +24,11 @@
/* DMA register addresses */
#define MIO_EMM_DMA_CFG(x) (0x00 + x->reg_off_dma)
+#define MIO_EMM_DMA_ADR(x) (0x08 + x->reg_off_dma)
+#define MIO_EMM_DMA_INT(x) (0x10 + x->reg_off_dma)
+#define MIO_EMM_DMA_INT_W1S(x) (0x18 + x->reg_off_dma)
+#define MIO_EMM_DMA_INT_ENA_W1S(x) (0x20 + x->reg_off_dma)
+#define MIO_EMM_DMA_INT_ENA_W1C(x) (0x28 + x->reg_off_dma)
/* register addresses */
#define MIO_EMM_CFG(x) (0x00 + x->reg_off)
@@ -39,6 +44,8 @@
#define MIO_EMM_SAMPLE(x) (0x90 + x->reg_off)
#define MIO_EMM_STS_MASK(x) (0x98 + x->reg_off)
#define MIO_EMM_RCA(x) (0xa0 + x->reg_off)
+#define MIO_EMM_INT_EN_SET(x) (0xb0 + x->reg_off)
+#define MIO_EMM_INT_EN_CLR(x) (0xb8 + x->reg_off)
#define MIO_EMM_BUF_IDX(x) (0xe0 + x->reg_off)
#define MIO_EMM_BUF_DAT(x) (0xe8 + x->reg_off)
--
2.9.0.rc0.21.g7777322
Add description of Cavium Octeon and ThunderX SOC device tree bindings.
CC: Ulf Hansson <[email protected]>
CC: Rob Herring <[email protected]>
CC: Mark Rutland <[email protected]>
CC: [email protected]
Signed-off-by: Jan Glauber <[email protected]>
Signed-off-by: David Daney <[email protected]>
Signed-off-by: Steven J. Hill <[email protected]>
Acked-by: Rob Herring <[email protected]>
---
.../devicetree/bindings/mmc/cavium-mmc.txt | 57 ++++++++++++++++++++++
1 file changed, 57 insertions(+)
create mode 100644 Documentation/devicetree/bindings/mmc/cavium-mmc.txt
diff --git a/Documentation/devicetree/bindings/mmc/cavium-mmc.txt b/Documentation/devicetree/bindings/mmc/cavium-mmc.txt
new file mode 100644
index 0000000..1433e62
--- /dev/null
+++ b/Documentation/devicetree/bindings/mmc/cavium-mmc.txt
@@ -0,0 +1,57 @@
+* Cavium Octeon & ThunderX MMC controller
+
+The high-speed MMC host controller on Cavium's SoCs provides an interface
+for MMC and SD memory cards.
+
+The maximum supported speeds are those of the eMMC 4.41 standard as well
+as of the SD 4.0 standard. Only 3.3 Volt is supported.
+
+Required properties:
+ - compatible : should be one of:
+ cavium,octeon-6130-mmc
+ cavium,octeon-7890-mmc
+ cavium,thunder-8190-mmc
+ cavium,thunder-8390-mmc
+ mmc-slot
+ - reg : mmc controller base registers
+ - clocks : phandle
+
+Optional properties:
+ - for cd, bus-width and other generic MMC parameters,
+ please refer to mmc.txt within this directory
+ - cavium,cmd-clk-skew : number of coprocessor clocks before sampling command
+ - cavium,dat-clk-skew : number of coprocessor clocks before sampling data
+
+Deprecated properties:
+- spi-max-frequency : use max-frequency instead
+- cavium,bus-max-width : use bus-width instead
+- power-gpios : use vmmc-supply instead
+- cavium,octeon-6130-mmc-slot : use mmc-slot instead
+
+Examples:
+ mmc_1_4: mmc@1,4 {
+ compatible = "cavium,thunder-8390-mmc";
+ reg = <0x0c00 0 0 0 0>; /* DEVFN = 0x0c (1:4) */
+ #address-cells = <1>;
+ #size-cells = <0>;
+ clocks = <&sclk>;
+
+ mmc-slot@0 {
+ compatible = "mmc-slot";
+ reg = <0>;
+ vmmc-supply = <&mmc_supply_3v3>;
+ max-frequency = <42000000>;
+ bus-width = <4>;
+ cap-sd-highspeed;
+ };
+
+ mmc-slot@1 {
+ compatible = "mmc-slot";
+ reg = <1>;
+ vmmc-supply = <&mmc_supply_3v3>;
+ max-frequency = <42000000>;
+ bus-width = <8>;
+ cap-mmc-highspeed;
+ non-removable;
+ };
+ };
--
2.9.0.rc0.21.g7777322
Hi Jan,
[auto build test ERROR on linus/master]
[also build test ERROR on v4.11-rc4 next-20170331]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
url: https://github.com/0day-ci/linux/commits/Jan-Glauber/Cavium-MMC-driver/20170401-055302
config: sparc-allyesconfig (attached as .config)
compiler: sparc64-linux-gnu-gcc (Debian 6.1.1-9) 6.1.1 20160705
reproduce:
wget https://raw.githubusercontent.com/01org/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# save the attached .config to linux build tree
make.cross ARCH=sparc
All errors (new ones prefixed by >>):
drivers/built-in.o: In function `thunder_mmc_probe':
>> cavium-thunderx.c:(.text+0x2830fcc): undefined reference to `of_platform_device_create'
`.exit.data' referenced in section `.exit.text' of drivers/built-in.o: defined in discarded section `.exit.data' of drivers/built-in.o
`.exit.data' referenced in section `.exit.text' of drivers/built-in.o: defined in discarded section `.exit.data' of drivers/built-in.o
`.exit.data' referenced in section `.exit.text' of drivers/built-in.o: defined in discarded section `.exit.data' of drivers/built-in.o
`.exit.data' referenced in section `.exit.text' of drivers/built-in.o: defined in discarded section `.exit.data' of drivers/built-in.o
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation
Hi Jan,
[auto build test ERROR on linus/master]
[also build test ERROR on v4.11-rc4 next-20170331]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
url: https://github.com/0day-ci/linux/commits/Jan-Glauber/Cavium-MMC-driver/20170401-055302
config: sparc64-allmodconfig (attached as .config)
compiler: sparc64-linux-gnu-gcc (Debian 6.1.1-9) 6.1.1 20160705
reproduce:
wget https://raw.githubusercontent.com/01org/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# save the attached .config to linux build tree
make.cross ARCH=sparc64
All errors (new ones prefixed by >>):
>> ERROR: "of_platform_device_create" [drivers/mmc/host/thunderx-mmc.ko] undefined!
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation
On Sat, Apr 01, 2017 at 12:46:16PM +0800, kbuild test robot wrote:
> Hi Jan,
>
> [auto build test ERROR on linus/master]
> [also build test ERROR on v4.11-rc4 next-20170331]
> [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
>
> url: https://github.com/0day-ci/linux/commits/Jan-Glauber/Cavium-MMC-driver/20170401-055302
> config: sparc64-allmodconfig (attached as .config)
> compiler: sparc64-linux-gnu-gcc (Debian 6.1.1-9) 6.1.1 20160705
> reproduce:
> wget https://raw.githubusercontent.com/01org/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
> chmod +x ~/bin/make.cross
> # save the attached .config to linux build tree
> make.cross ARCH=sparc64
>
> All errors (new ones prefixed by >>):
>
> >> ERROR: "of_platform_device_create" [drivers/mmc/host/thunderx-mmc.ko] undefined!
>
> ---
> 0-DAY kernel test infrastructure Open Source Technology Center
> https://lists.01.org/pipermail/kbuild-all Intel Corporation
commit d431e1fd494546795ab6478dece96532106b5e62
Author: Jan Glauber <[email protected]>
Date: Sat Apr 1 14:43:51 2017 +0200
mmc: thunderx: Make driver depend on OF_ADDRESS
Prevent this compile error (COMPILE_TEST) on sparc64:
>> ERROR: "of_platform_device_create" [drivers/mmc/host/thunderx-mmc.ko] undefined!
Signed-off-by: Jan Glauber <[email protected]>
diff --git a/drivers/mmc/host/Kconfig b/drivers/mmc/host/Kconfig
index c882795..c2d05ba 100644
--- a/drivers/mmc/host/Kconfig
+++ b/drivers/mmc/host/Kconfig
@@ -626,6 +626,7 @@ config MMC_CAVIUM_THUNDERX
tristate "Cavium ThunderX SD/MMC Card Interface support"
depends on PCI && 64BIT && (ARM64 || COMPILE_TEST)
depends on GPIOLIB
+ depends on OF_ADDRESS
help
This selects the Cavium ThunderX SD/MMC Card Interface.
If you have a Cavium ARM64 board with a Multimedia Card slot
On 30 March 2017 at 17:31, Jan Glauber <[email protected]> wrote:
> Hi Ulf,
>
> we have a bug on some Octeon platforms so I removed the Octeon driver for now
> (but kept the DT bindings for it). We'll submit the Octeon driver later when
> we've fixed the issue.
>
> Changes to v12:
> - dts: use generic "mmc-slot" for slots
> - dts: mention deprecated power gpio
> - Rename driver files
> - Use hardcoded voltage instead of mmc_of_parse_voltage()
> - Phase out gpiod usage from cavium.c
> - Change DT property scan order
> - Clean up bus_width setting
> - Use GPIOLIB depend for ThunderX driver
> - ThunderX: Remove TODO
> - ThunderX: Move platform pointers to host struct
> - Check slot node compatible string
> - Remove gpio includes from ThunderX driver
>
> Changes to v11:
> - Fix build error and kill IS_ENABLED() by using an offset per arch
> - Added Rob's ACK for the DT bindings
> - Removed obsolete voltage-ranges from DT example
> - Replace pci_msix_enable() with pci_alloc_irq_vectors()
> - Remove superior hardware comment
> - Prefixed probe/removal functions with of_
> - Merged OF parsing code into one function, change order of property
> lookup and simplify code
> - Removed slot->sclock, no need to store it there
> - Substituted now invisible mmc_card_blockaddr()
> - Use new 3.3V CAP for DDR
> - Update Copyright
> - Allow set_ios to set clock to zero
> - Converted bitfields to shift-n-mask logic
> - Improved error codes after receiving error interrupt
> - Added ifndef guards to header
> - Add meaningful interrupt names
> - Remove stale mmc_host_ops prototype
>
> Changes to v10:
> - Renamed files to get a common prefix
> - Select GPIO driver in Kconfig
> - Support a fixed regulator
> - dts: fixed quotes and re-ordered example
> - Use new MMC_CAP_3_3V_DDR instead of 1_8V hack
> - Use blksz instead of now internal mmc_card_blockaddr
> - Added some maintainers
>
> Previous versions:
> v10: https://www.mail-archive.com/[email protected]/msg1295316.html
> v9: http://marc.info/?l=linux-mmc&m=147431759215233&w=2
>
> Cheers,
> Jan
Thanks, applied for next! Amending patch 3 with the fix you posted on top.
Kind regards
Uffe
>
> -------
>
>
> Jan Glauber (6):
> dt-bindings: mmc: Add Cavium SOCs MMC bindings
> mmc: cavium: Add core MMC driver for Cavium SOCs
> mmc: cavium: Add MMC PCI driver for ThunderX SOCs
> mmc: cavium: Add scatter-gather DMA support
> mmc: cavium: Support DDR mode for eMMC devices
> MAINTAINERS: Add entry for Cavium MMC driver
>
> .../devicetree/bindings/mmc/cavium-mmc.txt | 57 +
> MAINTAINERS | 8 +
> drivers/mmc/host/Kconfig | 10 +
> drivers/mmc/host/Makefile | 2 +
> drivers/mmc/host/cavium-thunderx.c | 198 ++++
> drivers/mmc/host/cavium.c | 1090 ++++++++++++++++++++
> drivers/mmc/host/cavium.h | 215 ++++
> 7 files changed, 1580 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/mmc/cavium-mmc.txt
> create mode 100644 drivers/mmc/host/cavium-thunderx.c
> create mode 100644 drivers/mmc/host/cavium.c
> create mode 100644 drivers/mmc/host/cavium.h
>
> --
> 2.9.0.rc0.21.g7777322
>
Hi,
On Thu, Mar 30, 2017 at 05:31:22PM +0200, Jan Glauber wrote:
> Hi Ulf,
>
> we have a bug on some Octeon platforms so I removed the Octeon driver for now
> (but kept the DT bindings for it). We'll submit the Octeon driver later when
> we've fixed the issue.
Please rather post a new version that also works with OCTEON. I don't
think a partial driver should be merged; originally this driver was
working fine with OCTEON so there should be no issue supporting that?!
A.
On 04/12/2017 05:37 PM, Aaro Koskinen wrote:
>
> Please rather post a new version that also works with OCTEON. I don't
> think a partial driver should be merged; originally this driver was
> working fine with OCTEON so there should be no issue supporting that?!
>
Hey Aaro.
The difference is that Jan added scatter/gather support to take
advantage of the DMA FIFOs on Thunder. The same FIFOs exist on
Octeon parts 73xx, 76xx, 78xx, CNF73xx, and CNF75xx to name a
few. In order to support those, portions of the Octeon platform
code had to be rewritten as well as minor changes in the core
Cavium driver code. I have a fully tested patchset that cleanly
applies on top of Jan's v13 driver. My personal preference is
for the Octeon code to be a separate patch. I will defer that
decision to Jan, David and the MMC maintainers.
Steve