Hi,
The series has a build dependency on the ti_sci/soc series (v1):
https://lore.kernel.org/lkml/[email protected]/
The unmapped event handling in INTA is also needed, but it is not a build
dependency (v2):
https://lore.kernel.org/lkml/[email protected]/
The DMSS, introduced with AM64 as a simplified data movement engine, is built
on similar grounds as the K3 NAVSS and UDMAP, but with significant
architectural changes.
- Rings are built into the DMAs
The DMAs no longer use the general purpose ringacc; all rings have been moved
inside the DMAs. The new rings within the DMAs are simplified and are dual
directional, compared to the unidirectional rings in ringacc.
There is no longer a concept of general purpose rings; all rings are assigned
to specific channels or flows.
- Per channel coherency support
The DMAs use the 'ASEL' bits to select the data and configuration fetch path.
The ASEL bits are placed in the unused parts of any address field used by the
DMAs (pointers to descriptors, addresses in descriptors, ring base addresses).
The ASEL is not part of the address itself (the DMAs can address 48 bits).
Individual channels can be configured to be coherent (via the ACP port) or
non-coherent by setting the ASEL to the appropriate value; see the sketch
after this list.
- Two different DMAs (well, three actually)
PKTDMA
Similar to UDMAP channels configured in packet mode.
The flow configuration of the channels has changed significantly: each channel
has at least one flow assigned at design time, and each flow is directly
mapped to a corresponding ring.
When multiple flows are assigned, the channel can only use the flows within
its assigned range.
PKTDMA also introduces multiple tflows, which did not exist in UDMAP.
BCDMA
It has two types of channels:
- split channels (tchan/rchan): Similar to UDMAP channels configured in TR mode.
- Block copy channels (bchan): Similar to EDMA or traditional DMA channels;
they can be used for mem2mem type of transfers or to service peripherals not
accessible via PSI-L by using external triggers for the TR.
BCDMA channels do not have support for multiple flows.
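To illustrate the per channel coherency bullet above, a minimal sketch of how
the ASEL bits sit above the 48 bit address (the helper name is made up for
illustration; K3_ADDRESS_ASEL_SHIFT comes from the ringacc patch in this
series):

	#define K3_ADDRESS_ASEL_SHIFT	48

	/*
	 * Bits [47:0] carry the physical address; the ASEL value is placed
	 * above them on the way to the DMA and is stripped again when the
	 * kernel reads the element back.
	 */
	static inline u64 k3_addr_with_asel(u64 addr, u32 asel)
	{
		return addr | ((u64)asel << K3_ADDRESS_ASEL_SHIFT);
	}
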
With the introduction of the new DMAs (especially the BCDMA) we also need to
update the resource manager code to support the second range from sysfw for
UDMA channels.
The two outstanding changes in the series, in my view, are:
the handling of the DMAs' ASEL sideband signal to select the path, providing
coherent or non-coherent operation;
and the smaller one, the device_router_config callback, which allows the
configuration of the triggers when BCDMA is servicing a triggering peripheral
and solves a chicken-and-egg situation:
the router needs to know the event number to send, which in turn depends on
the channel we got for servicing the peripheral.
I'm sending this series as early as possible to have time for review and
changes.
When all things are resolved, it would be nice if Santosh could create an
immutable branch with the ti_sci/soc patches for Vinod to use for this series.
Regards,
Peter
---
Grygorii Strashko (1):
soc: ti: k3-ringacc: add AM64 DMA rings support.
Peter Ujfalusi (16):
dmaengine: of-dma: Add support for optional router configuration
callback
dmaengine: Add support for per channel coherency handling
dmaengine: doc: client: Update for dmaengine_get_dma_device() usage
dmaengine: dmatest: Use dmaengine_get_dma_device
dmaengine: ti: k3-udma: Wait for peer teardown completion if supported
dmaengine: ti: k3-udma: Add support for second resource range from
sysfw
dmaengine: ti: k3-udma-glue: Add function to get device pointer for
DMA API
dmaengine: ti: k3-udma-glue: Configure the dma_dev for rings
dt-bindings: dma: ti: Add document for K3 BCDMA
dt-bindings: dma: ti: Add document for K3 PKTDMA
dmaengine: ti: k3-psil: Extend psil_endpoint_config for K3 PKTDMA
dmaengine: ti: k3-psil: Add initial map for AM64
dmaengine: ti: Add support for k3 event routers
dmaengine: ti: k3-udma: Initial support for K3 BCDMA
dmaengine: ti: k3-udma: Add support for BCDMA channel TPL handling
dmaengine: ti: k3-udma: Initial support for K3 PKTDMA
Vignesh Raghavendra (1):
dmaengine: ti: k3-udma-glue: Add support for K3 PKTDMA
.../devicetree/bindings/dma/ti/k3-bcdma.yaml | 183 ++
.../devicetree/bindings/dma/ti/k3-pktdma.yaml | 189 ++
Documentation/driver-api/dmaengine/client.rst | 4 +-
drivers/dma/dmatest.c | 13 +-
drivers/dma/of-dma.c | 10 +
drivers/dma/ti/Makefile | 3 +-
drivers/dma/ti/k3-psil-am64.c | 75 +
drivers/dma/ti/k3-psil-priv.h | 1 +
drivers/dma/ti/k3-psil.c | 1 +
drivers/dma/ti/k3-udma-glue.c | 294 ++-
drivers/dma/ti/k3-udma-private.c | 39 +
drivers/dma/ti/k3-udma.c | 1975 +++++++++++++++--
drivers/dma/ti/k3-udma.h | 27 +-
drivers/soc/ti/k3-ringacc.c | 325 ++-
include/linux/dma/k3-event-router.h | 16 +
include/linux/dma/k3-psil.h | 16 +
include/linux/dma/k3-udma-glue.h | 12 +
include/linux/dmaengine.h | 14 +
include/linux/soc/ti/k3-ringacc.h | 17 +
19 files changed, 2994 insertions(+), 220 deletions(-)
create mode 100644 Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml
create mode 100644 Documentation/devicetree/bindings/dma/ti/k3-pktdma.yaml
create mode 100644 drivers/dma/ti/k3-psil-am64.c
create mode 100644 include/linux/dma/k3-event-router.h
--
Peter
Additional configuration for the DMA event router might be needed for a
channel, which cannot be done during the device_alloc_chan_resources
callback since the router information is not yet available to the drivers.
If there is a need for additional configuration for the channel when a DMA
router is in use, then the driver can implement the device_router_config
callback.
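As a sketch, the driver side only needs the callback wired up (hypothetical
driver, names made up; chan->router and chan->route_data are already set by
of_dma_router_xlate() when the callback runs):

	static int foo_router_config(struct dma_chan *chan)
	{
		/*
		 * The channel is known here; foo_program_trigger() is a
		 * hypothetical driver helper programming the router.
		 */
		return foo_program_trigger(chan->route_data, chan->chan_id);
	}

	/* at probe time */
	foo_ud->ddev.device_router_config = foo_router_config;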
Signed-off-by: Peter Ujfalusi <[email protected]>
---
drivers/dma/of-dma.c | 10 ++++++++++
include/linux/dmaengine.h | 2 ++
2 files changed, 12 insertions(+)
diff --git a/drivers/dma/of-dma.c b/drivers/dma/of-dma.c
index 8a4f608904b9..ec00b20ae8e4 100644
--- a/drivers/dma/of-dma.c
+++ b/drivers/dma/of-dma.c
@@ -75,8 +75,18 @@ static struct dma_chan *of_dma_router_xlate(struct of_phandle_args *dma_spec,
ofdma->dma_router->route_free(ofdma->dma_router->dev,
route_data);
} else {
+ int ret = 0;
+
chan->router = ofdma->dma_router;
chan->route_data = route_data;
+
+ if (chan->device->device_router_config)
+ ret = chan->device->device_router_config(chan);
+
+ if (ret) {
+ dma_release_channel(chan);
+ chan = ERR_PTR(ret);
+ }
}
/*
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index dd357a747780..d6197fe875af 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -800,6 +800,7 @@ struct dma_filter {
* by tx_status
* @device_alloc_chan_resources: allocate resources and return the
* number of allocated descriptors
+ * @device_router_config: optional callback for DMA router configuration
* @device_free_chan_resources: release DMA channel's resources
* @device_prep_dma_memcpy: prepares a memcpy operation
* @device_prep_dma_xor: prepares a xor operation
@@ -874,6 +875,7 @@ struct dma_device {
enum dma_residue_granularity residue_granularity;
int (*device_alloc_chan_resources)(struct dma_chan *chan);
+ int (*device_router_config)(struct dma_chan *chan);
void (*device_free_chan_resources)(struct dma_chan *chan);
struct dma_async_tx_descriptor *(*device_prep_dma_memcpy)(
--
Peter
New binding document for
Texas Instruments K3 Packet DMA (PKTDMA).
PKTDMA is introduced as part of AM64.
Signed-off-by: Peter Ujfalusi <[email protected]>
---
.../devicetree/bindings/dma/ti/k3-pktdma.yaml | 189 ++++++++++++++++++
1 file changed, 189 insertions(+)
create mode 100644 Documentation/devicetree/bindings/dma/ti/k3-pktdma.yaml
diff --git a/Documentation/devicetree/bindings/dma/ti/k3-pktdma.yaml b/Documentation/devicetree/bindings/dma/ti/k3-pktdma.yaml
new file mode 100644
index 000000000000..75b35bd85830
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/ti/k3-pktdma.yaml
@@ -0,0 +1,189 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/dma/ti/k3-pktdma.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Texas Instruments K3 DMSS PKTDMA Device Tree Bindings
+
+maintainers:
+ - Peter Ujfalusi <[email protected]>
+
+description: |
+ The Packet DMA (PKTDMA) is intended to perform similar functions as the packet
+ mode channels of K3 UDMA-P.
+ PKTDMA only includes Split channels to service PSI-L based peripherals.
+
+ The peripherals can be PSI-L native or legacy, non PSI-L native peripherals
+ with PDMAs. PDMA is tasked to act as a bridge between the PSI-L fabric and the
+ legacy peripheral.
+
+ PDMAs can be configured via PKTDMA split channel's peer registers to match
+ with the configuration of the legacy peripheral.
+
+allOf:
+ - $ref: /schemas/dma/dma-controller.yaml#
+
+properties:
+ "#dma-cells":
+ const: 2
+ description: |
+ The first cell is the PSI-L thread ID of the remote (to PKTDMA) end.
+ Valid ranges for thread ID depend on the data movement direction:
+ for source thread IDs (rx): 0 - 0x7fff
+ for destination thread IDs (tx): 0x8000 - 0xffff
+
+ Please refer to the device documentation for the PSI-L thread map and also
+ the PSI-L peripheral chapter for the correct thread ID.
+
+ The second cell is the ASEL value for the channel
+
+ compatible:
+ enum:
+ - ti,am64-dmss-pktdma
+
+ "#address-cells":
+ const: 2
+
+ "#size-cells":
+ const: 2
+
+ reg:
+ maxItems: 4
+
+ reg-names:
+ items:
+ - const: gcfg
+ - const: rchanrt
+ - const: tchanrt
+ - const: ringrt
+
+ msi-parent: true
+
+ ti,sci:
+ description: phandle to TI-SCI compatible System controller node
+ allOf:
+ - $ref: /schemas/types.yaml#/definitions/phandle
+
+ ti,sci-dev-id:
+ description: TI-SCI device id of PKTDMA
+ allOf:
+ - $ref: /schemas/types.yaml#/definitions/uint32
+
+ ti,sci-rm-range-tchan:
+ description: |
+ Array of PKTDMA split tx channel resource subtypes for resource allocation
+ for this host
+ allOf:
+ - $ref: /schemas/types.yaml#/definitions/uint32-array
+ minItems: 1
+ # Should be enough
+ maxItems: 255
+
+ ti,sci-rm-range-tflow:
+ description: |
+ Array of PKTDMA split tx flow resource subtypes for resource allocation
+ for this host
+ allOf:
+ - $ref: /schemas/types.yaml#/definitions/uint32-array
+ minItems: 1
+ # Should be enough
+ maxItems: 255
+
+ ti,sci-rm-range-rchan:
+ description: |
+ Array of PKTDMA split rx channel resource subtypes for resource allocation
+ for this host
+ allOf:
+ - $ref: /schemas/types.yaml#/definitions/uint32-array
+ minItems: 1
+ # Should be enough
+ maxItems: 255
+
+ ti,sci-rm-range-rflow:
+ description: |
+ Array of PKTDMA split rx flow resource subtypes for resource allocation
+ for this host
+ allOf:
+ - $ref: /schemas/types.yaml#/definitions/uint32-array
+ minItems: 1
+ # Should be enough
+ maxItems: 255
+
+required:
+ - compatible
+ - "#address-cells"
+ - "#size-cells"
+ - "#dma-cells"
+ - reg
+ - reg-names
+ - msi-parent
+ - ti,sci
+ - ti,sci-dev-id
+ - ti,sci-rm-range-tchan
+ - ti,sci-rm-range-tflow
+ - ti,sci-rm-range-rchan
+ - ti,sci-rm-range-rflow
+
+additionalProperties: false
+
+examples:
+ - |+
+ cbass_main {
+ #address-cells = <2>;
+ #size-cells = <2>;
+
+ main_dmss {
+ compatible = "simple-mfd";
+ #address-cells = <2>;
+ #size-cells = <2>;
+ dma-ranges;
+ ranges;
+
+ ti,sci-dev-id = <25>;
+
+ main_pktdma: dma-controller@485c0000 {
+ compatible = "ti,am64-dmss-pktdma";
+ #address-cells = <2>;
+ #size-cells = <2>;
+
+ reg = <0x0 0x485c0000 0x0 0x100>,
+ <0x0 0x4a800000 0x0 0x20000>,
+ <0x0 0x4aa00000 0x0 0x40000>,
+ <0x0 0x4b800000 0x0 0x400000>;
+ reg-names = "gcfg", "rchanrt", "tchanrt", "ringrt";
+ msi-parent = <&inta_main_dmss>;
+ #dma-cells = <2>;
+
+ ti,sci = <&dmsc>;
+ ti,sci-dev-id = <30>;
+
+ ti,sci-rm-range-tchan = <0x23>, /* UNMAPPED_TX_CHAN */
+ <0x24>, /* CPSW_TX_CHAN */
+ <0x25>, /* SAUL_TX_0_CHAN */
+ <0x26>, /* SAUL_TX_1_CHAN */
+ <0x27>, /* ICSSG_0_TX_CHAN */
+ <0x28>; /* ICSSG_1_TX_CHAN */
+ ti,sci-rm-range-tflow = <0x10>, /* RING_UNMAPPED_TX_CHAN */
+ <0x11>, /* RING_CPSW_TX_CHAN */
+ <0x12>, /* RING_SAUL_TX_0_CHAN */
+ <0x13>, /* RING_SAUL_TX_1_CHAN */
+ <0x14>, /* RING_ICSSG_0_TX_CHAN */
+ <0x15>; /* RING_ICSSG_1_TX_CHAN */
+ ti,sci-rm-range-rchan = <0x29>, /* UNMAPPED_RX_CHAN */
+ <0x2b>, /* CPSW_RX_CHAN */
+ <0x2d>, /* SAUL_RX_0_CHAN */
+ <0x2f>, /* SAUL_RX_1_CHAN */
+ <0x31>, /* SAUL_RX_2_CHAN */
+ <0x33>, /* SAUL_RX_3_CHAN */
+ <0x35>, /* ICSSG_0_RX_CHAN */
+ <0x37>; /* ICSSG_1_RX_CHAN */
+ ti,sci-rm-range-rflow = <0x2a>, /* FLOW_UNMAPPED_RX_CHAN */
+ <0x2c>, /* FLOW_CPSW_RX_CHAN */
+ <0x2e>, /* FLOW_SAUL_RX_0/1_CHAN */
+ <0x32>, /* FLOW_SAUL_RX_2/3_CHAN */
+ <0x36>, /* FLOW_ICSSG_0_RX_CHAN */
+ <0x38>; /* FLOW_ICSSG_1_RX_CHAN */
+ };
+ };
+ };
--
Peter
Additional fields are needed for K3 PKTDMA to be able to handle the mapped
channels (channels locked to service specific threads) and the flow ranges
for these mapped threads.
PKTDMA also introduces tflows for tx channels, which do not exist in the
K3 UDMA architecture.
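For reference, a mapped PKTDMA endpoint entry could look something like this
(hypothetical thread and flow values; only the new fields are the point):

	static struct psil_endpoint_config foo_ep_cfg = {
		.ep_type = PSIL_EP_NATIVE,
		.pkt_mode = 1,
		.mapped_channel_id = 4,
		/* flows 17..24 belong to this mapped channel */
		.flow_start = 17,
		.flow_num = 8,
		.default_flow_id = 17,
	};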
Signed-off-by: Peter Ujfalusi <[email protected]>
---
include/linux/dma/k3-psil.h | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/include/linux/dma/k3-psil.h b/include/linux/dma/k3-psil.h
index 1962f75fa2d3..36e22c5a0f29 100644
--- a/include/linux/dma/k3-psil.h
+++ b/include/linux/dma/k3-psil.h
@@ -50,6 +50,15 @@ enum psil_endpoint_type {
* @channel_tpl: Desired throughput level for the channel
* @pdma_acc32: ACC32 must be enabled on the PDMA side
* @pdma_burst: BURST must be enabled on the PDMA side
+ * @mapped_channel_id: PKTDMA thread to channel mapping for mapped channels.
+ * The thread must be serviced by the specified channel if
+ * mapped_channel_id is >= 0 in case of PKTDMA
+ * @flow_start: PKTDMA flow range start of mapped channel. Unmapped
+ * channels use flow_id == chan_id
+ * @flow_num: PKTDMA flow count of mapped channel. Unmapped channels
+ * use flow_id == chan_id
+ * @default_flow_id: PKTDMA default (r)flow index of mapped channel.
+ * Must be within the flow range of the mapped channel.
*/
struct psil_endpoint_config {
enum psil_endpoint_type ep_type;
@@ -63,6 +72,13 @@ struct psil_endpoint_config {
/* PDMA properties, valid for PSIL_EP_PDMA_* */
unsigned pdma_acc32:1;
unsigned pdma_burst:1;
+
+ /* PKTDMA mapped channel */
+ int mapped_channel_id;
+ /* PKTDMA tflow and rflow ranges for mapped channel */
+ u16 flow_start;
+ u16 flow_num;
+ u16 default_flow_id;
};
int psil_set_new_ep_config(struct device *dev, const char *name,
--
Peter
From: Grygorii Strashko <[email protected]>
The DMAs in AM64 have built-in rings, compared to AM654/J721e/J7200 where a
separate, generic ringacc is used.
The ring SW interface is similar to that of ringacc, with some major
architectural differences, like:
- They are part of the DMA (BCDMA or PKTDMA).
- They are dual mode rings, modeled as a pair of ring objects which share a
  common configuration and memory buffer, but have separate real-time control
  register sets for each direction: mem2dev (forward) and dev2mem (reverse).
The ringacc driver must be initialized for DMA ring use with
k3_ringacc_dmarings_init(), as it is not an independent device the way
ringacc is.
AM64 rings must be requested only via k3_ringacc_request_rings_pair(), and
the forward ring must always be initialized/configured. After this, any other
ringacc API can be used without caller changes.
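Usage from the DMA driver side is a couple of lines; a sketch along the lines
of what the BCDMA/PKTDMA patches later in the series do (field values are
taken from the DMA driver's own resource counts):

	struct k3_ringacc_init_data ring_init_data;

	ring_init_data.tisci = ud->tisci_rm.tisci;
	ring_init_data.tisci_dev_id = ud->tisci_rm.tisci_dev_id;
	ring_init_data.num_rings = ud->bchan_cnt + ud->tchan_cnt +
				   ud->rchan_cnt;

	ud->ringacc = k3_ringacc_dmarings_init(pdev, &ring_init_data);
	if (IS_ERR(ud->ringacc))
		return PTR_ERR(ud->ringacc);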
Signed-off-by: Grygorii Strashko <[email protected]>
Signed-off-by: Peter Ujfalusi <[email protected]>
---
drivers/soc/ti/k3-ringacc.c | 325 +++++++++++++++++++++++++++++-
include/linux/soc/ti/k3-ringacc.h | 17 ++
2 files changed, 335 insertions(+), 7 deletions(-)
diff --git a/drivers/soc/ti/k3-ringacc.c b/drivers/soc/ti/k3-ringacc.c
index 7d0b4092fce8..25eca75b859a 100644
--- a/drivers/soc/ti/k3-ringacc.c
+++ b/drivers/soc/ti/k3-ringacc.c
@@ -11,6 +11,7 @@
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/sys_soc.h>
+#include <linux/dma/ti-cppi5.h>
#include <linux/soc/ti/k3-ringacc.h>
#include <linux/soc/ti/ti_sci_protocol.h>
#include <linux/soc/ti/ti_sci_inta_msi.h>
@@ -21,6 +22,7 @@ static LIST_HEAD(k3_ringacc_list);
static DEFINE_MUTEX(k3_ringacc_list_lock);
#define K3_RINGACC_CFG_RING_SIZE_ELCNT_MASK GENMASK(19, 0)
+#define K3_DMARING_CFG_RING_SIZE_ELCNT_MASK GENMASK(15, 0)
/**
* struct k3_ring_rt_regs - The RA realtime Control/Status Registers region
@@ -43,7 +45,13 @@ struct k3_ring_rt_regs {
u32 hwindx;
};
-#define K3_RINGACC_RT_REGS_STEP 0x1000
+#define K3_RINGACC_RT_REGS_STEP 0x1000
+#define K3_DMARING_RT_REGS_STEP 0x2000
+#define K3_DMARING_RT_REGS_REVERSE_OFS 0x1000
+#define K3_RINGACC_RT_OCC_MASK GENMASK(20, 0)
+#define K3_DMARING_RT_OCC_TDOWN_COMPLETE BIT(31)
+#define K3_DMARING_RT_DB_ENTRY_MASK GENMASK(7, 0)
+#define K3_DMARING_RT_DB_TDOWN_ACK BIT(31)
/**
* struct k3_ring_fifo_regs - The Ring Accelerator Queues Registers region
@@ -122,6 +130,7 @@ struct k3_ring_state {
u32 occ;
u32 windex;
u32 rindex;
+ u32 tdown_complete:1;
};
/**
@@ -142,6 +151,7 @@ struct k3_ring_state {
* @use_count: Use count for shared rings
* @proxy_id: RA Ring Proxy Id (only if @K3_RINGACC_RING_USE_PROXY)
* @dma_dev: device to be used for DMA API (allocation, mapping)
+ * @asel: Address Space Select value for physical addresses
*/
struct k3_ring {
struct k3_ring_rt_regs __iomem *rt;
@@ -156,12 +166,15 @@ struct k3_ring {
u32 flags;
#define K3_RING_FLAG_BUSY BIT(1)
#define K3_RING_FLAG_SHARED BIT(2)
+#define K3_RING_FLAG_REVERSE BIT(3)
struct k3_ring_state state;
u32 ring_id;
struct k3_ringacc *parent;
u32 use_count;
int proxy_id;
struct device *dma_dev;
+ u32 asel;
+#define K3_ADDRESS_ASEL_SHIFT 48
};
struct k3_ringacc_ops {
@@ -187,6 +200,7 @@ struct k3_ringacc_ops {
* @tisci_ring_ops: ti-sci rings ops
* @tisci_dev_id: ti-sci device id
* @ops: SoC specific ringacc operation
+ * @dma_rings: indicate DMA ring (dual ring within BCDMA/PKTDMA)
*/
struct k3_ringacc {
struct device *dev;
@@ -209,6 +223,7 @@ struct k3_ringacc {
u32 tisci_dev_id;
const struct k3_ringacc_ops *ops;
+ bool dma_rings;
};
/**
@@ -220,6 +235,21 @@ struct k3_ringacc_soc_data {
unsigned dma_ring_reset_quirk:1;
};
+static int k3_ringacc_ring_read_occ(struct k3_ring *ring)
+{
+ return readl(&ring->rt->occ) & K3_RINGACC_RT_OCC_MASK;
+}
+
+static void k3_ringacc_ring_update_occ(struct k3_ring *ring)
+{
+ u32 val;
+
+ val = readl(&ring->rt->occ);
+
+ ring->state.occ = val & K3_RINGACC_RT_OCC_MASK;
+ ring->state.tdown_complete = !!(val & K3_DMARING_RT_OCC_TDOWN_COMPLETE);
+}
+
static long k3_ringacc_ring_get_fifo_pos(struct k3_ring *ring)
{
return K3_RINGACC_FIFO_WINDOW_SIZE_BYTES -
@@ -233,12 +263,24 @@ static void *k3_ringacc_get_elm_addr(struct k3_ring *ring, u32 idx)
static int k3_ringacc_ring_push_mem(struct k3_ring *ring, void *elem);
static int k3_ringacc_ring_pop_mem(struct k3_ring *ring, void *elem);
+static int k3_dmaring_fwd_pop(struct k3_ring *ring, void *elem);
+static int k3_dmaring_reverse_pop(struct k3_ring *ring, void *elem);
static struct k3_ring_ops k3_ring_mode_ring_ops = {
.push_tail = k3_ringacc_ring_push_mem,
.pop_head = k3_ringacc_ring_pop_mem,
};
+static struct k3_ring_ops k3_dmaring_fwd_ops = {
+ .push_tail = k3_ringacc_ring_push_mem,
+ .pop_head = k3_dmaring_fwd_pop,
+};
+
+static struct k3_ring_ops k3_dmaring_reverse_ops = {
+ /* Reverse side of the DMA ring can only be popped by SW */
+ .pop_head = k3_dmaring_reverse_pop,
+};
+
static int k3_ringacc_ring_push_io(struct k3_ring *ring, void *elem);
static int k3_ringacc_ring_pop_io(struct k3_ring *ring, void *elem);
static int k3_ringacc_ring_push_head_io(struct k3_ring *ring, void *elem);
@@ -341,6 +383,40 @@ struct k3_ring *k3_ringacc_request_ring(struct k3_ringacc *ringacc,
}
EXPORT_SYMBOL_GPL(k3_ringacc_request_ring);
+static int k3_dmaring_request_dual_ring(struct k3_ringacc *ringacc, int fwd_id,
+ struct k3_ring **fwd_ring,
+ struct k3_ring **compl_ring)
+{
+ int ret = 0;
+
+ /*
+ * DMA rings must be requested by ID, completion ring is the reverse
+ * side of the forward ring
+ */
+ if (fwd_id < 0)
+ return -EINVAL;
+
+ mutex_lock(&ringacc->req_lock);
+
+ if (test_bit(fwd_id, ringacc->rings_inuse)) {
+ ret = -EBUSY;
+ goto error;
+ }
+
+ *fwd_ring = &ringacc->rings[fwd_id];
+ *compl_ring = &ringacc->rings[fwd_id + ringacc->num_rings];
+ set_bit(fwd_id, ringacc->rings_inuse);
+ ringacc->rings[fwd_id].use_count++;
+ dev_dbg(ringacc->dev, "Giving ring#%d\n", fwd_id);
+
+ mutex_unlock(&ringacc->req_lock);
+ return 0;
+
+error:
+ mutex_unlock(&ringacc->req_lock);
+ return ret;
+}
+
int k3_ringacc_request_rings_pair(struct k3_ringacc *ringacc,
int fwd_id, int compl_id,
struct k3_ring **fwd_ring,
@@ -351,6 +427,10 @@ int k3_ringacc_request_rings_pair(struct k3_ringacc *ringacc,
if (!fwd_ring || !compl_ring)
return -EINVAL;
+ if (ringacc->dma_rings)
+ return k3_dmaring_request_dual_ring(ringacc, fwd_id,
+ fwd_ring, compl_ring);
+
*fwd_ring = k3_ringacc_request_ring(ringacc, fwd_id, 0);
if (!(*fwd_ring))
return -ENODEV;
@@ -420,7 +500,7 @@ void k3_ringacc_ring_reset_dma(struct k3_ring *ring, u32 occ)
goto reset;
if (!occ)
- occ = readl(&ring->rt->occ);
+ occ = k3_ringacc_ring_read_occ(ring);
if (occ) {
u32 db_ring_cnt, db_ring_cnt_cur;
@@ -495,6 +575,13 @@ int k3_ringacc_ring_free(struct k3_ring *ring)
ringacc = ring->parent;
+ /*
+ * DMA rings: the forward and reverse rings share memory and configuration;
+ * only the forward ring is configured, the reverse ring is considered its slave.
+ */
+ if (ringacc->dma_rings && (ring->flags & K3_RING_FLAG_REVERSE))
+ return 0;
+
dev_dbg(ring->parent->dev, "flags: 0x%08x\n", ring->flags);
if (!test_bit(ring->ring_id, ringacc->rings_inuse))
@@ -516,6 +603,8 @@ int k3_ringacc_ring_free(struct k3_ring *ring)
ring->flags = 0;
ring->ops = NULL;
ring->dma_dev = NULL;
+ ring->asel = 0;
+
if (ring->proxy_id != K3_RINGACC_PROXY_NOT_USED) {
clear_bit(ring->proxy_id, ringacc->proxy_inuse);
ring->proxy = NULL;
@@ -580,6 +669,7 @@ static int k3_ringacc_ring_cfg_sci(struct k3_ring *ring)
ring_cfg.count = ring->size;
ring_cfg.mode = ring->mode;
ring_cfg.size = ring->elm_size;
+ ring_cfg.asel = ring->asel;
ret = ringacc->tisci_ring_ops->set_cfg(ringacc->tisci, &ring_cfg);
if (ret)
@@ -589,6 +679,90 @@ static int k3_ringacc_ring_cfg_sci(struct k3_ring *ring)
return ret;
}
+static int k3_dmaring_cfg(struct k3_ring *ring, struct k3_ring_cfg *cfg)
+{
+ struct k3_ringacc *ringacc;
+ struct k3_ring *reverse_ring;
+ int ret = 0;
+
+ if (cfg->elm_size != K3_RINGACC_RING_ELSIZE_8 ||
+ cfg->mode != K3_RINGACC_RING_MODE_RING ||
+ cfg->size & ~K3_DMARING_CFG_RING_SIZE_ELCNT_MASK)
+ return -EINVAL;
+
+ ringacc = ring->parent;
+
+ /*
+ * DMA rings: the forward and reverse rings share memory and configuration;
+ * only the forward ring is configured, the reverse ring is considered its slave.
+ */
+ if (ringacc->dma_rings && (ring->flags & K3_RING_FLAG_REVERSE))
+ return 0;
+
+ if (!test_bit(ring->ring_id, ringacc->rings_inuse))
+ return -EINVAL;
+
+ ring->size = cfg->size;
+ ring->elm_size = cfg->elm_size;
+ ring->mode = cfg->mode;
+ ring->asel = cfg->asel;
+ ring->dma_dev = cfg->dma_dev;
+ if (!ring->dma_dev) {
+ dev_warn(ringacc->dev, "dma_dev is not provided for ring%d\n",
+ ring->ring_id);
+ ring->dma_dev = ringacc->dev;
+ }
+
+ memset(&ring->state, 0, sizeof(ring->state));
+
+ ring->ops = &k3_dmaring_fwd_ops;
+
+ ring->ring_mem_virt = dma_alloc_coherent(ring->dma_dev,
+ ring->size * (4 << ring->elm_size),
+ &ring->ring_mem_dma, GFP_KERNEL);
+ if (!ring->ring_mem_virt) {
+ dev_err(ringacc->dev, "Failed to alloc ring mem\n");
+ ret = -ENOMEM;
+ goto err_free_ops;
+ }
+
+ ret = k3_ringacc_ring_cfg_sci(ring);
+ if (ret)
+ goto err_free_mem;
+
+ ring->flags |= K3_RING_FLAG_BUSY;
+
+ k3_ringacc_ring_dump(ring);
+
+ /* DMA rings: configure reverse ring */
+ reverse_ring = &ringacc->rings[ring->ring_id + ringacc->num_rings];
+ reverse_ring->size = cfg->size;
+ reverse_ring->elm_size = cfg->elm_size;
+ reverse_ring->mode = cfg->mode;
+ reverse_ring->asel = cfg->asel;
+ memset(&reverse_ring->state, 0, sizeof(reverse_ring->state));
+ reverse_ring->ops = &k3_dmaring_reverse_ops;
+
+ reverse_ring->ring_mem_virt = ring->ring_mem_virt;
+ reverse_ring->ring_mem_dma = ring->ring_mem_dma;
+ reverse_ring->flags |= K3_RING_FLAG_BUSY;
+ k3_ringacc_ring_dump(reverse_ring);
+
+ return 0;
+
+err_free_mem:
+ dma_free_coherent(ring->dma_dev,
+ ring->size * (4 << ring->elm_size),
+ ring->ring_mem_virt,
+ ring->ring_mem_dma);
+err_free_ops:
+ ring->ops = NULL;
+ ring->proxy = NULL;
+ ring->dma_dev = NULL;
+ ring->asel = 0;
+ return ret;
+}
+
int k3_ringacc_ring_cfg(struct k3_ring *ring, struct k3_ring_cfg *cfg)
{
struct k3_ringacc *ringacc;
@@ -596,8 +770,12 @@ int k3_ringacc_ring_cfg(struct k3_ring *ring, struct k3_ring_cfg *cfg)
if (!ring || !cfg)
return -EINVAL;
+
ringacc = ring->parent;
+ if (ringacc->dma_rings)
+ return k3_dmaring_cfg(ring, cfg);
+
if (cfg->elm_size > K3_RINGACC_RING_ELSIZE_256 ||
cfg->mode >= K3_RINGACC_RING_MODE_INVALID ||
cfg->size & ~K3_RINGACC_CFG_RING_SIZE_ELCNT_MASK ||
@@ -704,7 +882,7 @@ u32 k3_ringacc_ring_get_free(struct k3_ring *ring)
return -EINVAL;
if (!ring->state.free)
- ring->state.free = ring->size - readl(&ring->rt->occ);
+ ring->state.free = ring->size - k3_ringacc_ring_read_occ(ring);
return ring->state.free;
}
@@ -715,7 +893,7 @@ u32 k3_ringacc_ring_get_occ(struct k3_ring *ring)
if (!ring || !(ring->flags & K3_RING_FLAG_BUSY))
return -EINVAL;
- return readl(&ring->rt->occ);
+ return k3_ringacc_ring_read_occ(ring);
}
EXPORT_SYMBOL_GPL(k3_ringacc_ring_get_occ);
@@ -891,6 +1069,72 @@ static int k3_ringacc_ring_pop_tail_io(struct k3_ring *ring, void *elem)
K3_RINGACC_ACCESS_MODE_POP_HEAD);
}
+/*
+ * The element is 48 bits of address + ASEL bits in the ring.
+ * ASEL is used by the DMAs and should be removed for the kernel as it is not
+ * part of the physical memory address.
+ */
+static void k3_dmaring_remove_asel_from_elem(u64 *elem)
+{
+ *elem &= GENMASK_ULL(K3_ADDRESS_ASEL_SHIFT - 1, 0);
+}
+
+static int k3_dmaring_fwd_pop(struct k3_ring *ring, void *elem)
+{
+ void *elem_ptr;
+ u32 elem_idx;
+
+ /*
+ * DMA rings: the forward ring is always tied to a DMA channel and the HW
+ * does not maintain any state data required for the POP operation; it is
+ * unknown how many elements were consumed by the HW. So, to actually
+ * do a POP, the read pointer has to be recalculated every time.
+ */
+ ring->state.occ = k3_ringacc_ring_read_occ(ring);
+ if (ring->state.windex >= ring->state.occ)
+ elem_idx = ring->state.windex - ring->state.occ;
+ else
+ elem_idx = ring->size - (ring->state.occ - ring->state.windex);
+
+ elem_ptr = k3_ringacc_get_elm_addr(ring, elem_idx);
+ memcpy(elem, elem_ptr, (4 << ring->elm_size));
+ k3_dmaring_remove_asel_from_elem(elem);
+
+ ring->state.occ--;
+ writel(-1, &ring->rt->db);
+
+ dev_dbg(ring->parent->dev, "%s: occ%d Windex%d Rindex%d pos_ptr%px\n",
+ __func__, ring->state.occ, ring->state.windex, elem_idx,
+ elem_ptr);
+ return 0;
+}
+
+static int k3_dmaring_reverse_pop(struct k3_ring *ring, void *elem)
+{
+ void *elem_ptr;
+
+ elem_ptr = k3_ringacc_get_elm_addr(ring, ring->state.rindex);
+
+ if (ring->state.occ) {
+ memcpy(elem, elem_ptr, (4 << ring->elm_size));
+ k3_dmaring_remove_asel_from_elem(elem);
+
+ ring->state.rindex = (ring->state.rindex + 1) % ring->size;
+ ring->state.occ--;
+ writel(-1 & K3_DMARING_RT_DB_ENTRY_MASK, &ring->rt->db);
+ } else if (ring->state.tdown_complete) {
+ dma_addr_t *value = elem;
+
+ *value = CPPI5_TDCM_MARKER;
+ writel(K3_DMARING_RT_DB_TDOWN_ACK, &ring->rt->db);
+ ring->state.tdown_complete = false;
+ }
+
+ dev_dbg(ring->parent->dev, "%s: occ%d index%d pos_ptr%px\n",
+ __func__, ring->state.occ, ring->state.rindex, elem_ptr);
+ return 0;
+}
+
static int k3_ringacc_ring_push_mem(struct k3_ring *ring, void *elem)
{
void *elem_ptr;
@@ -898,6 +1142,11 @@ static int k3_ringacc_ring_push_mem(struct k3_ring *ring, void *elem)
elem_ptr = k3_ringacc_get_elm_addr(ring, ring->state.windex);
memcpy(elem_ptr, elem, (4 << ring->elm_size));
+ if (ring->parent->dma_rings) {
+ u64 *addr = elem_ptr;
+
+ *addr |= ((u64)ring->asel << K3_ADDRESS_ASEL_SHIFT);
+ }
ring->state.windex = (ring->state.windex + 1) % ring->size;
ring->state.free--;
@@ -974,12 +1223,12 @@ int k3_ringacc_ring_pop(struct k3_ring *ring, void *elem)
return -EINVAL;
if (!ring->state.occ)
- ring->state.occ = k3_ringacc_ring_get_occ(ring);
+ k3_ringacc_ring_update_occ(ring);
dev_dbg(ring->parent->dev, "ring_pop: occ%d index%d\n", ring->state.occ,
ring->state.rindex);
- if (!ring->state.occ)
+ if (!ring->state.occ && !ring->state.tdown_complete)
return -ENODATA;
if (ring->ops && ring->ops->pop_head)
@@ -997,7 +1246,7 @@ int k3_ringacc_ring_pop_tail(struct k3_ring *ring, void *elem)
return -EINVAL;
if (!ring->state.occ)
- ring->state.occ = k3_ringacc_ring_get_occ(ring);
+ k3_ringacc_ring_update_occ(ring);
dev_dbg(ring->parent->dev, "ring_pop_tail: occ%d index%d\n",
ring->state.occ, ring->state.rindex);
@@ -1202,6 +1451,68 @@ static const struct of_device_id k3_ringacc_of_match[] = {
{},
};
+struct k3_ringacc *k3_ringacc_dmarings_init(struct platform_device *pdev,
+ struct k3_ringacc_init_data *data)
+{
+ struct device *dev = &pdev->dev;
+ struct k3_ringacc *ringacc;
+ void __iomem *base_rt;
+ struct resource *res;
+ int i;
+
+ ringacc = devm_kzalloc(dev, sizeof(*ringacc), GFP_KERNEL);
+ if (!ringacc)
+ return ERR_PTR(-ENOMEM);
+
+ ringacc->dev = dev;
+ ringacc->dma_rings = true;
+ ringacc->num_rings = data->num_rings;
+ ringacc->tisci = data->tisci;
+ ringacc->tisci_dev_id = data->tisci_dev_id;
+
+ mutex_init(&ringacc->req_lock);
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ringrt");
+ base_rt = devm_ioremap_resource(dev, res);
+ if (IS_ERR(base_rt))
+ return base_rt;
+
+ ringacc->rings = devm_kzalloc(dev,
+ sizeof(*ringacc->rings) *
+ ringacc->num_rings * 2,
+ GFP_KERNEL);
+ ringacc->rings_inuse = devm_kcalloc(dev,
+ BITS_TO_LONGS(ringacc->num_rings),
+ sizeof(unsigned long), GFP_KERNEL);
+
+ if (!ringacc->rings || !ringacc->rings_inuse)
+ return ERR_PTR(-ENOMEM);
+
+ for (i = 0; i < ringacc->num_rings; i++) {
+ struct k3_ring *ring = &ringacc->rings[i];
+
+ ring->rt = base_rt + K3_DMARING_RT_REGS_STEP * i;
+ ring->parent = ringacc;
+ ring->ring_id = i;
+ ring->proxy_id = K3_RINGACC_PROXY_NOT_USED;
+
+ ring = &ringacc->rings[ringacc->num_rings + i];
+ ring->rt = base_rt + K3_DMARING_RT_REGS_STEP * i +
+ K3_DMARING_RT_REGS_REVERSE_OFS;
+ ring->parent = ringacc;
+ ring->ring_id = i;
+ ring->proxy_id = K3_RINGACC_PROXY_NOT_USED;
+ ring->flags = K3_RING_FLAG_REVERSE;
+ }
+
+ ringacc->tisci_ring_ops = &ringacc->tisci->ops.rm_ring_ops;
+
+ dev_info(dev, "Number of rings: %u\n", ringacc->num_rings);
+
+ return ringacc;
+}
+EXPORT_SYMBOL_GPL(k3_ringacc_dmarings_init);
+
static int k3_ringacc_probe(struct platform_device *pdev)
{
const struct ringacc_match_data *match_data;
diff --git a/include/linux/soc/ti/k3-ringacc.h b/include/linux/soc/ti/k3-ringacc.h
index 658dc71d2901..39b022b92598 100644
--- a/include/linux/soc/ti/k3-ringacc.h
+++ b/include/linux/soc/ti/k3-ringacc.h
@@ -70,6 +70,7 @@ struct k3_ring;
* @dma_dev: Master device which is using and accessing to the ring
* memory when the mode is K3_RINGACC_RING_MODE_RING. Memory allocations
* should be done using this device.
+ * @asel: Address Space Select value for physical addresses
*/
struct k3_ring_cfg {
u32 size;
@@ -79,6 +80,7 @@ struct k3_ring_cfg {
u32 flags;
struct device *dma_dev;
+ u32 asel;
};
#define K3_RINGACC_RING_ID_ANY (-1)
@@ -250,4 +252,19 @@ int k3_ringacc_ring_pop_tail(struct k3_ring *ring, void *elem);
u32 k3_ringacc_get_tisci_dev_id(struct k3_ring *ring);
+/* DMA ring support */
+struct ti_sci_handle;
+
+/**
+ * struct k3_ringacc_init_data - Initialization data for DMA rings
+ */
+struct k3_ringacc_init_data {
+ const struct ti_sci_handle *tisci;
+ u32 tisci_dev_id;
+ u32 num_rings;
+};
+
+struct k3_ringacc *k3_ringacc_dmarings_init(struct platform_device *pdev,
+ struct k3_ringacc_init_data *data);
+
#endif /* __SOC_TI_K3_RINGACC_API_H_ */
--
Peter
In the K3 architecture a DMA channel (in TR mode) can be triggered by global
events, originating from different modules.
The events for the triggers can be sent from any module which is connected to
the PSI-L fabric, but the event number to be sent is DMA channel specific; it
is only known after the channel itself has been requested.
The router operation needs to be split up:
- route_allocate: configure the dma_spec for the DMA and store the
configuration which is needed for the router's input
- set_event: callback used by the DMA driver to set the event number for
the channel and enable the routing
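A router driver is expected to fill in a struct k3_event_route_data during
route_allocate and hand it back as route_data; a sketch with a hypothetical
router (the foo_router struct, its regmap layout and field names are made up):

	#include <linux/dma/k3-event-router.h>

	static int foo_router_set_event(void *priv, u32 event)
	{
		struct foo_router *router = priv;

		/* Program the mux to emit 'event' when the module triggers */
		return regmap_write(router->regmap, router->mux_reg, event);
	}

	/* in route_allocate(): */
	rd->priv = router;
	rd->set_event = foo_router_set_event;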
Signed-off-by: Peter Ujfalusi <[email protected]>
---
include/linux/dma/k3-event-router.h | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
create mode 100644 include/linux/dma/k3-event-router.h
diff --git a/include/linux/dma/k3-event-router.h b/include/linux/dma/k3-event-router.h
new file mode 100644
index 000000000000..e3f88b2f87be
--- /dev/null
+++ b/include/linux/dma/k3-event-router.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Texas Instruments Incorporated - https://www.ti.com
+ */
+
+#ifndef K3_EVENT_ROUTER_
+#define K3_EVENT_ROUTER_
+
+#include <linux/types.h>
+
+struct k3_event_route_data {
+ void *priv;
+ int (*set_event)(void *priv, u32 event);
+};
+
+#endif /* K3_EVENT_ROUTER_ */
--
Peter
One of the DMAs introduced with AM64 is the Block Copy DMA (BCDMA).
It serves a similar purpose as K3 UDMAP channels in TR mode.
The rings for the BCDMA are integrated within the DMA itself instead of
using rings from the general purpose ringacc.
A BCDMA has two different types of channels:
- Block Copy Channels (bchan)
- Split Channels (tchan and rchan)
tchan and rchan can be used to service PSI-L peripherals, similarly to
K3 UDMA channels.
bchan can only be used for block copy operations (TR type15), like the
paired K3 UDMA tchan/rchan configured in block copy mode.
bchans can also be used to service peripherals directly if an external
trigger is selected for the channel.
Most of the driver code can be reused for BCDMA bchan/tchan/rchan support,
but new setup and allocation functions are needed to handle the differences
between the DMAs.
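From a client's point of view a bchan behaves like any other dmaengine memcpy
channel; a minimal sketch using only the generic dmaengine API (no BCDMA
specific calls involved, error handling shortened):

	dma_cap_mask_t mask;
	struct dma_chan *chan;
	struct dma_async_tx_descriptor *tx;
	dma_addr_t src, dst;	/* already DMA mapped by the client */
	size_t len;

	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);

	chan = dma_request_chan_by_mask(&mask);
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	tx = dmaengine_prep_dma_memcpy(chan, dst, src, len,
				       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
	if (!tx) {
		dma_release_channel(chan);
		return -ENOMEM;
	}

	/* submit and kick as usual */
	dmaengine_submit(tx);
	dma_async_issue_pending(chan);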
Signed-off-by: Peter Ujfalusi <[email protected]>
---
drivers/dma/ti/k3-udma.c | 1386 ++++++++++++++++++++++++++++++++++----
drivers/dma/ti/k3-udma.h | 12 +-
2 files changed, 1247 insertions(+), 151 deletions(-)
diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
index 1ae5d09e2059..a342e89a4bae 100644
--- a/drivers/dma/ti/k3-udma.c
+++ b/drivers/dma/ti/k3-udma.c
@@ -26,6 +26,7 @@
#include <linux/soc/ti/k3-ringacc.h>
#include <linux/soc/ti/ti_sci_protocol.h>
#include <linux/soc/ti/ti_sci_inta_msi.h>
+#include <linux/dma/k3-event-router.h>
#include <linux/dma/ti-cppi5.h>
#include "../virt-dma.h"
@@ -55,14 +56,25 @@ struct udma_static_tr {
struct udma_chan;
+enum k3_dma_type {
+ DMA_TYPE_UDMA = 0,
+ DMA_TYPE_BCDMA,
+};
+
enum udma_mmr {
MMR_GCFG = 0,
+ MMR_BCHANRT,
MMR_RCHANRT,
MMR_TCHANRT,
MMR_LAST,
};
-static const char * const mmr_names[] = { "gcfg", "rchanrt", "tchanrt" };
+static const char * const mmr_names[] = {
+ [MMR_GCFG] = "gcfg",
+ [MMR_BCHANRT] = "bchanrt",
+ [MMR_RCHANRT] = "rchanrt",
+ [MMR_TCHANRT] = "tchanrt",
+};
struct udma_tchan {
void __iomem *reg_rt;
@@ -72,6 +84,8 @@ struct udma_tchan {
struct k3_ring *tc_ring; /* Transmit Completion ring */
};
+#define udma_bchan udma_tchan
+
struct udma_rflow {
int id;
struct k3_ring *fd_ring; /* Free Descriptor ring */
@@ -84,11 +98,25 @@ struct udma_rchan {
int id;
};
+struct udma_oes_offsets {
+ /* K3 UDMA Output Event Offset */
+ u32 udma_rchan;
+
+ /* BCDMA Output Event Offsets */
+ u32 bcdma_bchan_data;
+ u32 bcdma_bchan_ring;
+ u32 bcdma_tchan_data;
+ u32 bcdma_tchan_ring;
+ u32 bcdma_rchan_data;
+ u32 bcdma_rchan_ring;
+};
+
#define UDMA_FLAG_PDMA_ACC32 BIT(0)
#define UDMA_FLAG_PDMA_BURST BIT(1)
#define UDMA_FLAG_TDTYPE BIT(2)
struct udma_match_data {
+ enum k3_dma_type type;
u32 psil_base;
bool enable_memcpy_support;
u32 flags;
@@ -96,7 +124,8 @@ struct udma_match_data {
};
struct udma_soc_data {
- u32 rchan_oes_offset;
+ struct udma_oes_offsets oes;
+ u32 bcdma_trigger_event_offset;
};
struct udma_hwdesc {
@@ -139,16 +168,19 @@ struct udma_dev {
struct udma_rx_flush rx_flush;
+ int bchan_cnt;
int tchan_cnt;
int echan_cnt;
int rchan_cnt;
int rflow_cnt;
+ unsigned long *bchan_map;
unsigned long *tchan_map;
unsigned long *rchan_map;
unsigned long *rflow_gp_map;
unsigned long *rflow_gp_map_allocated;
unsigned long *rflow_in_use;
+ struct udma_bchan *bchans;
struct udma_tchan *tchans;
struct udma_rchan *rchans;
struct udma_rflow *rflows;
@@ -156,6 +188,7 @@ struct udma_dev {
struct udma_chan *channels;
u32 psil_base;
u32 atype;
+ u32 asel;
};
struct udma_desc {
@@ -200,6 +233,7 @@ struct udma_chan_config {
bool notdpkt; /* Suppress sending TDC packet */
int remote_thread_id;
u32 atype;
+ u32 asel;
u32 src_thread;
u32 dst_thread;
enum psil_endpoint_type ep_type;
@@ -207,6 +241,8 @@ struct udma_chan_config {
bool enable_burst;
enum udma_tp_level channel_tpl; /* Channel Throughput Level */
+ u32 tr_trigger_type;
+
enum dma_transfer_direction dir;
};
@@ -214,11 +250,13 @@ struct udma_chan {
struct virt_dma_chan vc;
struct dma_slave_config cfg;
struct udma_dev *ud;
+ struct device *dma_dev;
struct udma_desc *desc;
struct udma_desc *terminated_desc;
struct udma_static_tr static_tr;
char *name;
+ struct udma_bchan *bchan;
struct udma_tchan *tchan;
struct udma_rchan *rchan;
struct udma_rflow *rflow;
@@ -354,6 +392,30 @@ static int navss_psil_unpair(struct udma_dev *ud, u32 src_thread,
src_thread, dst_thread);
}
+static void k3_configure_chan_coherency(struct dma_chan *chan, u32 asel)
+{
+ struct device *chan_dev = &chan->dev->device;
+
+ if (asel == 0) {
+ /* No special handling for the channel */
+ chan->dev->chan_dma_dev = false;
+
+ chan_dev->dma_coherent = false;
+ chan_dev->dma_parms = NULL;
+ } else if (asel == 14 || asel == 15) {
+ chan->dev->chan_dma_dev = true;
+
+ chan_dev->dma_coherent = true;
+ dma_coerce_mask_and_coherent(chan_dev, DMA_BIT_MASK(48));
+ chan_dev->dma_parms = chan_dev->parent->dma_parms;
+ } else {
+ dev_warn(chan->device->dev, "Invalid ASEL value: %u\n", asel);
+
+ chan_dev->dma_coherent = false;
+ chan_dev->dma_parms = NULL;
+ }
+}
+
static void udma_reset_uchan(struct udma_chan *uc)
{
memset(&uc->config, 0, sizeof(uc->config));
@@ -440,9 +502,7 @@ static void udma_free_hwdesc(struct udma_chan *uc, struct udma_desc *d)
d->hwdesc[i].cppi5_desc_vaddr = NULL;
}
} else if (d->hwdesc[0].cppi5_desc_vaddr) {
- struct udma_dev *ud = uc->ud;
-
- dma_free_coherent(ud->dev, d->hwdesc[0].cppi5_desc_size,
+ dma_free_coherent(uc->dma_dev, d->hwdesc[0].cppi5_desc_size,
d->hwdesc[0].cppi5_desc_vaddr,
d->hwdesc[0].cppi5_desc_paddr);
@@ -671,8 +731,10 @@ static void udma_reset_counters(struct udma_chan *uc)
val = udma_tchanrt_read(uc, UDMA_CHAN_RT_PCNT_REG);
udma_tchanrt_write(uc, UDMA_CHAN_RT_PCNT_REG, val);
- val = udma_tchanrt_read(uc, UDMA_CHAN_RT_PEER_BCNT_REG);
- udma_tchanrt_write(uc, UDMA_CHAN_RT_PEER_BCNT_REG, val);
+ if (!uc->bchan) {
+ val = udma_tchanrt_read(uc, UDMA_CHAN_RT_PEER_BCNT_REG);
+ udma_tchanrt_write(uc, UDMA_CHAN_RT_PEER_BCNT_REG, val);
+ }
}
if (uc->rchan) {
@@ -1235,6 +1297,42 @@ static struct udma_##res *__udma_reserve_##res(struct udma_dev *ud, \
UDMA_RESERVE_RESOURCE(tchan);
UDMA_RESERVE_RESOURCE(rchan);
+static struct udma_bchan *__bcdma_reserve_bchan(struct udma_dev *ud, int id)
+{
+ if (id >= 0) {
+ if (test_bit(id, ud->bchan_map)) {
+ dev_err(ud->dev, "bchan%d is in use\n", id);
+ return ERR_PTR(-ENOENT);
+ }
+ } else {
+ id = find_next_zero_bit(ud->bchan_map, ud->bchan_cnt, 0);
+ if (id == ud->bchan_cnt)
+ return ERR_PTR(-ENOENT);
+ }
+
+ set_bit(id, ud->bchan_map);
+ return &ud->bchans[id];
+}
+
+static int bcdma_get_bchan(struct udma_chan *uc)
+{
+ struct udma_dev *ud = uc->ud;
+
+ if (uc->bchan) {
+ dev_dbg(ud->dev, "chan%d: already have bchan%d allocated\n",
+ uc->id, uc->bchan->id);
+ return 0;
+ }
+
+ uc->bchan = __bcdma_reserve_bchan(ud, -1);
+ if (IS_ERR(uc->bchan))
+ return PTR_ERR(uc->bchan);
+
+ uc->tchan = uc->bchan;
+
+ return 0;
+}
+
static int udma_get_tchan(struct udma_chan *uc)
{
struct udma_dev *ud = uc->ud;
@@ -1327,6 +1425,19 @@ static int udma_get_rflow(struct udma_chan *uc, int flow_id)
return PTR_ERR_OR_ZERO(uc->rflow);
}
+static void bcdma_put_bchan(struct udma_chan *uc)
+{
+ struct udma_dev *ud = uc->ud;
+
+ if (uc->bchan) {
+ dev_dbg(ud->dev, "chan%d: put bchan%d\n", uc->id,
+ uc->bchan->id);
+ clear_bit(uc->bchan->id, ud->bchan_map);
+ uc->bchan = NULL;
+ uc->tchan = NULL;
+ }
+}
+
static void udma_put_rchan(struct udma_chan *uc)
{
struct udma_dev *ud = uc->ud;
@@ -1363,6 +1474,65 @@ static void udma_put_rflow(struct udma_chan *uc)
}
}
+static void bcdma_free_bchan_resources(struct udma_chan *uc)
+{
+ if (!uc->bchan)
+ return;
+
+ k3_ringacc_ring_free(uc->bchan->tc_ring);
+ k3_ringacc_ring_free(uc->bchan->t_ring);
+ uc->bchan->tc_ring = NULL;
+ uc->bchan->t_ring = NULL;
+ k3_configure_chan_coherency(&uc->vc.chan, 0);
+
+ bcdma_put_bchan(uc);
+}
+
+static int bcdma_alloc_bchan_resources(struct udma_chan *uc)
+{
+ struct k3_ring_cfg ring_cfg;
+ struct udma_dev *ud = uc->ud;
+ int ret;
+
+ ret = bcdma_get_bchan(uc);
+ if (ret)
+ return ret;
+
+ ret = k3_ringacc_request_rings_pair(ud->ringacc, uc->bchan->id, -1,
+ &uc->bchan->t_ring,
+ &uc->bchan->tc_ring);
+ if (ret) {
+ ret = -EBUSY;
+ goto err_ring;
+ }
+
+ memset(&ring_cfg, 0, sizeof(ring_cfg));
+ ring_cfg.size = K3_UDMA_DEFAULT_RING_SIZE;
+ ring_cfg.elm_size = K3_RINGACC_RING_ELSIZE_8;
+ ring_cfg.mode = K3_RINGACC_RING_MODE_RING;
+
+ k3_configure_chan_coherency(&uc->vc.chan, ud->asel);
+ ring_cfg.asel = ud->asel;
+ ring_cfg.dma_dev = dmaengine_get_dma_device(&uc->vc.chan);
+
+ ret = k3_ringacc_ring_cfg(uc->bchan->t_ring, &ring_cfg);
+ if (ret)
+ goto err_ringcfg;
+
+ return 0;
+
+err_ringcfg:
+ k3_ringacc_ring_free(uc->bchan->tc_ring);
+ uc->bchan->tc_ring = NULL;
+ k3_ringacc_ring_free(uc->bchan->t_ring);
+ uc->bchan->t_ring = NULL;
+ k3_configure_chan_coherency(&uc->vc.chan, 0);
+err_ring:
+ bcdma_put_bchan(uc);
+
+ return ret;
+}
+
static void udma_free_tx_resources(struct udma_chan *uc)
{
if (!uc->tchan)
@@ -1380,15 +1550,19 @@ static int udma_alloc_tx_resources(struct udma_chan *uc)
{
struct k3_ring_cfg ring_cfg;
struct udma_dev *ud = uc->ud;
- int ret;
+ struct udma_tchan *tchan;
+ int ring_idx, ret;
ret = udma_get_tchan(uc);
if (ret)
return ret;
- ret = k3_ringacc_request_rings_pair(ud->ringacc, uc->tchan->id, -1,
- &uc->tchan->t_ring,
- &uc->tchan->tc_ring);
+ tchan = uc->tchan;
+ ring_idx = ud->bchan_cnt + tchan->id;
+
+ ret = k3_ringacc_request_rings_pair(ud->ringacc, ring_idx, -1,
+ &tchan->t_ring,
+ &tchan->tc_ring);
if (ret) {
ret = -EBUSY;
goto err_ring;
@@ -1397,10 +1571,18 @@ static int udma_alloc_tx_resources(struct udma_chan *uc)
memset(&ring_cfg, 0, sizeof(ring_cfg));
ring_cfg.size = K3_UDMA_DEFAULT_RING_SIZE;
ring_cfg.elm_size = K3_RINGACC_RING_ELSIZE_8;
- ring_cfg.mode = K3_RINGACC_RING_MODE_MESSAGE;
+ if (ud->match_data->type == DMA_TYPE_UDMA) {
+ ring_cfg.mode = K3_RINGACC_RING_MODE_MESSAGE;
+ } else {
+ ring_cfg.mode = K3_RINGACC_RING_MODE_RING;
+
+ k3_configure_chan_coherency(&uc->vc.chan, uc->config.asel);
+ ring_cfg.asel = uc->config.asel;
+ ring_cfg.dma_dev = dmaengine_get_dma_device(&uc->vc.chan);
+ }
- ret = k3_ringacc_ring_cfg(uc->tchan->t_ring, &ring_cfg);
- ret |= k3_ringacc_ring_cfg(uc->tchan->tc_ring, &ring_cfg);
+ ret = k3_ringacc_ring_cfg(tchan->t_ring, &ring_cfg);
+ ret |= k3_ringacc_ring_cfg(tchan->tc_ring, &ring_cfg);
if (ret)
goto err_ringcfg;
@@ -1460,7 +1642,8 @@ static int udma_alloc_rx_resources(struct udma_chan *uc)
}
rflow = uc->rflow;
- fd_ring_id = ud->tchan_cnt + ud->echan_cnt + uc->rchan->id;
+ fd_ring_id = ud->bchan_cnt + ud->tchan_cnt + ud->echan_cnt +
+ uc->rchan->id;
ret = k3_ringacc_request_rings_pair(ud->ringacc, fd_ring_id, -1,
&rflow->fd_ring, &rflow->r_ring);
if (ret) {
@@ -1470,15 +1653,25 @@ static int udma_alloc_rx_resources(struct udma_chan *uc)
memset(&ring_cfg, 0, sizeof(ring_cfg));
- if (uc->config.pkt_mode)
- ring_cfg.size = SG_MAX_SEGMENTS;
- else
+ ring_cfg.elm_size = K3_RINGACC_RING_ELSIZE_8;
+ if (ud->match_data->type == DMA_TYPE_UDMA) {
+ if (uc->config.pkt_mode)
+ ring_cfg.size = SG_MAX_SEGMENTS;
+ else
+ ring_cfg.size = K3_UDMA_DEFAULT_RING_SIZE;
+
+ ring_cfg.mode = K3_RINGACC_RING_MODE_MESSAGE;
+ } else {
ring_cfg.size = K3_UDMA_DEFAULT_RING_SIZE;
+ ring_cfg.mode = K3_RINGACC_RING_MODE_RING;
- ring_cfg.elm_size = K3_RINGACC_RING_ELSIZE_8;
- ring_cfg.mode = K3_RINGACC_RING_MODE_MESSAGE;
+ k3_configure_chan_coherency(&uc->vc.chan, uc->config.asel);
+ ring_cfg.asel = uc->config.asel;
+ ring_cfg.dma_dev = dmaengine_get_dma_device(&uc->vc.chan);
+ }
ret = k3_ringacc_ring_cfg(rflow->fd_ring, &ring_cfg);
+
ring_cfg.size = K3_UDMA_DEFAULT_RING_SIZE;
ret |= k3_ringacc_ring_cfg(rflow->r_ring, &ring_cfg);
@@ -1500,7 +1693,18 @@ static int udma_alloc_rx_resources(struct udma_chan *uc)
return ret;
}
-#define TISCI_TCHAN_VALID_PARAMS ( \
+#define TISCI_BCDMA_BCHAN_VALID_PARAMS ( \
+ TI_SCI_MSG_VALUE_RM_UDMAP_CH_PAUSE_ON_ERR_VALID | \
+ TI_SCI_MSG_VALUE_RM_UDMAP_CH_EXTENDED_CH_TYPE_VALID)
+
+#define TISCI_BCDMA_TCHAN_VALID_PARAMS ( \
+ TI_SCI_MSG_VALUE_RM_UDMAP_CH_PAUSE_ON_ERR_VALID | \
+ TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_SUPR_TDPKT_VALID)
+
+#define TISCI_BCDMA_RCHAN_VALID_PARAMS ( \
+ TI_SCI_MSG_VALUE_RM_UDMAP_CH_PAUSE_ON_ERR_VALID)
+
+#define TISCI_UDMA_TCHAN_VALID_PARAMS ( \
TI_SCI_MSG_VALUE_RM_UDMAP_CH_PAUSE_ON_ERR_VALID | \
TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_FILT_EINFO_VALID | \
TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_FILT_PSWORDS_VALID | \
@@ -1510,7 +1714,7 @@ static int udma_alloc_rx_resources(struct udma_chan *uc)
TI_SCI_MSG_VALUE_RM_UDMAP_CH_CQ_QNUM_VALID | \
TI_SCI_MSG_VALUE_RM_UDMAP_CH_ATYPE_VALID)
-#define TISCI_RCHAN_VALID_PARAMS ( \
+#define TISCI_UDMA_RCHAN_VALID_PARAMS ( \
TI_SCI_MSG_VALUE_RM_UDMAP_CH_PAUSE_ON_ERR_VALID | \
TI_SCI_MSG_VALUE_RM_UDMAP_CH_FETCH_SIZE_VALID | \
TI_SCI_MSG_VALUE_RM_UDMAP_CH_CQ_QNUM_VALID | \
@@ -1535,7 +1739,7 @@ static int udma_tisci_m2m_channel_config(struct udma_chan *uc)
struct ti_sci_msg_rm_udmap_tx_ch_cfg req_tx = { 0 };
struct ti_sci_msg_rm_udmap_rx_ch_cfg req_rx = { 0 };
- req_tx.valid_params = TISCI_TCHAN_VALID_PARAMS;
+ req_tx.valid_params = TISCI_UDMA_TCHAN_VALID_PARAMS;
req_tx.nav_id = tisci_rm->tisci_dev_id;
req_tx.index = tchan->id;
req_tx.tx_chan_type = TI_SCI_RM_UDMAP_CHAN_TYPE_3RDP_BCOPY_PBRR;
@@ -1549,7 +1753,7 @@ static int udma_tisci_m2m_channel_config(struct udma_chan *uc)
return ret;
}
- req_rx.valid_params = TISCI_RCHAN_VALID_PARAMS;
+ req_rx.valid_params = TISCI_UDMA_RCHAN_VALID_PARAMS;
req_rx.nav_id = tisci_rm->tisci_dev_id;
req_rx.index = rchan->id;
req_rx.rx_fetch_size = sizeof(struct cppi5_desc_hdr_t) >> 2;
@@ -1564,6 +1768,27 @@ static int udma_tisci_m2m_channel_config(struct udma_chan *uc)
return ret;
}
+static int bcdma_tisci_m2m_channel_config(struct udma_chan *uc)
+{
+ struct udma_dev *ud = uc->ud;
+ struct udma_tisci_rm *tisci_rm = &ud->tisci_rm;
+ const struct ti_sci_rm_udmap_ops *tisci_ops = tisci_rm->tisci_udmap_ops;
+ struct ti_sci_msg_rm_udmap_tx_ch_cfg req_tx = { 0 };
+ struct udma_bchan *bchan = uc->bchan;
+ int ret = 0;
+
+ req_tx.valid_params = TISCI_BCDMA_BCHAN_VALID_PARAMS;
+ req_tx.nav_id = tisci_rm->tisci_dev_id;
+ req_tx.extended_ch_type = TI_SCI_RM_BCDMA_EXTENDED_CH_TYPE_BCHAN;
+ req_tx.index = bchan->id;
+
+ ret = tisci_ops->tx_ch_cfg(tisci_rm->tisci, &req_tx);
+ if (ret)
+ dev_err(ud->dev, "bchan%d cfg failed %d\n", bchan->id, ret);
+
+ return ret;
+}
+
static int udma_tisci_tx_channel_config(struct udma_chan *uc)
{
struct udma_dev *ud = uc->ud;
@@ -1584,7 +1809,7 @@ static int udma_tisci_tx_channel_config(struct udma_chan *uc)
fetch_size = sizeof(struct cppi5_desc_hdr_t);
}
- req_tx.valid_params = TISCI_TCHAN_VALID_PARAMS;
+ req_tx.valid_params = TISCI_UDMA_TCHAN_VALID_PARAMS;
req_tx.nav_id = tisci_rm->tisci_dev_id;
req_tx.index = tchan->id;
req_tx.tx_chan_type = mode;
@@ -1607,6 +1832,33 @@ static int udma_tisci_tx_channel_config(struct udma_chan *uc)
return ret;
}
+static int bcdma_tisci_tx_channel_config(struct udma_chan *uc)
+{
+ struct udma_dev *ud = uc->ud;
+ struct udma_tisci_rm *tisci_rm = &ud->tisci_rm;
+ const struct ti_sci_rm_udmap_ops *tisci_ops = tisci_rm->tisci_udmap_ops;
+ struct udma_tchan *tchan = uc->tchan;
+ struct ti_sci_msg_rm_udmap_tx_ch_cfg req_tx = { 0 };
+ int ret = 0;
+
+ req_tx.valid_params = TISCI_BCDMA_TCHAN_VALID_PARAMS;
+ req_tx.nav_id = tisci_rm->tisci_dev_id;
+ req_tx.index = tchan->id;
+ req_tx.tx_supr_tdpkt = uc->config.notdpkt;
+ if (ud->match_data->flags & UDMA_FLAG_TDTYPE) {
+ /* wait for peer to complete the teardown for PDMAs */
+ req_tx.valid_params |=
+ TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_TDTYPE_VALID;
+ req_tx.tx_tdtype = 1;
+ }
+
+ ret = tisci_ops->tx_ch_cfg(tisci_rm->tisci, &req_tx);
+ if (ret)
+ dev_err(ud->dev, "tchan%d cfg failed %d\n", tchan->id, ret);
+
+ return ret;
+}
+
static int udma_tisci_rx_channel_config(struct udma_chan *uc)
{
struct udma_dev *ud = uc->ud;
@@ -1629,7 +1881,7 @@ static int udma_tisci_rx_channel_config(struct udma_chan *uc)
fetch_size = sizeof(struct cppi5_desc_hdr_t);
}
- req_rx.valid_params = TISCI_RCHAN_VALID_PARAMS;
+ req_rx.valid_params = TISCI_UDMA_RCHAN_VALID_PARAMS;
req_rx.nav_id = tisci_rm->tisci_dev_id;
req_rx.index = rchan->id;
req_rx.rx_fetch_size = fetch_size >> 2;
@@ -1688,6 +1940,26 @@ static int udma_tisci_rx_channel_config(struct udma_chan *uc)
return 0;
}
+static int bcdma_tisci_rx_channel_config(struct udma_chan *uc)
+{
+ struct udma_dev *ud = uc->ud;
+ struct udma_tisci_rm *tisci_rm = &ud->tisci_rm;
+ const struct ti_sci_rm_udmap_ops *tisci_ops = tisci_rm->tisci_udmap_ops;
+ struct udma_rchan *rchan = uc->rchan;
+ struct ti_sci_msg_rm_udmap_rx_ch_cfg req_rx = { 0 };
+ int ret = 0;
+
+ req_rx.valid_params = TISCI_BCDMA_RCHAN_VALID_PARAMS;
+ req_rx.nav_id = tisci_rm->tisci_dev_id;
+ req_rx.index = rchan->id;
+
+ ret = tisci_ops->rx_ch_cfg(tisci_rm->tisci, &req_rx);
+ if (ret)
+ dev_err(ud->dev, "rchan%d cfg failed %d\n", rchan->id, ret);
+
+ return ret;
+}
+
static int udma_alloc_chan_resources(struct dma_chan *chan)
{
struct udma_chan *uc = to_udma_chan(chan);
@@ -1697,6 +1969,8 @@ static int udma_alloc_chan_resources(struct dma_chan *chan)
u32 irq_udma_idx;
int ret;
+ uc->dma_dev = ud->dev;
+
if (uc->config.pkt_mode || uc->config.dir == DMA_MEM_TO_MEM) {
uc->use_dma_pool = true;
/* in case of MEM_TO_MEM we have maximum of two TRs */
@@ -1792,7 +2066,7 @@ static int udma_alloc_chan_resources(struct dma_chan *chan)
K3_PSIL_DST_THREAD_ID_OFFSET;
irq_ring = uc->rflow->r_ring;
- irq_udma_idx = soc_data->rchan_oes_offset + uc->rchan->id;
+ irq_udma_idx = soc_data->oes.udma_rchan + uc->rchan->id;
ret = udma_tisci_rx_channel_config(uc);
break;
@@ -1892,81 +2166,293 @@ static int udma_alloc_chan_resources(struct dma_chan *chan)
return ret;
}
-static int udma_slave_config(struct dma_chan *chan,
- struct dma_slave_config *cfg)
+static int bcdma_alloc_chan_resources(struct dma_chan *chan)
{
struct udma_chan *uc = to_udma_chan(chan);
+ struct udma_dev *ud = to_udma_dev(chan->device);
+ const struct udma_oes_offsets *oes = &ud->soc_data->oes;
+ u32 irq_udma_idx, irq_ring_idx;
+ int ret;
- memcpy(&uc->cfg, cfg, sizeof(uc->cfg));
+ /* Only TR mode is supported */
+ uc->config.pkt_mode = false;
- return 0;
-}
+ /*
+ * Make sure that the completion is in a known state:
+ * No teardown, the channel is idle
+ */
+ reinit_completion(&uc->teardown_completed);
+ complete_all(&uc->teardown_completed);
+ uc->state = UDMA_CHAN_IS_IDLE;
-static struct udma_desc *udma_alloc_tr_desc(struct udma_chan *uc,
- size_t tr_size, int tr_count,
- enum dma_transfer_direction dir)
-{
- struct udma_hwdesc *hwdesc;
- struct cppi5_desc_hdr_t *tr_desc;
- struct udma_desc *d;
- u32 reload_count = 0;
- u32 ring_id;
+ switch (uc->config.dir) {
+ case DMA_MEM_TO_MEM:
+ /* Non synchronized - mem to mem type of transfer */
+ dev_dbg(uc->ud->dev, "%s: chan%d as MEM-to-MEM\n", __func__,
+ uc->id);
- switch (tr_size) {
- case 16:
- case 32:
- case 64:
- case 128:
- break;
- default:
- dev_err(uc->ud->dev, "Unsupported TR size of %zu\n", tr_size);
- return NULL;
- }
+ ret = bcdma_alloc_bchan_resources(uc);
+ if (ret)
+ return ret;
- /* We have only one descriptor containing multiple TRs */
- d = kzalloc(sizeof(*d) + sizeof(d->hwdesc[0]), GFP_NOWAIT);
- if (!d)
- return NULL;
+ irq_ring_idx = uc->bchan->id + oes->bcdma_bchan_ring;
+ irq_udma_idx = uc->bchan->id + oes->bcdma_bchan_data;
- d->sglen = tr_count;
+ ret = bcdma_tisci_m2m_channel_config(uc);
+ break;
+ case DMA_MEM_TO_DEV:
+ /* Slave transfer synchronized - mem to dev (TX) transfer */
+ dev_dbg(uc->ud->dev, "%s: chan%d as MEM-to-DEV\n", __func__,
+ uc->id);
- d->hwdesc_count = 1;
- hwdesc = &d->hwdesc[0];
+ ret = udma_alloc_tx_resources(uc);
+ if (ret) {
+ uc->config.remote_thread_id = -1;
+ return ret;
+ }
- /* Allocate memory for DMA ring descriptor */
- if (uc->use_dma_pool) {
- hwdesc->cppi5_desc_size = uc->config.hdesc_size;
- hwdesc->cppi5_desc_vaddr = dma_pool_zalloc(uc->hdesc_pool,
- GFP_NOWAIT,
- &hwdesc->cppi5_desc_paddr);
- } else {
- hwdesc->cppi5_desc_size = cppi5_trdesc_calc_size(tr_size,
- tr_count);
- hwdesc->cppi5_desc_size = ALIGN(hwdesc->cppi5_desc_size,
- uc->ud->desc_align);
- hwdesc->cppi5_desc_vaddr = dma_alloc_coherent(uc->ud->dev,
- hwdesc->cppi5_desc_size,
- &hwdesc->cppi5_desc_paddr,
- GFP_NOWAIT);
- }
+ uc->config.src_thread = ud->psil_base + uc->tchan->id;
+ uc->config.dst_thread = uc->config.remote_thread_id;
+ uc->config.dst_thread |= K3_PSIL_DST_THREAD_ID_OFFSET;
- if (!hwdesc->cppi5_desc_vaddr) {
- kfree(d);
- return NULL;
- }
+ irq_ring_idx = uc->tchan->id + oes->bcdma_tchan_ring;
+ irq_udma_idx = uc->tchan->id + oes->bcdma_tchan_data;
- /* Start of the TR req records */
- hwdesc->tr_req_base = hwdesc->cppi5_desc_vaddr + tr_size;
- /* Start address of the TR response array */
- hwdesc->tr_resp_base = hwdesc->tr_req_base + tr_size * tr_count;
+ ret = bcdma_tisci_tx_channel_config(uc);
+ break;
+ case DMA_DEV_TO_MEM:
+ /* Slave transfer synchronized - dev to mem (RX) transfer */
+ dev_dbg(uc->ud->dev, "%s: chan%d as DEV-to-MEM\n", __func__,
+ uc->id);
- tr_desc = hwdesc->cppi5_desc_vaddr;
+ ret = udma_alloc_rx_resources(uc);
+ if (ret) {
+ uc->config.remote_thread_id = -1;
+ return ret;
+ }
- if (uc->cyclic)
- reload_count = CPPI5_INFO0_TRDESC_RLDCNT_INFINITE;
+ uc->config.src_thread = uc->config.remote_thread_id;
+ uc->config.dst_thread = (ud->psil_base + uc->rchan->id) |
+ K3_PSIL_DST_THREAD_ID_OFFSET;
- if (dir == DMA_DEV_TO_MEM)
- ring_id = k3_ringacc_get_ring_id(uc->rflow->r_ring);
+ irq_ring_idx = uc->rchan->id + oes->bcdma_rchan_ring;
+ irq_udma_idx = uc->rchan->id + oes->bcdma_rchan_data;
+
+ ret = bcdma_tisci_rx_channel_config(uc);
+ break;
+ default:
+ /* Can not happen */
+ dev_err(uc->ud->dev, "%s: chan%d invalid direction (%u)\n",
+ __func__, uc->id, uc->config.dir);
+ return -EINVAL;
+ }
+
+ /* check if the channel configuration was successful */
+ if (ret)
+ goto err_res_free;
+
+ if (udma_is_chan_running(uc)) {
+ dev_warn(ud->dev, "chan%d: is running!\n", uc->id);
+ udma_reset_chan(uc, false);
+ if (udma_is_chan_running(uc)) {
+ dev_err(ud->dev, "chan%d: won't stop!\n", uc->id);
+ ret = -EBUSY;
+ goto err_res_free;
+ }
+ }
+
+ uc->dma_dev = dmaengine_get_dma_device(chan);
+ if (uc->config.dir == DMA_MEM_TO_MEM && !uc->config.tr_trigger_type) {
+ uc->config.hdesc_size = cppi5_trdesc_calc_size(
+ sizeof(struct cppi5_tr_type15_t), 2);
+
+ uc->hdesc_pool = dma_pool_create(uc->name, ud->ddev.dev,
+ uc->config.hdesc_size,
+ ud->desc_align,
+ 0);
+ if (!uc->hdesc_pool) {
+ dev_err(ud->ddev.dev,
+ "Descriptor pool allocation failed\n");
+ uc->use_dma_pool = false;
+ return -ENOMEM;
+ }
+
+ uc->use_dma_pool = true;
+ } else if (uc->config.dir != DMA_MEM_TO_MEM) {
+ /* PSI-L pairing */
+ ret = navss_psil_pair(ud, uc->config.src_thread,
+ uc->config.dst_thread);
+ if (ret) {
+ dev_err(ud->dev,
+ "PSI-L pairing failed: 0x%04x -> 0x%04x\n",
+ uc->config.src_thread, uc->config.dst_thread);
+ goto err_res_free;
+ }
+
+ uc->psil_paired = true;
+ }
+
+ uc->irq_num_ring = ti_sci_inta_msi_get_virq(ud->dev, irq_ring_idx);
+ if (uc->irq_num_ring <= 0) {
+ dev_err(ud->dev, "Failed to get ring irq (index: %u)\n",
+ irq_ring_idx);
+ ret = -EINVAL;
+ goto err_psi_free;
+ }
+
+ ret = request_irq(uc->irq_num_ring, udma_ring_irq_handler,
+ IRQF_TRIGGER_HIGH, uc->name, uc);
+ if (ret) {
+ dev_err(ud->dev, "chan%d: ring irq request failed\n", uc->id);
+ goto err_irq_free;
+ }
+
+ /* Event from BCDMA (TR events) only needed for slave channels */
+ if (is_slave_direction(uc->config.dir)) {
+ uc->irq_num_udma = ti_sci_inta_msi_get_virq(ud->dev,
+ irq_udma_idx);
+ if (uc->irq_num_udma <= 0) {
+ dev_err(ud->dev, "Failed to get bcdma irq (index: %u)\n",
+ irq_udma_idx);
+ free_irq(uc->irq_num_ring, uc);
+ ret = -EINVAL;
+ goto err_irq_free;
+ }
+
+ ret = request_irq(uc->irq_num_udma, udma_udma_irq_handler, 0,
+ uc->name, uc);
+ if (ret) {
+ dev_err(ud->dev, "chan%d: BCDMA irq request failed\n",
+ uc->id);
+ free_irq(uc->irq_num_ring, uc);
+ goto err_irq_free;
+ }
+ } else {
+ uc->irq_num_udma = 0;
+ }
+
+ udma_reset_rings(uc);
+
+ INIT_DELAYED_WORK_ONSTACK(&uc->tx_drain.work,
+ udma_check_tx_completion);
+ return 0;
+
+err_irq_free:
+ uc->irq_num_ring = 0;
+ uc->irq_num_udma = 0;
+err_psi_free:
+ if (uc->psil_paired)
+ navss_psil_unpair(ud, uc->config.src_thread,
+ uc->config.dst_thread);
+ uc->psil_paired = false;
+err_res_free:
+ bcdma_free_bchan_resources(uc);
+ udma_free_tx_resources(uc);
+ udma_free_rx_resources(uc);
+
+ udma_reset_uchan(uc);
+
+ if (uc->use_dma_pool) {
+ dma_pool_destroy(uc->hdesc_pool);
+ uc->use_dma_pool = false;
+ }
+
+ return ret;
+}
+
+static int bcdma_router_config(struct dma_chan *chan)
+{
+ struct k3_event_route_data *router_data = chan->route_data;
+ struct udma_chan *uc = to_udma_chan(chan);
+ u32 trigger_event;
+
+ if (!uc->bchan)
+ return -EINVAL;
+
+ if (uc->config.tr_trigger_type != 1 && uc->config.tr_trigger_type != 2)
+ return -EINVAL;
+
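+	/*
+	 * Each bchan owns two consecutive global trigger events starting
+	 * at the SoC specific offset; tr_trigger_type 1 or 2 selects the
+	 * first or the second event of the channel.
+	 */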
+ trigger_event = uc->ud->soc_data->bcdma_trigger_event_offset;
+ trigger_event += (uc->bchan->id * 2) + uc->config.tr_trigger_type - 1;
+
+ return router_data->set_event(router_data->priv, trigger_event);
+}
+
+static int udma_slave_config(struct dma_chan *chan,
+ struct dma_slave_config *cfg)
+{
+ struct udma_chan *uc = to_udma_chan(chan);
+
+ memcpy(&uc->cfg, cfg, sizeof(uc->cfg));
+
+ return 0;
+}
+
+static struct udma_desc *udma_alloc_tr_desc(struct udma_chan *uc,
+ size_t tr_size, int tr_count,
+ enum dma_transfer_direction dir)
+{
+ struct udma_hwdesc *hwdesc;
+ struct cppi5_desc_hdr_t *tr_desc;
+ struct udma_desc *d;
+ u32 reload_count = 0;
+ u32 ring_id;
+
+ switch (tr_size) {
+ case 16:
+ case 32:
+ case 64:
+ case 128:
+ break;
+ default:
+ dev_err(uc->ud->dev, "Unsupported TR size of %zu\n", tr_size);
+ return NULL;
+ }
+
+ /* We have only one descriptor containing multiple TRs */
+ d = kzalloc(sizeof(*d) + sizeof(d->hwdesc[0]), GFP_NOWAIT);
+ if (!d)
+ return NULL;
+
+ d->sglen = tr_count;
+
+ d->hwdesc_count = 1;
+ hwdesc = &d->hwdesc[0];
+
+ /* Allocate memory for DMA ring descriptor */
+ if (uc->use_dma_pool) {
+ hwdesc->cppi5_desc_size = uc->config.hdesc_size;
+ hwdesc->cppi5_desc_vaddr = dma_pool_zalloc(uc->hdesc_pool,
+ GFP_NOWAIT,
+ &hwdesc->cppi5_desc_paddr);
+ } else {
+ hwdesc->cppi5_desc_size = cppi5_trdesc_calc_size(tr_size,
+ tr_count);
+ hwdesc->cppi5_desc_size = ALIGN(hwdesc->cppi5_desc_size,
+ uc->ud->desc_align);
+ hwdesc->cppi5_desc_vaddr = dma_alloc_coherent(uc->ud->dev,
+ hwdesc->cppi5_desc_size,
+ &hwdesc->cppi5_desc_paddr,
+ GFP_NOWAIT);
+ }
+
+ if (!hwdesc->cppi5_desc_vaddr) {
+ kfree(d);
+ return NULL;
+ }
+
+ /* Start of the TR req records */
+ hwdesc->tr_req_base = hwdesc->cppi5_desc_vaddr + tr_size;
+ /* Start address of the TR response array */
+ hwdesc->tr_resp_base = hwdesc->tr_req_base + tr_size * tr_count;
+
+ tr_desc = hwdesc->cppi5_desc_vaddr;
+
+ if (uc->cyclic)
+ reload_count = CPPI5_INFO0_TRDESC_RLDCNT_INFINITE;
+
+ if (dir == DMA_DEV_TO_MEM)
+ ring_id = k3_ringacc_get_ring_id(uc->rflow->r_ring);
else
ring_id = k3_ringacc_get_ring_id(uc->tchan->tc_ring);
@@ -2036,6 +2522,7 @@ udma_prep_slave_sg_tr(struct udma_chan *uc, struct scatterlist *sgl,
size_t tr_size;
int num_tr = 0;
int tr_idx = 0;
+ u64 asel;
/* estimate the number of TRs we will need */
for_each_sg(sgl, sgent, sglen, i) {
@@ -2053,6 +2540,11 @@ udma_prep_slave_sg_tr(struct udma_chan *uc, struct scatterlist *sgl,
d->sglen = sglen;
+ if (uc->ud->match_data->type == DMA_TYPE_UDMA)
+ asel = 0;
+ else
+ asel = (u64)uc->config.asel << K3_ADDRESS_ASEL_SHIFT;
+
tr_req = d->hwdesc[0].tr_req_base;
for_each_sg(sgl, sgent, sglen, i) {
dma_addr_t sg_addr = sg_dma_address(sgent);
@@ -2071,6 +2563,7 @@ udma_prep_slave_sg_tr(struct udma_chan *uc, struct scatterlist *sgl,
false, CPPI5_TR_EVENT_SIZE_COMPLETION, 0);
cppi5_tr_csf_set(&tr_req[tr_idx].flags, CPPI5_TR_CSF_SUPR_EVT);
+ sg_addr |= asel;
tr_req[tr_idx].addr = sg_addr;
tr_req[tr_idx].icnt0 = tr0_cnt0;
tr_req[tr_idx].icnt1 = tr0_cnt1;
@@ -2100,6 +2593,205 @@ udma_prep_slave_sg_tr(struct udma_chan *uc, struct scatterlist *sgl,
return d;
}
+static struct udma_desc *
+udma_prep_slave_sg_triggered_tr(struct udma_chan *uc, struct scatterlist *sgl,
+ unsigned int sglen,
+ enum dma_transfer_direction dir,
+ unsigned long tx_flags, void *context)
+{
+ struct scatterlist *sgent;
+ struct cppi5_tr_type15_t *tr_req = NULL;
+ enum dma_slave_buswidth dev_width;
+ u16 tr_cnt0, tr_cnt1;
+ dma_addr_t dev_addr;
+ struct udma_desc *d;
+ unsigned int i;
+ size_t tr_size, sg_len;
+ int num_tr = 0;
+ int tr_idx = 0;
+ u32 burst, trigger_size, port_window;
+ u64 asel;
+
+ if (dir == DMA_DEV_TO_MEM) {
+ dev_addr = uc->cfg.src_addr;
+ dev_width = uc->cfg.src_addr_width;
+ burst = uc->cfg.src_maxburst;
+ port_window = uc->cfg.src_port_window_size;
+ } else if (dir == DMA_MEM_TO_DEV) {
+ dev_addr = uc->cfg.dst_addr;
+ dev_width = uc->cfg.dst_addr_width;
+ burst = uc->cfg.dst_maxburst;
+ port_window = uc->cfg.dst_port_window_size;
+ } else {
+ dev_err(uc->ud->dev, "%s: bad direction?\n", __func__);
+ return NULL;
+ }
+
+ if (!burst)
+ burst = 1;
+
+ if (port_window) {
+ if (port_window != burst) {
+ dev_err(uc->ud->dev,
+ "The burst must be equal to port_window\n");
+ return NULL;
+ }
+
+ tr_cnt0 = dev_width * port_window;
+ tr_cnt1 = 1;
+ } else {
+ tr_cnt0 = dev_width;
+ tr_cnt1 = burst;
+ }
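+	/* Each trigger moves tr_cnt0 * tr_cnt1 bytes; every SG entry must
+	 * be a multiple of this amount.
+	 */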
+ trigger_size = tr_cnt0 * tr_cnt1;
+
+ /* estimate the number of TRs we will need */
+ for_each_sg(sgl, sgent, sglen, i) {
+ sg_len = sg_dma_len(sgent);
+
+ if (sg_len % trigger_size) {
+ dev_err(uc->ud->dev,
+ "Not aligned SG entry (%zu for %u)\n", sg_len,
+ trigger_size);
+ return NULL;
+ }
+
+ if (sg_len / trigger_size < SZ_64K)
+ num_tr++;
+ else
+ num_tr += 2;
+ }
+
+ /* Now allocate and setup the descriptor. */
+ tr_size = sizeof(struct cppi5_tr_type15_t);
+ d = udma_alloc_tr_desc(uc, tr_size, num_tr, dir);
+ if (!d)
+ return NULL;
+
+ d->sglen = sglen;
+
+ if (uc->ud->match_data->type == DMA_TYPE_UDMA) {
+ asel = 0;
+ } else {
+ asel = (u64)uc->config.asel << K3_ADDRESS_ASEL_SHIFT;
+ dev_addr |= asel;
+ }
+
+ tr_req = d->hwdesc[0].tr_req_base;
+ for_each_sg(sgl, sgent, sglen, i) {
+ u16 tr0_cnt2, tr0_cnt3, tr1_cnt2;
+ dma_addr_t sg_addr = sg_dma_address(sgent);
+
+ sg_len = sg_dma_len(sgent);
+ num_tr = udma_get_tr_counters(sg_len / trigger_size, 0,
+ &tr0_cnt2, &tr0_cnt3, &tr1_cnt2);
+ if (num_tr < 0) {
+ dev_err(uc->ud->dev, "size %zu is not supported\n",
+ sg_len);
+ udma_free_hwdesc(uc, d);
+ kfree(d);
+ return NULL;
+ }
+
+ cppi5_tr_init(&tr_req[tr_idx].flags, CPPI5_TR_TYPE15, false,
+ true, CPPI5_TR_EVENT_SIZE_COMPLETION, 0);
+ cppi5_tr_csf_set(&tr_req[tr_idx].flags, CPPI5_TR_CSF_SUPR_EVT);
+ cppi5_tr_set_trigger(&tr_req[tr_idx].flags,
+ uc->config.tr_trigger_type,
+ CPPI5_TR_TRIGGER_TYPE_ICNT2_DEC, 0, 0);
+
+ sg_addr |= asel;
+ if (dir == DMA_DEV_TO_MEM) {
+ tr_req[tr_idx].addr = dev_addr;
+ tr_req[tr_idx].icnt0 = tr_cnt0;
+ tr_req[tr_idx].icnt1 = tr_cnt1;
+ tr_req[tr_idx].icnt2 = tr0_cnt2;
+ tr_req[tr_idx].icnt3 = tr0_cnt3;
+ tr_req[tr_idx].dim1 = (-1) * tr_cnt0;
+
+ tr_req[tr_idx].daddr = sg_addr;
+ tr_req[tr_idx].dicnt0 = tr_cnt0;
+ tr_req[tr_idx].dicnt1 = tr_cnt1;
+ tr_req[tr_idx].dicnt2 = tr0_cnt2;
+ tr_req[tr_idx].dicnt3 = tr0_cnt3;
+ tr_req[tr_idx].ddim1 = tr_cnt0;
+ tr_req[tr_idx].ddim2 = trigger_size;
+ tr_req[tr_idx].ddim3 = trigger_size * tr0_cnt2;
+ } else {
+ tr_req[tr_idx].addr = sg_addr;
+ tr_req[tr_idx].icnt0 = tr_cnt0;
+ tr_req[tr_idx].icnt1 = tr_cnt1;
+ tr_req[tr_idx].icnt2 = tr0_cnt2;
+ tr_req[tr_idx].icnt3 = tr0_cnt3;
+ tr_req[tr_idx].dim1 = tr_cnt0;
+ tr_req[tr_idx].dim2 = trigger_size;
+ tr_req[tr_idx].dim3 = trigger_size * tr0_cnt2;
+
+ tr_req[tr_idx].daddr = dev_addr;
+ tr_req[tr_idx].dicnt0 = tr_cnt0;
+ tr_req[tr_idx].dicnt1 = tr_cnt1;
+ tr_req[tr_idx].dicnt2 = tr0_cnt2;
+ tr_req[tr_idx].dicnt3 = tr0_cnt3;
+ tr_req[tr_idx].ddim1 = (-1) * tr_cnt0;
+ }
+
+ tr_idx++;
+
+ if (num_tr == 2) {
+ cppi5_tr_init(&tr_req[tr_idx].flags, CPPI5_TR_TYPE15,
+ false, true,
+ CPPI5_TR_EVENT_SIZE_COMPLETION, 0);
+ cppi5_tr_csf_set(&tr_req[tr_idx].flags,
+ CPPI5_TR_CSF_SUPR_EVT);
+ cppi5_tr_set_trigger(&tr_req[tr_idx].flags,
+ uc->config.tr_trigger_type,
+ CPPI5_TR_TRIGGER_TYPE_ICNT2_DEC,
+ 0, 0);
+
+ sg_addr += trigger_size * tr0_cnt2 * tr0_cnt3;
+ if (dir == DMA_DEV_TO_MEM) {
+ tr_req[tr_idx].addr = dev_addr;
+ tr_req[tr_idx].icnt0 = tr_cnt0;
+ tr_req[tr_idx].icnt1 = tr_cnt1;
+ tr_req[tr_idx].icnt2 = tr1_cnt2;
+ tr_req[tr_idx].icnt3 = 1;
+ tr_req[tr_idx].dim1 = (-1) * tr_cnt0;
+
+ tr_req[tr_idx].daddr = sg_addr;
+ tr_req[tr_idx].dicnt0 = tr_cnt0;
+ tr_req[tr_idx].dicnt1 = tr_cnt1;
+ tr_req[tr_idx].dicnt2 = tr1_cnt2;
+ tr_req[tr_idx].dicnt3 = 1;
+ tr_req[tr_idx].ddim1 = tr_cnt0;
+ tr_req[tr_idx].ddim2 = trigger_size;
+ } else {
+ tr_req[tr_idx].addr = sg_addr;
+ tr_req[tr_idx].icnt0 = tr_cnt0;
+ tr_req[tr_idx].icnt1 = tr_cnt1;
+ tr_req[tr_idx].icnt2 = tr1_cnt2;
+ tr_req[tr_idx].icnt3 = 1;
+ tr_req[tr_idx].dim1 = tr_cnt0;
+ tr_req[tr_idx].dim2 = trigger_size;
+
+ tr_req[tr_idx].daddr = dev_addr;
+ tr_req[tr_idx].dicnt0 = tr_cnt0;
+ tr_req[tr_idx].dicnt1 = tr_cnt1;
+ tr_req[tr_idx].dicnt2 = tr1_cnt2;
+ tr_req[tr_idx].dicnt3 = 1;
+ tr_req[tr_idx].ddim1 = (-1) * tr_cnt0;
+ }
+ tr_idx++;
+ }
+
+ d->residue += sg_len;
+ }
+
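+	/* Mark the last TR as end of packet, keeping its event suppressed */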
+ cppi5_tr_csf_set(&tr_req[tr_idx - 1].flags,
+ CPPI5_TR_CSF_SUPR_EVT | CPPI5_TR_CSF_EOP);
+
+ return d;
+}
+
static int udma_configure_statictr(struct udma_chan *uc, struct udma_desc *d,
enum dma_slave_buswidth dev_width,
u16 elcnt)
@@ -2341,7 +3033,8 @@ udma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
struct udma_desc *d;
u32 burst;
- if (dir != uc->config.dir) {
+ if (dir != uc->config.dir &&
+ (uc->config.dir == DMA_MEM_TO_MEM && !uc->config.tr_trigger_type)) {
dev_err(chan->device->dev,
"%s: chan%d is for %s, not supporting %s\n",
__func__, uc->id,
@@ -2367,9 +3060,12 @@ udma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
if (uc->config.pkt_mode)
d = udma_prep_slave_sg_pkt(uc, sgl, sglen, dir, tx_flags,
context);
- else
+ else if (is_slave_direction(uc->config.dir))
d = udma_prep_slave_sg_tr(uc, sgl, sglen, dir, tx_flags,
context);
+ else
+ d = udma_prep_slave_sg_triggered_tr(uc, sgl, sglen, dir,
+ tx_flags, context);
if (!d)
return NULL;
@@ -2423,7 +3119,12 @@ udma_prep_dma_cyclic_tr(struct udma_chan *uc, dma_addr_t buf_addr,
return NULL;
tr_req = d->hwdesc[0].tr_req_base;
- period_addr = buf_addr;
+ if (uc->ud->match_data->type == DMA_TYPE_UDMA)
+ period_addr = buf_addr;
+ else
+ period_addr = buf_addr |
+ ((u64)uc->config.asel << K3_ADDRESS_ASEL_SHIFT);
+
for (i = 0; i < periods; i++) {
int tr_idx = i * num_tr;
@@ -2629,6 +3330,11 @@ udma_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
d->tr_idx = 0;
d->residue = len;
+ if (uc->ud->match_data->type != DMA_TYPE_UDMA) {
+ src |= (u64)uc->ud->asel << K3_ADDRESS_ASEL_SHIFT;
+ dest |= (u64)uc->ud->asel << K3_ADDRESS_ASEL_SHIFT;
+ }
+
tr_req = d->hwdesc[0].tr_req_base;
cppi5_tr_init(&tr_req[0].flags, CPPI5_TR_TYPE15, false, true,
@@ -2986,6 +3692,7 @@ static void udma_free_chan_resources(struct dma_chan *chan)
vchan_free_chan_resources(&uc->vc);
tasklet_kill(&uc->vc.task);
+ bcdma_free_bchan_resources(uc);
udma_free_tx_resources(uc);
udma_free_rx_resources(uc);
udma_reset_uchan(uc);
@@ -2997,10 +3704,13 @@ static void udma_free_chan_resources(struct dma_chan *chan)
}
static struct platform_driver udma_driver;
+static struct platform_driver bcdma_driver;
struct udma_filter_param {
int remote_thread_id;
u32 atype;
+ u32 asel;
+ u32 tr_trigger_type;
};
static bool udma_dma_filter_fn(struct dma_chan *chan, void *param)
@@ -3011,7 +3721,8 @@ static bool udma_dma_filter_fn(struct dma_chan *chan, void *param)
struct udma_chan *uc;
struct udma_dev *ud;
- if (chan->device->dev->driver != &udma_driver.driver)
+ if (chan->device->dev->driver != &udma_driver.driver &&
+ chan->device->dev->driver != &bcdma_driver.driver)
return false;
uc = to_udma_chan(chan);
@@ -3025,13 +3736,25 @@ static bool udma_dma_filter_fn(struct dma_chan *chan, void *param)
return false;
}
+ if (filter_param->asel > 15) {
+ dev_err(ud->dev, "Invalid channel asel: %u\n",
+ filter_param->asel);
+ return false;
+ }
+
ucc->remote_thread_id = filter_param->remote_thread_id;
ucc->atype = filter_param->atype;
+ ucc->asel = filter_param->asel;
+ ucc->tr_trigger_type = filter_param->tr_trigger_type;
- if (ucc->remote_thread_id & K3_PSIL_DST_THREAD_ID_OFFSET)
+ if (ucc->tr_trigger_type) {
+ ucc->dir = DMA_MEM_TO_MEM;
+ goto triggered_bchan;
+ } else if (ucc->remote_thread_id & K3_PSIL_DST_THREAD_ID_OFFSET) {
ucc->dir = DMA_MEM_TO_DEV;
- else
+ } else {
ucc->dir = DMA_DEV_TO_MEM;
+ }
ep_config = psil_get_ep_config(ucc->remote_thread_id);
if (IS_ERR(ep_config)) {
@@ -3040,6 +3763,19 @@ static bool udma_dma_filter_fn(struct dma_chan *chan, void *param)
ucc->dir = DMA_MEM_TO_MEM;
ucc->remote_thread_id = -1;
ucc->atype = 0;
+ ucc->asel = 0;
+ return false;
+ }
+
+ if (ud->match_data->type == DMA_TYPE_BCDMA &&
+ ep_config->pkt_mode) {
+ dev_err(ud->dev,
+ "Only TR mode is supported (psi-l thread 0x%04x)\n",
+ ucc->remote_thread_id);
+ ucc->dir = DMA_MEM_TO_MEM;
+ ucc->remote_thread_id = -1;
+ ucc->atype = 0;
+ ucc->asel = 0;
return false;
}
@@ -3071,6 +3807,13 @@ static bool udma_dma_filter_fn(struct dma_chan *chan, void *param)
ucc->remote_thread_id, dmaengine_get_direction_text(ucc->dir));
return true;
+
+triggered_bchan:
+ dev_dbg(ud->dev, "chan%d: triggered channel (type: %u)\n", uc->id,
+ ucc->tr_trigger_type);
+
+ return true;
+
}
static struct dma_chan *udma_of_xlate(struct of_phandle_args *dma_spec,
@@ -3081,14 +3824,33 @@ static struct dma_chan *udma_of_xlate(struct of_phandle_args *dma_spec,
struct udma_filter_param filter_param;
struct dma_chan *chan;
- if (dma_spec->args_count != 1 && dma_spec->args_count != 2)
- return NULL;
+ if (ud->match_data->type == DMA_TYPE_BCDMA) {
+ if (dma_spec->args_count != 3)
+ return NULL;
- filter_param.remote_thread_id = dma_spec->args[0];
- if (dma_spec->args_count == 2)
- filter_param.atype = dma_spec->args[1];
- else
+ filter_param.tr_trigger_type = dma_spec->args[0];
+ filter_param.remote_thread_id = dma_spec->args[1];
+ filter_param.asel = dma_spec->args[2];
filter_param.atype = 0;
+ } else {
+ if (dma_spec->args_count != 1 && dma_spec->args_count != 2)
+ return NULL;
+
+ filter_param.remote_thread_id = dma_spec->args[0];
+ filter_param.tr_trigger_type = 0;
+ if (dma_spec->args_count == 2) {
+ if (ud->match_data->type == DMA_TYPE_UDMA) {
+ filter_param.atype = dma_spec->args[1];
+ filter_param.asel = 0;
+ } else {
+ filter_param.atype = 0;
+ filter_param.asel = dma_spec->args[1];
+ }
+ } else {
+ filter_param.atype = 0;
+ filter_param.asel = 0;
+ }
+ }
chan = __dma_request_channel(&mask, udma_dma_filter_fn, &filter_param,
ofdma->of_node);
@@ -3101,18 +3863,21 @@ static struct dma_chan *udma_of_xlate(struct of_phandle_args *dma_spec,
}
static struct udma_match_data am654_main_data = {
+ .type = DMA_TYPE_UDMA,
.psil_base = 0x1000,
.enable_memcpy_support = true,
.statictr_z_mask = GENMASK(11, 0),
};
static struct udma_match_data am654_mcu_data = {
+ .type = DMA_TYPE_UDMA,
.psil_base = 0x6000,
.enable_memcpy_support = false,
.statictr_z_mask = GENMASK(11, 0),
};
static struct udma_match_data j721e_main_data = {
+ .type = DMA_TYPE_UDMA,
.psil_base = 0x1000,
.enable_memcpy_support = true,
.flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST | UDMA_FLAG_TDTYPE,
@@ -3120,12 +3885,21 @@ static struct udma_match_data j721e_main_data = {
};
static struct udma_match_data j721e_mcu_data = {
+ .type = DMA_TYPE_UDMA,
.psil_base = 0x6000,
.enable_memcpy_support = false, /* MEM_TO_MEM is slow via MCU UDMA */
.flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST | UDMA_FLAG_TDTYPE,
.statictr_z_mask = GENMASK(23, 0),
};
+static struct udma_match_data am64_bcdma_data = {
+ .type = DMA_TYPE_BCDMA,
+ .psil_base = 0x2000, /* for tchan and rchan, not applicable to bchan */
+ .enable_memcpy_support = true, /* Supported via bchan */
+ .flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST | UDMA_FLAG_TDTYPE,
+ .statictr_z_mask = GENMASK(23, 0),
+};
+
static const struct of_device_id udma_of_match[] = {
{
.compatible = "ti,am654-navss-main-udmap",
@@ -3144,31 +3918,91 @@ static const struct of_device_id udma_of_match[] = {
{ /* Sentinel */ },
};
+static const struct of_device_id bcdma_of_match[] = {
+ {
+ .compatible = "ti,am64-dmss-bcdma",
+ .data = &am64_bcdma_data,
+ },
+ { /* Sentinel */ },
+};
+
static struct udma_soc_data am654_soc_data = {
- .rchan_oes_offset = 0x200,
+ .oes = {
+ .udma_rchan = 0x200,
+ },
};
static struct udma_soc_data j721e_soc_data = {
- .rchan_oes_offset = 0x400,
+ .oes = {
+ .udma_rchan = 0x400,
+ },
};
static struct udma_soc_data j7200_soc_data = {
- .rchan_oes_offset = 0x80,
+ .oes = {
+ .udma_rchan = 0x80,
+ },
+};
+
+static struct udma_soc_data am64_soc_data = {
+ .oes = {
+ .bcdma_bchan_data = 0x2200,
+ .bcdma_bchan_ring = 0x2400,
+ .bcdma_tchan_data = 0x2800,
+ .bcdma_tchan_ring = 0x2a00,
+ .bcdma_rchan_data = 0x2e00,
+ .bcdma_rchan_ring = 0x3000,
+ },
+ .bcdma_trigger_event_offset = 0xc400,
};
static const struct soc_device_attribute k3_soc_devices[] = {
{ .family = "AM65X", .data = &am654_soc_data },
{ .family = "J721E", .data = &j721e_soc_data },
{ .family = "J7200", .data = &j7200_soc_data },
+ { .family = "AM64", .data = &am64_soc_data },
{ /* sentinel */ }
};
static int udma_get_mmrs(struct platform_device *pdev, struct udma_dev *ud)
{
struct resource *res;
+ u32 cap2, cap3;
int i;
- for (i = 0; i < MMR_LAST; i++) {
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+ mmr_names[MMR_GCFG]);
+ ud->mmrs[MMR_GCFG] = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(ud->mmrs[MMR_GCFG]))
+ return PTR_ERR(ud->mmrs[MMR_GCFG]);
+
+ cap2 = udma_read(ud->mmrs[MMR_GCFG], 0x28);
+ cap3 = udma_read(ud->mmrs[MMR_GCFG], 0x2c);
+
+ switch (ud->match_data->type) {
+ case DMA_TYPE_UDMA:
+ ud->rflow_cnt = UDMA_CAP3_RFLOW_CNT(cap3);
+ ud->tchan_cnt = UDMA_CAP2_TCHAN_CNT(cap2);
+ ud->echan_cnt = UDMA_CAP2_ECHAN_CNT(cap2);
+ ud->rchan_cnt = UDMA_CAP2_RCHAN_CNT(cap2);
+ break;
+ case DMA_TYPE_BCDMA:
+ ud->bchan_cnt = BCDMA_CAP2_BCHAN_CNT(cap2);
+ ud->tchan_cnt = BCDMA_CAP2_TCHAN_CNT(cap2);
+ ud->rchan_cnt = BCDMA_CAP2_RCHAN_CNT(cap2);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ for (i = 1; i < MMR_LAST; i++) {
+ if (i == MMR_BCHANRT && ud->bchan_cnt == 0)
+ continue;
+ if (i == MMR_TCHANRT && ud->tchan_cnt == 0)
+ continue;
+ if (i == MMR_RCHANRT && ud->rchan_cnt == 0)
+ continue;
+
res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
mmr_names[i]);
ud->mmrs[i] = devm_ioremap_resource(&pdev->dev, res);
@@ -3190,27 +4024,23 @@ static void udma_mark_resource_ranges(struct udma_dev *ud, unsigned long *map,
rm_desc->num_sec);
}
+static const char * const range_names[] = {
+ [RM_RANGE_BCHAN] = "ti,sci-rm-range-bchan",
+ [RM_RANGE_TCHAN] = "ti,sci-rm-range-tchan",
+ [RM_RANGE_RCHAN] = "ti,sci-rm-range-rchan",
+ [RM_RANGE_RFLOW] = "ti,sci-rm-range-rflow"
+};
+
static int udma_setup_resources(struct udma_dev *ud)
{
+ int ret, i, j;
struct device *dev = ud->dev;
- int ch_count, ret, i, j;
- u32 cap2, cap3;
struct ti_sci_resource *rm_res, irq_res;
struct udma_tisci_rm *tisci_rm = &ud->tisci_rm;
- static const char * const range_names[] = { "ti,sci-rm-range-tchan",
- "ti,sci-rm-range-rchan",
- "ti,sci-rm-range-rflow" };
-
- cap2 = udma_read(ud->mmrs[MMR_GCFG], UDMA_CAP_REG(2));
- cap3 = udma_read(ud->mmrs[MMR_GCFG], UDMA_CAP_REG(3));
-
- ud->rflow_cnt = UDMA_CAP3_RFLOW_CNT(cap3);
- ud->tchan_cnt = UDMA_CAP2_TCHAN_CNT(cap2);
- ud->echan_cnt = UDMA_CAP2_ECHAN_CNT(cap2);
- ud->rchan_cnt = UDMA_CAP2_RCHAN_CNT(cap2);
- ch_count = ud->tchan_cnt + ud->rchan_cnt;
+ u32 cap3;
/* Set up the throughput level start indexes */
+ cap3 = udma_read(ud->mmrs[MMR_GCFG], 0x2c);
if (of_device_is_compatible(dev->of_node,
"ti,am654-navss-main-udmap")) {
ud->tpl_levels = 2;
@@ -3268,11 +4098,15 @@ static int udma_setup_resources(struct udma_dev *ud)
bitmap_set(ud->rflow_gp_map, 0, ud->rflow_cnt);
/* Get resource ranges from tisci */
- for (i = 0; i < RM_RANGE_LAST; i++)
+ for (i = 0; i < RM_RANGE_LAST; i++) {
+ if (i == RM_RANGE_BCHAN)
+ continue;
+
tisci_rm->rm_ranges[i] =
devm_ti_sci_get_of_resource(tisci_rm->tisci, dev,
tisci_rm->tisci_dev_id,
(char *)range_names[i]);
+ }
/* tchan ranges */
rm_res = tisci_rm->rm_ranges[RM_RANGE_TCHAN];
@@ -3310,12 +4144,12 @@ static int udma_setup_resources(struct udma_dev *ud)
for (j = 0; j < rm_res->sets; j++, i++) {
if (rm_res->desc[j].num) {
irq_res.desc[i].start = rm_res->desc[j].start +
- ud->soc_data->rchan_oes_offset;
+ ud->soc_data->oes.udma_rchan;
irq_res.desc[i].num = rm_res->desc[j].num;
}
if (rm_res->desc[j].num_sec) {
irq_res.desc[i].start_sec = rm_res->desc[j].start_sec +
- ud->soc_data->rchan_oes_offset;
+ ud->soc_data->oes.udma_rchan;
irq_res.desc[i].num_sec = rm_res->desc[j].num_sec;
}
}
@@ -3338,6 +4172,174 @@ static int udma_setup_resources(struct udma_dev *ud)
&rm_res->desc[i], "gp-rflow");
}
+ return 0;
+}
+
+static int bcdma_setup_resources(struct udma_dev *ud)
+{
+ int ret, i, j;
+ struct device *dev = ud->dev;
+ struct ti_sci_resource *rm_res, irq_res;
+ struct udma_tisci_rm *tisci_rm = &ud->tisci_rm;
+ const struct udma_oes_offsets *oes = &ud->soc_data->oes;
+
+ ud->bchan_map = devm_kmalloc_array(dev, BITS_TO_LONGS(ud->bchan_cnt),
+ sizeof(unsigned long), GFP_KERNEL);
+ ud->bchans = devm_kcalloc(dev, ud->bchan_cnt, sizeof(*ud->bchans),
+ GFP_KERNEL);
+ ud->tchan_map = devm_kmalloc_array(dev, BITS_TO_LONGS(ud->tchan_cnt),
+ sizeof(unsigned long), GFP_KERNEL);
+ ud->tchans = devm_kcalloc(dev, ud->tchan_cnt, sizeof(*ud->tchans),
+ GFP_KERNEL);
+ ud->rchan_map = devm_kmalloc_array(dev, BITS_TO_LONGS(ud->rchan_cnt),
+ sizeof(unsigned long), GFP_KERNEL);
+ ud->rchans = devm_kcalloc(dev, ud->rchan_cnt, sizeof(*ud->rchans),
+ GFP_KERNEL);
+	/* BCDMA does not really have flows, but the driver expects them */
+ ud->rflow_in_use = devm_kcalloc(dev, BITS_TO_LONGS(ud->rchan_cnt),
+ sizeof(unsigned long),
+ GFP_KERNEL);
+ ud->rflows = devm_kcalloc(dev, ud->rchan_cnt, sizeof(*ud->rflows),
+ GFP_KERNEL);
+
+ if (!ud->bchan_map || !ud->tchan_map || !ud->rchan_map ||
+ !ud->rflow_in_use || !ud->bchans || !ud->tchans || !ud->rchans ||
+ !ud->rflows)
+ return -ENOMEM;
+
+ /* TPL is not yet supported for BCDMA */
+ ud->tpl_levels = 1;
+
+ /* Get resource ranges from tisci */
+ for (i = 0; i < RM_RANGE_LAST; i++) {
+ if (i == RM_RANGE_RFLOW)
+ continue;
+ if (i == RM_RANGE_BCHAN && ud->bchan_cnt == 0)
+ continue;
+ if (i == RM_RANGE_TCHAN && ud->tchan_cnt == 0)
+ continue;
+ if (i == RM_RANGE_RCHAN && ud->rchan_cnt == 0)
+ continue;
+
+ tisci_rm->rm_ranges[i] =
+ devm_ti_sci_get_of_resource(tisci_rm->tisci, dev,
+ tisci_rm->tisci_dev_id,
+ (char *)range_names[i]);
+ }
+
+ irq_res.sets = 0;
+
+ /* bchan ranges */
+ if (ud->bchan_cnt) {
+ rm_res = tisci_rm->rm_ranges[RM_RANGE_BCHAN];
+ if (IS_ERR(rm_res)) {
+ bitmap_zero(ud->bchan_map, ud->bchan_cnt);
+ } else {
+ bitmap_fill(ud->bchan_map, ud->bchan_cnt);
+ for (i = 0; i < rm_res->sets; i++)
+ udma_mark_resource_ranges(ud, ud->bchan_map,
+ &rm_res->desc[i],
+ "bchan");
+ }
+ irq_res.sets += rm_res->sets;
+ }
+
+ /* tchan ranges */
+ if (ud->tchan_cnt) {
+ rm_res = tisci_rm->rm_ranges[RM_RANGE_TCHAN];
+ if (IS_ERR(rm_res)) {
+ bitmap_zero(ud->tchan_map, ud->tchan_cnt);
+ } else {
+ bitmap_fill(ud->tchan_map, ud->tchan_cnt);
+ for (i = 0; i < rm_res->sets; i++)
+ udma_mark_resource_ranges(ud, ud->tchan_map,
+ &rm_res->desc[i],
+ "tchan");
+ }
+ irq_res.sets += rm_res->sets * 2;
+ }
+
+ /* rchan ranges */
+ if (ud->rchan_cnt) {
+ rm_res = tisci_rm->rm_ranges[RM_RANGE_RCHAN];
+ if (IS_ERR(rm_res)) {
+ bitmap_zero(ud->rchan_map, ud->rchan_cnt);
+ } else {
+ bitmap_fill(ud->rchan_map, ud->rchan_cnt);
+ for (i = 0; i < rm_res->sets; i++)
+ udma_mark_resource_ranges(ud, ud->rchan_map,
+ &rm_res->desc[i],
+ "rchan");
+ }
+ irq_res.sets += rm_res->sets * 2;
+ }
+
+ irq_res.desc = kcalloc(irq_res.sets, sizeof(*irq_res.desc), GFP_KERNEL);
+ if (ud->bchan_cnt) {
+ rm_res = tisci_rm->rm_ranges[RM_RANGE_BCHAN];
+ for (i = 0; i < rm_res->sets; i++) {
+ irq_res.desc[i].start = rm_res->desc[i].start +
+ oes->bcdma_bchan_ring;
+ irq_res.desc[i].num = rm_res->desc[i].num;
+ }
+ }
+ if (ud->tchan_cnt) {
+ rm_res = tisci_rm->rm_ranges[RM_RANGE_TCHAN];
+ for (j = 0; j < rm_res->sets; j++, i += 2) {
+ irq_res.desc[i].start = rm_res->desc[j].start +
+ oes->bcdma_tchan_data;
+ irq_res.desc[i].num = rm_res->desc[j].num;
+
+ irq_res.desc[i + 1].start = rm_res->desc[j].start +
+ oes->bcdma_tchan_ring;
+ irq_res.desc[i + 1].num = rm_res->desc[j].num;
+ }
+ }
+ if (ud->rchan_cnt) {
+ rm_res = tisci_rm->rm_ranges[RM_RANGE_RCHAN];
+ for (j = 0; j < rm_res->sets; j++, i += 2) {
+ irq_res.desc[i].start = rm_res->desc[j].start +
+ oes->bcdma_rchan_data;
+ irq_res.desc[i].num = rm_res->desc[j].num;
+
+ irq_res.desc[i + 1].start = rm_res->desc[j].start +
+ oes->bcdma_rchan_ring;
+ irq_res.desc[i + 1].num = rm_res->desc[j].num;
+ }
+ }
+
+ ret = ti_sci_inta_msi_domain_alloc_irqs(ud->dev, &irq_res);
+ kfree(irq_res.desc);
+ if (ret) {
+ dev_err(ud->dev, "Failed to allocate MSI interrupts\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+static int setup_resources(struct udma_dev *ud)
+{
+ struct device *dev = ud->dev;
+ int ch_count, ret;
+
+ switch (ud->match_data->type) {
+ case DMA_TYPE_UDMA:
+ ret = udma_setup_resources(ud);
+ break;
+ case DMA_TYPE_BCDMA:
+ ret = bcdma_setup_resources(ud);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ if (ret)
+ return ret;
+
+ ch_count = ud->bchan_cnt + ud->tchan_cnt + ud->rchan_cnt;
+ if (ud->bchan_cnt)
+ ch_count -= bitmap_weight(ud->bchan_map, ud->bchan_cnt);
ch_count -= bitmap_weight(ud->tchan_map, ud->tchan_cnt);
ch_count -= bitmap_weight(ud->rchan_map, ud->rchan_cnt);
if (!ch_count)
@@ -3348,12 +4350,32 @@ static int udma_setup_resources(struct udma_dev *ud)
if (!ud->channels)
return -ENOMEM;
- dev_info(dev, "Channels: %d (tchan: %u, rchan: %u, gp-rflow: %u)\n",
- ch_count,
- ud->tchan_cnt - bitmap_weight(ud->tchan_map, ud->tchan_cnt),
- ud->rchan_cnt - bitmap_weight(ud->rchan_map, ud->rchan_cnt),
- ud->rflow_cnt - bitmap_weight(ud->rflow_gp_map,
- ud->rflow_cnt));
+ switch (ud->match_data->type) {
+ case DMA_TYPE_UDMA:
+ dev_info(dev,
+ "Channels: %d (tchan: %u, rchan: %u, gp-rflow: %u)\n",
+ ch_count,
+ ud->tchan_cnt - bitmap_weight(ud->tchan_map,
+ ud->tchan_cnt),
+ ud->rchan_cnt - bitmap_weight(ud->rchan_map,
+ ud->rchan_cnt),
+ ud->rflow_cnt - bitmap_weight(ud->rflow_gp_map,
+ ud->rflow_cnt));
+ break;
+ case DMA_TYPE_BCDMA:
+ dev_info(dev,
+ "Channels: %d (bchan: %u, tchan: %u, rchan: %u)\n",
+ ch_count,
+ ud->bchan_cnt - bitmap_weight(ud->bchan_map,
+ ud->bchan_cnt),
+ ud->tchan_cnt - bitmap_weight(ud->tchan_map,
+ ud->tchan_cnt),
+ ud->rchan_cnt - bitmap_weight(ud->rchan_map,
+ ud->rchan_cnt));
+ break;
+ default:
+ break;
+ }
return ch_count;
}
@@ -3462,10 +4484,19 @@ static void udma_dbg_summary_show_chan(struct seq_file *s,
seq_printf(s, " %-13s| %s", dma_chan_name(chan),
chan->dbg_client_name ?: "in-use");
- seq_printf(s, " (%s, ", dmaengine_get_direction_text(uc->config.dir));
+ if (ucc->tr_trigger_type)
+ seq_puts(s, " (triggered, ");
+ else
+ seq_printf(s, " (%s, ",
+ dmaengine_get_direction_text(uc->config.dir));
switch (uc->config.dir) {
case DMA_MEM_TO_MEM:
+ if (uc->ud->match_data->type == DMA_TYPE_BCDMA) {
+ seq_printf(s, "bchan%d)\n", uc->bchan->id);
+ return;
+ }
+
seq_printf(s, "chan%d pair [0x%04x -> 0x%04x], ", uc->tchan->id,
ucc->src_thread, ucc->dst_thread);
break;
@@ -3537,6 +4568,23 @@ static int udma_probe(struct platform_device *pdev)
if (!ud)
return -ENOMEM;
+ match = of_match_node(udma_of_match, dev->of_node);
+ if (!match) {
+ match = of_match_node(bcdma_of_match, dev->of_node);
+ if (!match) {
+ dev_err(dev, "No compatible match found\n");
+ return -ENODEV;
+ }
+ }
+ ud->match_data = match->data;
+
+ soc = soc_device_match(k3_soc_devices);
+ if (!soc) {
+ dev_err(dev, "No compatible SoC found\n");
+ return -ENODEV;
+ }
+ ud->soc_data = soc->data;
+
ret = udma_get_mmrs(pdev, ud);
if (ret)
return ret;
@@ -3560,16 +4608,38 @@ static int udma_probe(struct platform_device *pdev)
return ret;
}
- ret = of_property_read_u32(dev->of_node, "ti,udma-atype", &ud->atype);
- if (!ret && ud->atype > 2) {
- dev_err(dev, "Invalid atype: %u\n", ud->atype);
- return -EINVAL;
+ if (ud->match_data->type == DMA_TYPE_UDMA) {
+ ret = of_property_read_u32(dev->of_node, "ti,udma-atype",
+ &ud->atype);
+ if (!ret && ud->atype > 2) {
+ dev_err(dev, "Invalid atype: %u\n", ud->atype);
+ return -EINVAL;
+ }
+ } else {
+ ret = of_property_read_u32(dev->of_node, "ti,asel",
+ &ud->asel);
+ if (!ret && ud->asel > 15) {
+ dev_err(dev, "Invalid asel: %u\n", ud->asel);
+ return -EINVAL;
+ }
}
ud->tisci_rm.tisci_udmap_ops = &ud->tisci_rm.tisci->ops.rm_udmap_ops;
ud->tisci_rm.tisci_psil_ops = &ud->tisci_rm.tisci->ops.rm_psil_ops;
- ud->ringacc = of_k3_ringacc_get_by_phandle(dev->of_node, "ti,ringacc");
+ if (ud->match_data->type == DMA_TYPE_UDMA) {
+ ud->ringacc = of_k3_ringacc_get_by_phandle(dev->of_node, "ti,ringacc");
+ } else {
+ struct k3_ringacc_init_data ring_init_data;
+
+ ring_init_data.tisci = ud->tisci_rm.tisci;
+ ring_init_data.tisci_dev_id = ud->tisci_rm.tisci_dev_id;
+ ring_init_data.num_rings = ud->bchan_cnt + ud->tchan_cnt +
+ ud->rchan_cnt;
+
+ ud->ringacc = k3_ringacc_dmarings_init(pdev, &ring_init_data);
+ }
+
if (IS_ERR(ud->ringacc))
return PTR_ERR(ud->ringacc);
@@ -3580,24 +4650,9 @@ static int udma_probe(struct platform_device *pdev)
return -EPROBE_DEFER;
}
- match = of_match_node(udma_of_match, dev->of_node);
- if (!match) {
- dev_err(dev, "No compatible match found\n");
- return -ENODEV;
- }
- ud->match_data = match->data;
-
- soc = soc_device_match(k3_soc_devices);
- if (!soc) {
- dev_err(dev, "No compatible SoC found\n");
- return -ENODEV;
- }
- ud->soc_data = soc->data;
-
dma_cap_set(DMA_SLAVE, ud->ddev.cap_mask);
dma_cap_set(DMA_CYCLIC, ud->ddev.cap_mask);
- ud->ddev.device_alloc_chan_resources = udma_alloc_chan_resources;
ud->ddev.device_config = udma_slave_config;
ud->ddev.device_prep_slave_sg = udma_prep_slave_sg;
ud->ddev.device_prep_dma_cyclic = udma_prep_dma_cyclic;
@@ -3611,7 +4666,21 @@ static int udma_probe(struct platform_device *pdev)
ud->ddev.dbg_summary_show = udma_dbg_summary_show;
#endif
+ switch (ud->match_data->type) {
+ case DMA_TYPE_UDMA:
+ ud->ddev.device_alloc_chan_resources =
+ udma_alloc_chan_resources;
+ break;
+ case DMA_TYPE_BCDMA:
+ ud->ddev.device_alloc_chan_resources =
+ bcdma_alloc_chan_resources;
+ ud->ddev.device_router_config = bcdma_router_config;
+ break;
+ default:
+ return -EINVAL;
+ }
ud->ddev.device_free_chan_resources = udma_free_chan_resources;
+
ud->ddev.src_addr_widths = TI_UDMAC_BUSWIDTHS;
ud->ddev.dst_addr_widths = TI_UDMAC_BUSWIDTHS;
ud->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
@@ -3619,7 +4688,8 @@ static int udma_probe(struct platform_device *pdev)
ud->ddev.copy_align = DMAENGINE_ALIGN_8_BYTES;
ud->ddev.desc_metadata_modes = DESC_METADATA_CLIENT |
DESC_METADATA_ENGINE;
- if (ud->match_data->enable_memcpy_support) {
+ if (ud->match_data->enable_memcpy_support &&
+ !(ud->match_data->type == DMA_TYPE_BCDMA && ud->bchan_cnt == 0)) {
dma_cap_set(DMA_MEMCPY, ud->ddev.cap_mask);
ud->ddev.device_prep_dma_memcpy = udma_prep_dma_memcpy;
ud->ddev.directions |= BIT(DMA_MEM_TO_MEM);
@@ -3632,7 +4702,7 @@ static int udma_probe(struct platform_device *pdev)
INIT_LIST_HEAD(&ud->ddev.channels);
INIT_LIST_HEAD(&ud->desc_to_purge);
- ch_count = udma_setup_resources(ud);
+ ch_count = setup_resources(ud);
if (ch_count <= 0)
return ch_count;
@@ -3647,6 +4717,13 @@ static int udma_probe(struct platform_device *pdev)
if (ret)
return ret;
+ for (i = 0; i < ud->bchan_cnt; i++) {
+ struct udma_bchan *bchan = &ud->bchans[i];
+
+ bchan->id = i;
+ bchan->reg_rt = ud->mmrs[MMR_BCHANRT] + i * 0x1000;
+ }
+
for (i = 0; i < ud->tchan_cnt; i++) {
struct udma_tchan *tchan = &ud->tchans[i];
@@ -3673,6 +4750,7 @@ static int udma_probe(struct platform_device *pdev)
uc->ud = ud;
uc->vc.desc_free = udma_desc_free;
uc->id = i;
+ uc->bchan = NULL;
uc->tchan = NULL;
uc->rchan = NULL;
uc->config.remote_thread_id = -1;
@@ -3715,5 +4793,15 @@ static struct platform_driver udma_driver = {
};
builtin_platform_driver(udma_driver);
+static struct platform_driver bcdma_driver = {
+ .driver = {
+ .name = "ti-bcdma",
+ .of_match_table = bcdma_of_match,
+ .suppress_bind_attrs = true,
+ },
+ .probe = udma_probe,
+};
+builtin_platform_driver(bcdma_driver);
+
/* Private interfaces to UDMA */
#include "k3-udma-private.c"
diff --git a/drivers/dma/ti/k3-udma.h b/drivers/dma/ti/k3-udma.h
index d1cace0cb43b..bf78ad94354a 100644
--- a/drivers/dma/ti/k3-udma.h
+++ b/drivers/dma/ti/k3-udma.h
@@ -18,7 +18,7 @@
#define UDMA_RX_FLOW_ID_FW_OES_REG 0x80
#define UDMA_RX_FLOW_ID_FW_STATUS_REG 0x88
-/* TCHANRT/RCHANRT registers */
+/* BCHANRT/TCHANRT/RCHANRT registers */
#define UDMA_CHAN_RT_CTL_REG 0x0
#define UDMA_CHAN_RT_SWTRIG_REG 0x8
#define UDMA_CHAN_RT_STDATA_REG 0x80
@@ -45,6 +45,10 @@
#define UDMA_CAP3_HCHAN_CNT(val) (((val) >> 14) & 0x1ff)
#define UDMA_CAP3_UCHAN_CNT(val) (((val) >> 23) & 0x1ff)
+#define BCDMA_CAP2_BCHAN_CNT(val) ((val) & 0x1ff)
+#define BCDMA_CAP2_TCHAN_CNT(val) (((val) >> 9) & 0x1ff)
+#define BCDMA_CAP2_RCHAN_CNT(val) (((val) >> 18) & 0x1ff)
+
/* UDMA_CHAN_RT_CTL_REG */
#define UDMA_CHAN_RT_CTL_EN BIT(31)
#define UDMA_CHAN_RT_CTL_TDOWN BIT(30)
@@ -82,13 +86,17 @@
*/
#define PDMA_STATIC_TR_Z(x, mask) ((x) & (mask))
+/* Address Space Select */
+#define K3_ADDRESS_ASEL_SHIFT 48
+
struct udma_dev;
struct udma_tchan;
struct udma_rchan;
struct udma_rflow;
enum udma_rm_range {
- RM_RANGE_TCHAN = 0,
+ RM_RANGE_BCHAN = 0,
+ RM_RANGE_TCHAN,
RM_RANGE_RCHAN,
RM_RANGE_RFLOW,
RM_RANGE_LAST,
--
Peter
Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki
Rings in RING mode should use the DMA device for the DMA API, as in this
mode the ringacc does not access the ring memory in any way, but the DMA
does.
Fix up the ring configuration to set the dma_dev unconditionally and let
the ringacc driver select the correct device to use for the DMA API.
Signed-off-by: Peter Ujfalusi <[email protected]>
---
drivers/dma/ti/k3-udma-glue.c | 8 ++++++++
1 file changed, 8 insertions(+)
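As an illustrative sketch (not part of the patch; the "tx0" name and the
function around it are made up), the dma_dev selection stays transparent
to glue users - the rings end up configured against the DMA device inside
the request call:

static int example_setup_tx(struct device *dev)
{
	/* Hypothetical, incomplete configuration; real users also fill
	 * in ring sizes, descriptor sizes and PSI-L endpoint parameters.
	 */
	struct k3_udma_glue_tx_channel_cfg cfg = { 0 };
	struct k3_udma_glue_tx_channel *tx_chn;

	tx_chn = k3_udma_glue_request_tx_chn(dev, "tx0", &cfg);
	if (IS_ERR(tx_chn))
		return PTR_ERR(tx_chn);

	/* The ring memory is now mapped for the device returned by
	 * k3_udma_glue_tx_get_dma_device(), i.e. the DMA itself.
	 */
	return 0;
}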
diff --git a/drivers/dma/ti/k3-udma-glue.c b/drivers/dma/ti/k3-udma-glue.c
index a53bc4707ae8..f39825ce288a 100644
--- a/drivers/dma/ti/k3-udma-glue.c
+++ b/drivers/dma/ti/k3-udma-glue.c
@@ -280,6 +280,10 @@ struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev,
goto err;
}
+ /* Set the dma_dev for the rings to be configured */
+ cfg->tx_cfg.dma_dev = k3_udma_glue_tx_get_dma_device(tx_chn);
+ cfg->txcq_cfg.dma_dev = cfg->tx_cfg.dma_dev;
+
ret = k3_ringacc_ring_cfg(tx_chn->ringtx, &cfg->tx_cfg);
if (ret) {
dev_err(dev, "Failed to cfg ringtx %d\n", ret);
@@ -589,6 +593,10 @@ static int k3_udma_glue_cfg_rx_flow(struct k3_udma_glue_rx_channel *rx_chn,
goto err_rflow_put;
}
+ /* Set the dma_dev for the rings to be configured */
+ flow_cfg->rx_cfg.dma_dev = k3_udma_glue_rx_get_dma_device(rx_chn);
+ flow_cfg->rxfdq_cfg.dma_dev = flow_cfg->rx_cfg.dma_dev;
+
ret = k3_ringacc_ring_cfg(flow->ringrx, &flow_cfg->rx_cfg);
if (ret) {
dev_err(dev, "Failed to cfg ringrx %d\n", ret);
--
Peter
Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki
One of the DMAs introduced with AM64 is the Packet DMA (PKTDMA).
It serves a similar purpose to K3 UDMAP channels in packet mode, but with
notable differences, such as tflow support and channels being allocated to
service specific peripherals.
The rings for PKTDMA are integrated within the DMA itself instead of
using rings from the general purpose ringacc.
PKTDMA can be used to service PSI-L peripherals, similarly to
K3 UDMA channels.
Most of the driver code can be reused for PKTDMA tchan/rchan support, but
new setup and allocation functions are needed to handle the differences
between the DMAs.
Signed-off-by: Peter Ujfalusi <[email protected]>
---
drivers/dma/ti/k3-udma-private.c | 9 +
drivers/dma/ti/k3-udma.c | 546 +++++++++++++++++++++++++++++--
drivers/dma/ti/k3-udma.h | 4 +
3 files changed, 534 insertions(+), 25 deletions(-)
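For context, a hypothetical client-side sketch (names made up, error
handling trimmed): PKTDMA channels are requested through the regular
dmaengine API, and the mapped channel plus default flow for the
peripheral are resolved by the filter function from the PSI-L endpoint
configuration:

static int example_use_pktdma(struct device *dev)
{
	struct dma_chan *chan;

	/* "rx" refers to a dmas/dma-names entry in the client's DT node */
	chan = dma_request_chan(dev, "rx");
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	/* ... dmaengine_slave_config(), dmaengine_prep_slave_sg(),
	 * submit and issue as with any other dmaengine provider ...
	 */

	dma_release_channel(chan);
	return 0;
}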
diff --git a/drivers/dma/ti/k3-udma-private.c b/drivers/dma/ti/k3-udma-private.c
index 8ff7a264be03..f0cecd29cff1 100644
--- a/drivers/dma/ti/k3-udma-private.c
+++ b/drivers/dma/ti/k3-udma-private.c
@@ -82,6 +82,9 @@ EXPORT_SYMBOL(xudma_free_gp_rflow_range);
bool xudma_rflow_is_gp(struct udma_dev *ud, int id)
{
+ if (!ud->rflow_gp_map)
+ return false;
+
return !test_bit(id, ud->rflow_gp_map);
}
EXPORT_SYMBOL(xudma_rflow_is_gp);
@@ -113,6 +116,12 @@ void xudma_rflow_put(struct udma_dev *ud, struct udma_rflow *p)
}
EXPORT_SYMBOL(xudma_rflow_put);
+int xudma_get_rflow_ring_offset(struct udma_dev *ud)
+{
+ return ud->tflow_cnt;
+}
+EXPORT_SYMBOL(xudma_get_rflow_ring_offset);
+
#define XUDMA_GET_RESOURCE_ID(res) \
int xudma_##res##_get_id(struct udma_##res *p) \
{ \
diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
index 69f2c43354d4..bcec3d5c7be1 100644
--- a/drivers/dma/ti/k3-udma.c
+++ b/drivers/dma/ti/k3-udma.c
@@ -59,6 +59,7 @@ struct udma_chan;
enum k3_dma_type {
DMA_TYPE_UDMA = 0,
DMA_TYPE_BCDMA,
+ DMA_TYPE_PKTDMA,
};
enum udma_mmr {
@@ -82,6 +83,8 @@ struct udma_tchan {
int id;
struct k3_ring *t_ring; /* Transmit ring */
struct k3_ring *tc_ring; /* Transmit Completion ring */
+ int tflow_id; /* applicable only for PKTDMA */
+
};
#define udma_bchan udma_tchan
@@ -109,6 +112,10 @@ struct udma_oes_offsets {
u32 bcdma_tchan_ring;
u32 bcdma_rchan_data;
u32 bcdma_rchan_ring;
+
+ /* PKTDMA Output Event Offsets */
+ u32 pktdma_tchan_flow;
+ u32 pktdma_rchan_flow;
};
#define UDMA_FLAG_PDMA_ACC32 BIT(0)
@@ -179,12 +186,14 @@ struct udma_dev {
int echan_cnt;
int rchan_cnt;
int rflow_cnt;
+ int tflow_cnt;
unsigned long *bchan_map;
unsigned long *tchan_map;
unsigned long *rchan_map;
unsigned long *rflow_gp_map;
unsigned long *rflow_gp_map_allocated;
unsigned long *rflow_in_use;
+ unsigned long *tflow_map;
struct udma_bchan *bchans;
struct udma_tchan *tchans;
@@ -249,6 +258,11 @@ struct udma_chan_config {
u32 tr_trigger_type;
+	/* PKTDMA mapped channel */
+ int mapped_channel_id;
+ /* PKTDMA default tflow or rflow for mapped channel */
+ int default_flow_id;
+
enum dma_transfer_direction dir;
};
@@ -426,6 +440,8 @@ static void udma_reset_uchan(struct udma_chan *uc)
{
memset(&uc->config, 0, sizeof(uc->config));
uc->config.remote_thread_id = -1;
+ uc->config.mapped_channel_id = -1;
+ uc->config.default_flow_id = -1;
uc->state = UDMA_CHAN_IS_IDLE;
}
@@ -815,10 +831,16 @@ static void udma_start_desc(struct udma_chan *uc)
{
struct udma_chan_config *ucc = &uc->config;
- if (ucc->pkt_mode && (uc->cyclic || ucc->dir == DMA_DEV_TO_MEM)) {
+ if (uc->ud->match_data->type == DMA_TYPE_UDMA && ucc->pkt_mode &&
+ (uc->cyclic || ucc->dir == DMA_DEV_TO_MEM)) {
int i;
- /* Push all descriptors to ring for packet mode cyclic or RX */
+ /*
+ * UDMA only: Push all descriptors to ring for packet mode
+ * cyclic or RX
+		 * PKTDMA supports pre-linked descriptors and cyclic is not
+		 * supported
+ */
for (i = 0; i < uc->desc->sglen; i++)
udma_push_to_ring(uc, i);
} else {
@@ -1250,10 +1272,12 @@ static struct udma_rflow *__udma_get_rflow(struct udma_dev *ud, int id)
if (test_bit(id, ud->rflow_in_use))
return ERR_PTR(-ENOENT);
- /* GP rflow has to be allocated first */
- if (!test_bit(id, ud->rflow_gp_map) &&
- !test_bit(id, ud->rflow_gp_map_allocated))
- return ERR_PTR(-EINVAL);
+ if (ud->rflow_gp_map) {
+ /* GP rflow has to be allocated first */
+ if (!test_bit(id, ud->rflow_gp_map) &&
+ !test_bit(id, ud->rflow_gp_map_allocated))
+ return ERR_PTR(-EINVAL);
+ }
dev_dbg(ud->dev, "get rflow%d\n", id);
set_bit(id, ud->rflow_in_use);
@@ -1343,9 +1367,39 @@ static int udma_get_tchan(struct udma_chan *uc)
return 0;
}
- uc->tchan = __udma_reserve_tchan(ud, uc->config.channel_tpl, -1);
+ /*
+ * mapped_channel_id is -1 for UDMA, BCDMA and PKTDMA unmapped channels.
+ * For PKTDMA mapped channels it is configured to a channel which must
+ * be used to service the peripheral.
+ */
+ uc->tchan = __udma_reserve_tchan(ud, uc->config.channel_tpl,
+ uc->config.mapped_channel_id);
+ if (IS_ERR(uc->tchan))
+ return PTR_ERR(uc->tchan);
+
+ if (ud->tflow_cnt) {
+ int tflow_id;
+
+		/* Only PKTDMA has support for tx flows */
+ if (uc->config.default_flow_id >= 0)
+ tflow_id = uc->config.default_flow_id;
+ else
+ tflow_id = uc->tchan->id;
+
+ if (test_bit(tflow_id, ud->tflow_map)) {
+ dev_err(ud->dev, "tflow%d is in use\n", tflow_id);
+ clear_bit(uc->tchan->id, ud->tchan_map);
+ uc->tchan = NULL;
+ return -ENOENT;
+ }
+
+ uc->tchan->tflow_id = tflow_id;
+ set_bit(tflow_id, ud->tflow_map);
+ } else {
+ uc->tchan->tflow_id = -1;
+ }
- return PTR_ERR_OR_ZERO(uc->tchan);
+ return 0;
}
static int udma_get_rchan(struct udma_chan *uc)
@@ -1358,7 +1412,13 @@ static int udma_get_rchan(struct udma_chan *uc)
return 0;
}
- uc->rchan = __udma_reserve_rchan(ud, uc->config.channel_tpl, -1);
+ /*
+ * mapped_channel_id is -1 for UDMA, BCDMA and PKTDMA unmapped channels.
+ * For PKTDMA mapped channels it is configured to a channel which must
+ * be used to service the peripheral.
+ */
+ uc->rchan = __udma_reserve_rchan(ud, uc->config.channel_tpl,
+ uc->config.mapped_channel_id);
return PTR_ERR_OR_ZERO(uc->rchan);
}
@@ -1405,6 +1465,9 @@ static int udma_get_chan_pair(struct udma_chan *uc)
uc->tchan = &ud->tchans[chan_id];
uc->rchan = &ud->rchans[chan_id];
+ /* UDMA does not use tx flows */
+ uc->tchan->tflow_id = -1;
+
return 0;
}
@@ -1461,6 +1524,10 @@ static void udma_put_tchan(struct udma_chan *uc)
dev_dbg(ud->dev, "chan%d: put tchan%d\n", uc->id,
uc->tchan->id);
clear_bit(uc->tchan->id, ud->tchan_map);
+
+ if (uc->tchan->tflow_id >= 0)
+ clear_bit(uc->tchan->tflow_id, ud->tflow_map);
+
uc->tchan = NULL;
}
}
@@ -1561,7 +1628,10 @@ static int udma_alloc_tx_resources(struct udma_chan *uc)
return ret;
tchan = uc->tchan;
- ring_idx = ud->bchan_cnt + tchan->id;
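+	/*
+	 * PKTDMA: the mapped tx ring is indexed by the tflow; on UDMA and
+	 * BCDMA the rings follow the channels (after the bchans).
+	 */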
+ if (tchan->tflow_id >= 0)
+ ring_idx = tchan->tflow_id;
+ else
+ ring_idx = ud->bchan_cnt + tchan->id;
ret = k3_ringacc_request_rings_pair(ud->ringacc, ring_idx, -1,
&tchan->t_ring,
@@ -1638,15 +1708,23 @@ static int udma_alloc_rx_resources(struct udma_chan *uc)
if (uc->config.dir == DMA_MEM_TO_MEM)
return 0;
- ret = udma_get_rflow(uc, uc->rchan->id);
+ if (uc->config.default_flow_id >= 0)
+ ret = udma_get_rflow(uc, uc->config.default_flow_id);
+ else
+ ret = udma_get_rflow(uc, uc->rchan->id);
+
if (ret) {
ret = -EBUSY;
goto err_rflow;
}
rflow = uc->rflow;
- fd_ring_id = ud->bchan_cnt + ud->tchan_cnt + ud->echan_cnt +
- uc->rchan->id;
+ if (ud->tflow_cnt)
+ fd_ring_id = ud->tflow_cnt + rflow->id;
+ else
+ fd_ring_id = ud->bchan_cnt + ud->tchan_cnt + ud->echan_cnt +
+ uc->rchan->id;
+
ret = k3_ringacc_request_rings_pair(ud->ringacc, fd_ring_id, -1,
&rflow->fd_ring, &rflow->r_ring);
if (ret) {
@@ -1862,6 +1940,8 @@ static int bcdma_tisci_tx_channel_config(struct udma_chan *uc)
return ret;
}
+#define pktdma_tisci_tx_channel_config bcdma_tisci_tx_channel_config
+
static int udma_tisci_rx_channel_config(struct udma_chan *uc)
{
struct udma_dev *ud = uc->ud;
@@ -1963,6 +2043,52 @@ static int bcdma_tisci_rx_channel_config(struct udma_chan *uc)
return ret;
}
+static int pktdma_tisci_rx_channel_config(struct udma_chan *uc)
+{
+ struct udma_dev *ud = uc->ud;
+ struct udma_tisci_rm *tisci_rm = &ud->tisci_rm;
+ const struct ti_sci_rm_udmap_ops *tisci_ops = tisci_rm->tisci_udmap_ops;
+ struct ti_sci_msg_rm_udmap_rx_ch_cfg req_rx = { 0 };
+ struct ti_sci_msg_rm_udmap_flow_cfg flow_req = { 0 };
+ int ret = 0;
+
+ req_rx.valid_params = TISCI_BCDMA_RCHAN_VALID_PARAMS;
+ req_rx.nav_id = tisci_rm->tisci_dev_id;
+ req_rx.index = uc->rchan->id;
+
+ ret = tisci_ops->rx_ch_cfg(tisci_rm->tisci, &req_rx);
+ if (ret) {
+ dev_err(ud->dev, "rchan%d cfg failed %d\n", uc->rchan->id, ret);
+ return ret;
+ }
+
+ flow_req.valid_params =
+ TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_EINFO_PRESENT_VALID |
+ TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_PSINFO_PRESENT_VALID |
+ TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_ERROR_HANDLING_VALID;
+
+ flow_req.nav_id = tisci_rm->tisci_dev_id;
+ flow_req.flow_index = uc->rflow->id;
+
+ if (uc->config.needs_epib)
+ flow_req.rx_einfo_present = 1;
+ else
+ flow_req.rx_einfo_present = 0;
+ if (uc->config.psd_size)
+ flow_req.rx_psinfo_present = 1;
+ else
+ flow_req.rx_psinfo_present = 0;
+ flow_req.rx_error_handling = 1;
+
+ ret = tisci_ops->rx_flow_cfg(tisci_rm->tisci, &flow_req);
+
+ if (ret)
+ dev_err(ud->dev, "flow%d config failed: %d\n", uc->rflow->id,
+ ret);
+
+ return ret;
+}
+
static int udma_alloc_chan_resources(struct dma_chan *chan)
{
struct udma_chan *uc = to_udma_chan(chan);
@@ -2381,6 +2507,157 @@ static int bcdma_router_config(struct dma_chan *chan)
return router_data->set_event(router_data->priv, trigger_event);
}
+static int pktdma_alloc_chan_resources(struct dma_chan *chan)
+{
+ struct udma_chan *uc = to_udma_chan(chan);
+ struct udma_dev *ud = to_udma_dev(chan->device);
+ const struct udma_oes_offsets *oes = &ud->soc_data->oes;
+ u32 irq_ring_idx;
+ int ret;
+
+ /*
+ * Make sure that the completion is in a known state:
+ * No teardown, the channel is idle
+ */
+ reinit_completion(&uc->teardown_completed);
+ complete_all(&uc->teardown_completed);
+ uc->state = UDMA_CHAN_IS_IDLE;
+
+ switch (uc->config.dir) {
+ case DMA_MEM_TO_DEV:
+		/* Slave transfer synchronized - mem to dev (TX) transfer */
+ dev_dbg(uc->ud->dev, "%s: chan%d as MEM-to-DEV\n", __func__,
+ uc->id);
+
+ ret = udma_alloc_tx_resources(uc);
+ if (ret) {
+ uc->config.remote_thread_id = -1;
+ return ret;
+ }
+
+ uc->config.src_thread = ud->psil_base + uc->tchan->id;
+ uc->config.dst_thread = uc->config.remote_thread_id;
+ uc->config.dst_thread |= K3_PSIL_DST_THREAD_ID_OFFSET;
+
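+		/* PKTDMA raises the ring completion event per tflow */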
+ irq_ring_idx = uc->tchan->tflow_id + oes->pktdma_tchan_flow;
+
+ ret = pktdma_tisci_tx_channel_config(uc);
+ break;
+ case DMA_DEV_TO_MEM:
+		/* Slave transfer synchronized - dev to mem (RX) transfer */
+ dev_dbg(uc->ud->dev, "%s: chan%d as DEV-to-MEM\n", __func__,
+ uc->id);
+
+ ret = udma_alloc_rx_resources(uc);
+ if (ret) {
+ uc->config.remote_thread_id = -1;
+ return ret;
+ }
+
+ uc->config.src_thread = uc->config.remote_thread_id;
+ uc->config.dst_thread = (ud->psil_base + uc->rchan->id) |
+ K3_PSIL_DST_THREAD_ID_OFFSET;
+
+ irq_ring_idx = uc->rflow->id + oes->pktdma_rchan_flow;
+
+ ret = pktdma_tisci_rx_channel_config(uc);
+ break;
+ default:
+		/* Cannot happen */
+ dev_err(uc->ud->dev, "%s: chan%d invalid direction (%u)\n",
+ __func__, uc->id, uc->config.dir);
+ return -EINVAL;
+ }
+
+ /* check if the channel configuration was successful */
+ if (ret)
+ goto err_res_free;
+
+ if (udma_is_chan_running(uc)) {
+ dev_warn(ud->dev, "chan%d: is running!\n", uc->id);
+ udma_reset_chan(uc, false);
+ if (udma_is_chan_running(uc)) {
+ dev_err(ud->dev, "chan%d: won't stop!\n", uc->id);
+ ret = -EBUSY;
+ goto err_res_free;
+ }
+ }
+
+ uc->dma_dev = dmaengine_get_dma_device(chan);
+ uc->hdesc_pool = dma_pool_create(uc->name, uc->dma_dev,
+ uc->config.hdesc_size, ud->desc_align,
+ 0);
+ if (!uc->hdesc_pool) {
+ dev_err(ud->ddev.dev,
+ "Descriptor pool allocation failed\n");
+ uc->use_dma_pool = false;
+ ret = -ENOMEM;
+ goto err_res_free;
+ }
+
+ uc->use_dma_pool = true;
+
+ /* PSI-L pairing */
+ ret = navss_psil_pair(ud, uc->config.src_thread, uc->config.dst_thread);
+ if (ret) {
+ dev_err(ud->dev, "PSI-L pairing failed: 0x%04x -> 0x%04x\n",
+ uc->config.src_thread, uc->config.dst_thread);
+ goto err_res_free;
+ }
+
+ uc->psil_paired = true;
+
+ uc->irq_num_ring = ti_sci_inta_msi_get_virq(ud->dev, irq_ring_idx);
+ if (uc->irq_num_ring <= 0) {
+ dev_err(ud->dev, "Failed to get ring irq (index: %u)\n",
+ irq_ring_idx);
+ ret = -EINVAL;
+ goto err_psi_free;
+ }
+
+ ret = request_irq(uc->irq_num_ring, udma_ring_irq_handler,
+ IRQF_TRIGGER_HIGH, uc->name, uc);
+ if (ret) {
+ dev_err(ud->dev, "chan%d: ring irq request failed\n", uc->id);
+ goto err_irq_free;
+ }
+
+ uc->irq_num_udma = 0;
+
+ udma_reset_rings(uc);
+
+ INIT_DELAYED_WORK_ONSTACK(&uc->tx_drain.work,
+ udma_check_tx_completion);
+
+ if (uc->tchan)
+ dev_dbg(ud->dev,
+ "chan%d: tchan%d, tflow%d, Remote thread: 0x%04x\n",
+ uc->id, uc->tchan->id, uc->tchan->tflow_id,
+ uc->config.remote_thread_id);
+ else if (uc->rchan)
+ dev_dbg(ud->dev,
+ "chan%d: rchan%d, rflow%d, Remote thread: 0x%04x\n",
+ uc->id, uc->rchan->id, uc->rflow->id,
+ uc->config.remote_thread_id);
+ return 0;
+
+err_irq_free:
+ uc->irq_num_ring = 0;
+err_psi_free:
+ navss_psil_unpair(ud, uc->config.src_thread, uc->config.dst_thread);
+ uc->psil_paired = false;
+err_res_free:
+ udma_free_tx_resources(uc);
+ udma_free_rx_resources(uc);
+
+ udma_reset_uchan(uc);
+
+ dma_pool_destroy(uc->hdesc_pool);
+ uc->use_dma_pool = false;
+
+ return ret;
+}
+
static int udma_slave_config(struct dma_chan *chan,
struct dma_slave_config *cfg)
{
@@ -2859,6 +3136,7 @@ udma_prep_slave_sg_pkt(struct udma_chan *uc, struct scatterlist *sgl,
struct udma_desc *d;
u32 ring_id;
unsigned int i;
+ u64 asel;
d = kzalloc(struct_size(d, hwdesc, sglen), GFP_NOWAIT);
if (!d)
@@ -2872,6 +3150,11 @@ udma_prep_slave_sg_pkt(struct udma_chan *uc, struct scatterlist *sgl,
else
ring_id = k3_ringacc_get_ring_id(uc->tchan->tc_ring);
+ if (uc->ud->match_data->type == DMA_TYPE_UDMA)
+ asel = 0;
+ else
+ asel = (u64)uc->config.asel << K3_ADDRESS_ASEL_SHIFT;
+
for_each_sg(sgl, sgent, sglen, i) {
struct udma_hwdesc *hwdesc = &d->hwdesc[i];
dma_addr_t sg_addr = sg_dma_address(sgent);
@@ -2906,14 +3189,16 @@ udma_prep_slave_sg_pkt(struct udma_chan *uc, struct scatterlist *sgl,
}
/* attach the sg buffer to the descriptor */
+ sg_addr |= asel;
cppi5_hdesc_attach_buf(desc, sg_addr, sg_len, sg_addr, sg_len);
/* Attach link as host buffer descriptor */
if (h_desc)
cppi5_hdesc_link_hbdesc(h_desc,
- hwdesc->cppi5_desc_paddr);
+ hwdesc->cppi5_desc_paddr | asel);
- if (dir == DMA_MEM_TO_DEV)
+ if (uc->ud->match_data->type == DMA_TYPE_PKTDMA ||
+ dir == DMA_MEM_TO_DEV)
h_desc = desc;
}
@@ -3192,6 +3477,9 @@ udma_prep_dma_cyclic_pkt(struct udma_chan *uc, dma_addr_t buf_addr,
else
ring_id = k3_ringacc_get_ring_id(uc->tchan->tc_ring);
+ if (uc->ud->match_data->type != DMA_TYPE_UDMA)
+ buf_addr |= (u64)uc->config.asel << K3_ADDRESS_ASEL_SHIFT;
+
for (i = 0; i < periods; i++) {
struct udma_hwdesc *hwdesc = &d->hwdesc[i];
dma_addr_t period_addr = buf_addr + (period_len * i);
@@ -3708,6 +3996,7 @@ static void udma_free_chan_resources(struct dma_chan *chan)
static struct platform_driver udma_driver;
static struct platform_driver bcdma_driver;
+static struct platform_driver pktdma_driver;
struct udma_filter_param {
int remote_thread_id;
@@ -3725,7 +4014,8 @@ static bool udma_dma_filter_fn(struct dma_chan *chan, void *param)
struct udma_dev *ud;
if (chan->device->dev->driver != &udma_driver.driver &&
- chan->device->dev->driver != &bcdma_driver.driver)
+ chan->device->dev->driver != &bcdma_driver.driver &&
+ chan->device->dev->driver != &pktdma_driver.driver)
return false;
uc = to_udma_chan(chan);
@@ -3787,6 +4077,15 @@ static bool udma_dma_filter_fn(struct dma_chan *chan, void *param)
ucc->notdpkt = ep_config->notdpkt;
ucc->ep_type = ep_config->ep_type;
+ if (ud->match_data->type == DMA_TYPE_PKTDMA &&
+ ep_config->mapped_channel_id >= 0) {
+ ucc->mapped_channel_id = ep_config->mapped_channel_id;
+ ucc->default_flow_id = ep_config->default_flow_id;
+ } else {
+ ucc->mapped_channel_id = -1;
+ ucc->default_flow_id = -1;
+ }
+
if (ucc->ep_type != PSIL_EP_NATIVE) {
const struct udma_match_data *match_data = ud->match_data;
@@ -3903,6 +4202,14 @@ static struct udma_match_data am64_bcdma_data = {
.statictr_z_mask = GENMASK(23, 0),
};
+static struct udma_match_data am64_pktdma_data = {
+ .type = DMA_TYPE_PKTDMA,
+ .psil_base = 0x1000,
+ .enable_memcpy_support = false, /* PKTDMA does not support MEM_TO_MEM */
+ .flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST | UDMA_FLAG_TDTYPE,
+ .statictr_z_mask = GENMASK(23, 0),
+};
+
static const struct of_device_id udma_of_match[] = {
{
.compatible = "ti,am654-navss-main-udmap",
@@ -3929,6 +4236,14 @@ static const struct of_device_id bcdma_of_match[] = {
{ /* Sentinel */ },
};
+static const struct of_device_id pktdma_of_match[] = {
+ {
+ .compatible = "ti,am64-dmss-pktdma",
+ .data = &am64_pktdma_data,
+ },
+ { /* Sentinel */ },
+};
+
static struct udma_soc_data am654_soc_data = {
.oes = {
.udma_rchan = 0x200,
@@ -3955,6 +4270,8 @@ static struct udma_soc_data am64_soc_data = {
.bcdma_tchan_ring = 0x2a00,
.bcdma_rchan_data = 0x2e00,
.bcdma_rchan_ring = 0x3000,
+ .pktdma_tchan_flow = 0x1200,
+ .pktdma_rchan_flow = 0x1600,
},
.bcdma_trigger_event_offset = 0xc400,
};
@@ -3970,7 +4287,7 @@ static const struct soc_device_attribute k3_soc_devices[] = {
static int udma_get_mmrs(struct platform_device *pdev, struct udma_dev *ud)
{
struct resource *res;
- u32 cap2, cap3;
+ u32 cap2, cap3, cap4;
int i;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
@@ -3994,6 +4311,13 @@ static int udma_get_mmrs(struct platform_device *pdev, struct udma_dev *ud)
ud->tchan_cnt = BCDMA_CAP2_TCHAN_CNT(cap2);
ud->rchan_cnt = BCDMA_CAP2_RCHAN_CNT(cap2);
break;
+ case DMA_TYPE_PKTDMA:
+ cap4 = udma_read(ud->mmrs[MMR_GCFG], 0x30);
+ ud->tchan_cnt = UDMA_CAP2_TCHAN_CNT(cap2);
+ ud->rchan_cnt = UDMA_CAP2_RCHAN_CNT(cap2);
+ ud->rflow_cnt = UDMA_CAP3_RFLOW_CNT(cap3);
+ ud->tflow_cnt = PKTDMA_CAP4_TFLOW_CNT(cap4);
+ break;
default:
return -EINVAL;
}
@@ -4031,7 +4355,8 @@ static const char * const range_names[] = {
[RM_RANGE_BCHAN] = "ti,sci-rm-range-bchan",
[RM_RANGE_TCHAN] = "ti,sci-rm-range-tchan",
[RM_RANGE_RCHAN] = "ti,sci-rm-range-rchan",
- [RM_RANGE_RFLOW] = "ti,sci-rm-range-rflow"
+ [RM_RANGE_RFLOW] = "ti,sci-rm-range-rflow",
+ [RM_RANGE_TFLOW] = "ti,sci-rm-range-tflow",
};
static int udma_setup_resources(struct udma_dev *ud)
@@ -4106,7 +4431,7 @@ static int udma_setup_resources(struct udma_dev *ud)
/* Get resource ranges from tisci */
for (i = 0; i < RM_RANGE_LAST; i++) {
- if (i == RM_RANGE_BCHAN)
+ if (i == RM_RANGE_BCHAN || i == RM_RANGE_TFLOW)
continue;
tisci_rm->rm_ranges[i] =
@@ -4256,7 +4581,7 @@ static int bcdma_setup_resources(struct udma_dev *ud)
/* Get resource ranges from tisci */
for (i = 0; i < RM_RANGE_LAST; i++) {
- if (i == RM_RANGE_RFLOW)
+ if (i == RM_RANGE_RFLOW || i == RM_RANGE_TFLOW)
continue;
if (i == RM_RANGE_BCHAN && ud->bchan_cnt == 0)
continue;
@@ -4362,6 +4687,135 @@ static int bcdma_setup_resources(struct udma_dev *ud)
return 0;
}
+static int pktdma_setup_resources(struct udma_dev *ud)
+{
+ int ret, i, j;
+ struct device *dev = ud->dev;
+ struct ti_sci_resource *rm_res, irq_res;
+ struct udma_tisci_rm *tisci_rm = &ud->tisci_rm;
+ const struct udma_oes_offsets *oes = &ud->soc_data->oes;
+ u32 cap3;
+
+ /* Set up the throughput level start indexes */
+ cap3 = udma_read(ud->mmrs[MMR_GCFG], 0x2c);
+ if (UDMA_CAP3_UCHAN_CNT(cap3)) {
+ ud->tchan_tpl.levels = 3;
+ ud->tchan_tpl.start_idx[1] = UDMA_CAP3_UCHAN_CNT(cap3);
+ ud->tchan_tpl.start_idx[0] = ud->tchan_tpl.start_idx[1] +
+ UDMA_CAP3_HCHAN_CNT(cap3);
+ } else if (UDMA_CAP3_HCHAN_CNT(cap3)) {
+ ud->tchan_tpl.levels = 2;
+ ud->tchan_tpl.start_idx[0] = UDMA_CAP3_HCHAN_CNT(cap3);
+ } else {
+ ud->tchan_tpl.levels = 1;
+ }
+
+	ud->rchan_tpl.levels = ud->tchan_tpl.levels;
+	ud->rchan_tpl.start_idx[0] = ud->tchan_tpl.start_idx[0];
+	ud->rchan_tpl.start_idx[1] = ud->tchan_tpl.start_idx[1];
+
+ ud->tchan_map = devm_kmalloc_array(dev, BITS_TO_LONGS(ud->tchan_cnt),
+ sizeof(unsigned long), GFP_KERNEL);
+ ud->tchans = devm_kcalloc(dev, ud->tchan_cnt, sizeof(*ud->tchans),
+ GFP_KERNEL);
+ ud->rchan_map = devm_kmalloc_array(dev, BITS_TO_LONGS(ud->rchan_cnt),
+ sizeof(unsigned long), GFP_KERNEL);
+ ud->rchans = devm_kcalloc(dev, ud->rchan_cnt, sizeof(*ud->rchans),
+ GFP_KERNEL);
+ ud->rflow_in_use = devm_kcalloc(dev, BITS_TO_LONGS(ud->rflow_cnt),
+ sizeof(unsigned long),
+ GFP_KERNEL);
+ ud->rflows = devm_kcalloc(dev, ud->rflow_cnt, sizeof(*ud->rflows),
+ GFP_KERNEL);
+ ud->tflow_map = devm_kmalloc_array(dev, BITS_TO_LONGS(ud->tflow_cnt),
+ sizeof(unsigned long), GFP_KERNEL);
+
+ if (!ud->tchan_map || !ud->rchan_map || !ud->tflow_map || !ud->tchans ||
+ !ud->rchans || !ud->rflows || !ud->rflow_in_use)
+ return -ENOMEM;
+
+ /* Get resource ranges from tisci */
+ for (i = 0; i < RM_RANGE_LAST; i++) {
+ if (i == RM_RANGE_BCHAN)
+ continue;
+
+ tisci_rm->rm_ranges[i] =
+ devm_ti_sci_get_of_resource(tisci_rm->tisci, dev,
+ tisci_rm->tisci_dev_id,
+ (char *)range_names[i]);
+ }
+
+ /* tchan ranges */
+ rm_res = tisci_rm->rm_ranges[RM_RANGE_TCHAN];
+ if (IS_ERR(rm_res)) {
+ bitmap_zero(ud->tchan_map, ud->tchan_cnt);
+ } else {
+ bitmap_fill(ud->tchan_map, ud->tchan_cnt);
+ for (i = 0; i < rm_res->sets; i++)
+ udma_mark_resource_ranges(ud, ud->tchan_map,
+ &rm_res->desc[i], "tchan");
+ }
+
+ /* rchan ranges */
+ rm_res = tisci_rm->rm_ranges[RM_RANGE_RCHAN];
+ if (IS_ERR(rm_res)) {
+ bitmap_zero(ud->rchan_map, ud->rchan_cnt);
+ } else {
+ bitmap_fill(ud->rchan_map, ud->rchan_cnt);
+ for (i = 0; i < rm_res->sets; i++)
+ udma_mark_resource_ranges(ud, ud->rchan_map,
+ &rm_res->desc[i], "rchan");
+ }
+
+ /* rflow ranges */
+ rm_res = tisci_rm->rm_ranges[RM_RANGE_RFLOW];
+ if (IS_ERR(rm_res)) {
+ /* all rflows are assigned exclusively to Linux */
+ bitmap_zero(ud->rflow_in_use, ud->rflow_cnt);
+ } else {
+ bitmap_fill(ud->rflow_in_use, ud->rflow_cnt);
+ for (i = 0; i < rm_res->sets; i++)
+ udma_mark_resource_ranges(ud, ud->rflow_in_use,
+ &rm_res->desc[i], "rflow");
+ }
+ irq_res.sets = rm_res->sets;
+
+ /* tflow ranges */
+ rm_res = tisci_rm->rm_ranges[RM_RANGE_TFLOW];
+ if (IS_ERR(rm_res)) {
+ /* all tflows are assigned exclusively to Linux */
+ bitmap_zero(ud->tflow_map, ud->tflow_cnt);
+ } else {
+ bitmap_fill(ud->tflow_map, ud->tflow_cnt);
+ for (i = 0; i < rm_res->sets; i++)
+ udma_mark_resource_ranges(ud, ud->tflow_map,
+ &rm_res->desc[i], "tflow");
+ }
+ irq_res.sets += rm_res->sets;
+
+ irq_res.desc = kcalloc(irq_res.sets, sizeof(*irq_res.desc), GFP_KERNEL);
+ rm_res = tisci_rm->rm_ranges[RM_RANGE_TFLOW];
+ for (i = 0; i < rm_res->sets; i++) {
+ irq_res.desc[i].start = rm_res->desc[i].start +
+ oes->pktdma_tchan_flow;
+ irq_res.desc[i].num = rm_res->desc[i].num;
+ }
+ rm_res = tisci_rm->rm_ranges[RM_RANGE_RFLOW];
+ for (j = 0; j < rm_res->sets; j++, i++) {
+ irq_res.desc[i].start = rm_res->desc[j].start +
+ oes->pktdma_rchan_flow;
+ irq_res.desc[i].num = rm_res->desc[j].num;
+ }
+ ret = ti_sci_inta_msi_domain_alloc_irqs(ud->dev, &irq_res);
+ kfree(irq_res.desc);
+ if (ret) {
+ dev_err(ud->dev, "Failed to allocate MSI interrupts\n");
+ return ret;
+ }
+
+ return 0;
+}
+
static int setup_resources(struct udma_dev *ud)
{
struct device *dev = ud->dev;
@@ -4374,6 +4828,9 @@ static int setup_resources(struct udma_dev *ud)
case DMA_TYPE_BCDMA:
ret = bcdma_setup_resources(ud);
break;
+ case DMA_TYPE_PKTDMA:
+ ret = pktdma_setup_resources(ud);
+ break;
default:
return -EINVAL;
}
@@ -4417,6 +4874,15 @@ static int setup_resources(struct udma_dev *ud)
ud->rchan_cnt - bitmap_weight(ud->rchan_map,
ud->rchan_cnt));
break;
+ case DMA_TYPE_PKTDMA:
+ dev_info(dev,
+ "Channels: %d (tchan: %u, rchan: %u)\n",
+ ch_count,
+ ud->tchan_cnt - bitmap_weight(ud->tchan_map,
+ ud->tchan_cnt),
+ ud->rchan_cnt - bitmap_weight(ud->rchan_map,
+ ud->rchan_cnt));
+ break;
default:
break;
}
@@ -4547,10 +5012,14 @@ static void udma_dbg_summary_show_chan(struct seq_file *s,
case DMA_DEV_TO_MEM:
seq_printf(s, "rchan%d [0x%04x -> 0x%04x], ", uc->rchan->id,
ucc->src_thread, ucc->dst_thread);
+ if (uc->ud->match_data->type == DMA_TYPE_PKTDMA)
+ seq_printf(s, "rflow%d, ", uc->rflow->id);
break;
case DMA_MEM_TO_DEV:
seq_printf(s, "tchan%d [0x%04x -> 0x%04x], ", uc->tchan->id,
ucc->src_thread, ucc->dst_thread);
+ if (uc->ud->match_data->type == DMA_TYPE_PKTDMA)
+ seq_printf(s, "tflow%d, ", uc->tchan->tflow_id);
break;
default:
seq_printf(s, ")\n");
@@ -4613,8 +5082,10 @@ static int udma_probe(struct platform_device *pdev)
return -ENOMEM;
match = of_match_node(udma_of_match, dev->of_node);
- if (!match) {
+ if (!match)
match = of_match_node(bcdma_of_match, dev->of_node);
+ if (!match) {
+ match = of_match_node(pktdma_of_match, dev->of_node);
if (!match) {
dev_err(dev, "No compatible match found\n");
return -ENODEV;
@@ -4678,8 +5149,14 @@ static int udma_probe(struct platform_device *pdev)
ring_init_data.tisci = ud->tisci_rm.tisci;
ring_init_data.tisci_dev_id = ud->tisci_rm.tisci_dev_id;
- ring_init_data.num_rings = ud->bchan_cnt + ud->tchan_cnt +
- ud->rchan_cnt;
+ if (ud->match_data->type == DMA_TYPE_BCDMA) {
+ ring_init_data.num_rings = ud->bchan_cnt +
+ ud->tchan_cnt +
+ ud->rchan_cnt;
+ } else {
+ ring_init_data.num_rings = ud->rflow_cnt +
+ ud->tflow_cnt;
+ }
ud->ringacc = k3_ringacc_dmarings_init(pdev, &ring_init_data);
}
@@ -4695,11 +5172,14 @@ static int udma_probe(struct platform_device *pdev)
}
dma_cap_set(DMA_SLAVE, ud->ddev.cap_mask);
- dma_cap_set(DMA_CYCLIC, ud->ddev.cap_mask);
+ /* cyclic operation is not supported via PKTDMA */
+ if (ud->match_data->type != DMA_TYPE_PKTDMA) {
+ dma_cap_set(DMA_CYCLIC, ud->ddev.cap_mask);
+ ud->ddev.device_prep_dma_cyclic = udma_prep_dma_cyclic;
+ }
ud->ddev.device_config = udma_slave_config;
ud->ddev.device_prep_slave_sg = udma_prep_slave_sg;
- ud->ddev.device_prep_dma_cyclic = udma_prep_dma_cyclic;
ud->ddev.device_issue_pending = udma_issue_pending;
ud->ddev.device_tx_status = udma_tx_status;
ud->ddev.device_pause = udma_pause;
@@ -4720,6 +5200,10 @@ static int udma_probe(struct platform_device *pdev)
bcdma_alloc_chan_resources;
ud->ddev.device_router_config = bcdma_router_config;
break;
+ case DMA_TYPE_PKTDMA:
+ ud->ddev.device_alloc_chan_resources =
+ pktdma_alloc_chan_resources;
+ break;
default:
return -EINVAL;
}
@@ -4798,6 +5282,8 @@ static int udma_probe(struct platform_device *pdev)
uc->tchan = NULL;
uc->rchan = NULL;
uc->config.remote_thread_id = -1;
+ uc->config.mapped_channel_id = -1;
+ uc->config.default_flow_id = -1;
uc->config.dir = DMA_MEM_TO_MEM;
uc->name = devm_kasprintf(dev, GFP_KERNEL, "%s chan%d",
dev_name(dev), i);
@@ -4847,5 +5333,15 @@ static struct platform_driver bcdma_driver = {
};
builtin_platform_driver(bcdma_driver);
+static struct platform_driver pktdma_driver = {
+ .driver = {
+ .name = "ti-pktdma",
+ .of_match_table = pktdma_of_match,
+ .suppress_bind_attrs = true,
+ },
+ .probe = udma_probe,
+};
+builtin_platform_driver(pktdma_driver);
+
/* Private interfaces to UDMA */
#include "k3-udma-private.c"
diff --git a/drivers/dma/ti/k3-udma.h b/drivers/dma/ti/k3-udma.h
index 8cb32681afaf..078cc3aa4126 100644
--- a/drivers/dma/ti/k3-udma.h
+++ b/drivers/dma/ti/k3-udma.h
@@ -55,6 +55,8 @@
#define BCDMA_CAP4_HTCHAN_CNT(val) (((val) >> 16) & 0xff)
#define BCDMA_CAP4_UTCHAN_CNT(val) (((val) >> 24) & 0xff)
+#define PKTDMA_CAP4_TFLOW_CNT(val) ((val) & 0x3fff)
+
/* UDMA_CHAN_RT_CTL_REG */
#define UDMA_CHAN_RT_CTL_EN BIT(31)
#define UDMA_CHAN_RT_CTL_TDOWN BIT(30)
@@ -105,6 +107,7 @@ enum udma_rm_range {
RM_RANGE_TCHAN,
RM_RANGE_RCHAN,
RM_RANGE_RFLOW,
+ RM_RANGE_TFLOW,
RM_RANGE_LAST,
};
@@ -151,5 +154,6 @@ void xudma_tchanrt_write(struct udma_tchan *tchan, int reg, u32 val);
u32 xudma_rchanrt_read(struct udma_rchan *rchan, int reg);
void xudma_rchanrt_write(struct udma_rchan *rchan, int reg, u32 val);
bool xudma_rflow_is_gp(struct udma_dev *ud, int id);
+int xudma_get_rflow_ring_offset(struct udma_dev *ud);
#endif /* K3_UDMA_H_ */
--
Peter
Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki
From: Vignesh Raghavendra <[email protected]>
This commit adds support for PKTDMA in the k3-udma glue driver. Use the
new psil_endpoint_config struct to get static data for a given channel
or flow during setup. Make sure that the RX flows being mapped to an RX
channel are within the range of flows that has been allocated to that
RX channel.
Signed-off-by: Vignesh Raghavendra <[email protected]>
Signed-off-by: Peter Ujfalusi <[email protected]>
---
drivers/dma/ti/k3-udma-glue.c | 272 ++++++++++++++++++++++++++-----
drivers/dma/ti/k3-udma-private.c | 24 +++
drivers/dma/ti/k3-udma.h | 4 +
include/linux/dma/k3-udma-glue.h | 8 +
4 files changed, 270 insertions(+), 38 deletions(-)
diff --git a/drivers/dma/ti/k3-udma-glue.c b/drivers/dma/ti/k3-udma-glue.c
index f39825ce288a..6730bc296043 100644
--- a/drivers/dma/ti/k3-udma-glue.c
+++ b/drivers/dma/ti/k3-udma-glue.c
@@ -22,6 +22,7 @@
struct k3_udma_glue_common {
struct device *dev;
+ struct device chan_dev;
struct udma_dev *udmax;
const struct udma_tisci_rm *tisci_rm;
struct k3_ringacc *ringacc;
@@ -32,7 +33,8 @@ struct k3_udma_glue_common {
bool epib;
u32 psdata_size;
u32 swdata_size;
- u32 atype;
+ u32 atype_asel;
+ struct psil_endpoint_config *ep_config;
};
struct k3_udma_glue_tx_channel {
@@ -53,6 +55,8 @@ struct k3_udma_glue_tx_channel {
bool tx_filt_einfo;
bool tx_filt_pswords;
bool tx_supr_tdpkt;
+
+ int udma_tflow_id;
};
struct k3_udma_glue_rx_flow {
@@ -104,7 +108,6 @@ static int of_k3_udma_glue_parse_chn(struct device_node *chn_np,
const char *name, struct k3_udma_glue_common *common,
bool tx_chn)
{
- struct psil_endpoint_config *ep_config;
struct of_phandle_args dma_spec;
u32 thread_id;
int ret = 0;
@@ -121,15 +124,26 @@ static int of_k3_udma_glue_parse_chn(struct device_node *chn_np,
&dma_spec))
return -ENOENT;
+ ret = of_k3_udma_glue_parse(dma_spec.np, common);
+ if (ret)
+ goto out_put_spec;
+
thread_id = dma_spec.args[0];
if (dma_spec.args_count == 2) {
- if (dma_spec.args[1] > 2) {
+ if (dma_spec.args[1] > 2 && !xudma_is_pktdma(common->udmax)) {
dev_err(common->dev, "Invalid channel atype: %u\n",
dma_spec.args[1]);
ret = -EINVAL;
goto out_put_spec;
}
- common->atype = dma_spec.args[1];
+ if (dma_spec.args[1] > 15 && xudma_is_pktdma(common->udmax)) {
+ dev_err(common->dev, "Invalid channel asel: %u\n",
+ dma_spec.args[1]);
+ ret = -EINVAL;
+ goto out_put_spec;
+ }
+
+ common->atype_asel = dma_spec.args[1];
}
if (tx_chn && !(thread_id & K3_PSIL_DST_THREAD_ID_OFFSET)) {
@@ -143,25 +157,23 @@ static int of_k3_udma_glue_parse_chn(struct device_node *chn_np,
}
/* get psil endpoint config */
- ep_config = psil_get_ep_config(thread_id);
- if (IS_ERR(ep_config)) {
+ common->ep_config = psil_get_ep_config(thread_id);
+ if (IS_ERR(common->ep_config)) {
dev_err(common->dev,
"No configuration for psi-l thread 0x%04x\n",
thread_id);
- ret = PTR_ERR(ep_config);
+ ret = PTR_ERR(common->ep_config);
goto out_put_spec;
}
- common->epib = ep_config->needs_epib;
- common->psdata_size = ep_config->psd_size;
+ common->epib = common->ep_config->needs_epib;
+ common->psdata_size = common->ep_config->psd_size;
if (tx_chn)
common->dst_thread = thread_id;
else
common->src_thread = thread_id;
- ret = of_k3_udma_glue_parse(dma_spec.np, common);
-
out_put_spec:
of_node_put(dma_spec.np);
return ret;
@@ -227,7 +239,7 @@ static int k3_udma_glue_cfg_tx_chn(struct k3_udma_glue_tx_channel *tx_chn)
req.tx_supr_tdpkt = 1;
req.tx_fetch_size = tx_chn->common.hdesc_size >> 2;
req.txcq_qnum = k3_ringacc_get_ring_id(tx_chn->ringtxcq);
- req.tx_atype = tx_chn->common.atype;
+ req.tx_atype = tx_chn->common.atype_asel;
return tisci_rm->tisci_udmap_ops->tx_ch_cfg(tisci_rm->tisci, &req);
}
@@ -259,8 +271,14 @@ struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev,
tx_chn->common.psdata_size,
tx_chn->common.swdata_size);
+ if (xudma_is_pktdma(tx_chn->common.udmax))
+ tx_chn->udma_tchan_id = tx_chn->common.ep_config->mapped_channel_id;
+ else
+ tx_chn->udma_tchan_id = -1;
+
/* request and cfg UDMAP TX channel */
- tx_chn->udma_tchanx = xudma_tchan_get(tx_chn->common.udmax, -1);
+ tx_chn->udma_tchanx = xudma_tchan_get(tx_chn->common.udmax,
+ tx_chn->udma_tchan_id);
if (IS_ERR(tx_chn->udma_tchanx)) {
ret = PTR_ERR(tx_chn->udma_tchanx);
dev_err(dev, "UDMAX tchanx get err %d\n", ret);
@@ -268,11 +286,33 @@ struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev,
}
tx_chn->udma_tchan_id = xudma_tchan_get_id(tx_chn->udma_tchanx);
+ tx_chn->common.chan_dev.parent = xudma_get_device(tx_chn->common.udmax);
+ dev_set_name(&tx_chn->common.chan_dev, "tchan%d-0x%04x",
+ tx_chn->udma_tchan_id, tx_chn->common.dst_thread);
+ ret = device_register(&tx_chn->common.chan_dev);
+ if (ret) {
+ dev_err(dev, "Channel Device registration failed %d\n", ret);
+ tx_chn->common.chan_dev.parent = NULL;
+ goto err;
+ }
+
+ if (xudma_is_pktdma(tx_chn->common.udmax)) {
+ /* prepare the channel device as coherent */
+ tx_chn->common.chan_dev.dma_coherent = true;
+ dma_coerce_mask_and_coherent(&tx_chn->common.chan_dev,
+ DMA_BIT_MASK(48));
+ }
+
atomic_set(&tx_chn->free_pkts, cfg->txcq_cfg.size);
+ if (xudma_is_pktdma(tx_chn->common.udmax))
+ tx_chn->udma_tflow_id = tx_chn->common.ep_config->default_flow_id;
+ else
+ tx_chn->udma_tflow_id = tx_chn->udma_tchan_id;
+
/* request and cfg rings */
ret = k3_ringacc_request_rings_pair(tx_chn->common.ringacc,
- tx_chn->udma_tchan_id, -1,
+ tx_chn->udma_tflow_id, -1,
&tx_chn->ringtx,
&tx_chn->ringtxcq);
if (ret) {
@@ -284,6 +324,12 @@ struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev,
cfg->tx_cfg.dma_dev = k3_udma_glue_tx_get_dma_device(tx_chn);
cfg->txcq_cfg.dma_dev = cfg->tx_cfg.dma_dev;
+ /* Set the ASEL value for DMA rings of PKTDMA */
+ if (xudma_is_pktdma(tx_chn->common.udmax)) {
+ cfg->tx_cfg.asel = tx_chn->common.atype_asel;
+ cfg->txcq_cfg.asel = tx_chn->common.atype_asel;
+ }
+
ret = k3_ringacc_ring_cfg(tx_chn->ringtx, &cfg->tx_cfg);
if (ret) {
dev_err(dev, "Failed to cfg ringtx %d\n", ret);
@@ -348,6 +394,11 @@ void k3_udma_glue_release_tx_chn(struct k3_udma_glue_tx_channel *tx_chn)
if (tx_chn->ringtx)
k3_ringacc_ring_free(tx_chn->ringtx);
+
+ if (tx_chn->common.chan_dev.parent) {
+ device_unregister(&tx_chn->common.chan_dev);
+ tx_chn->common.chan_dev.parent = NULL;
+ }
}
EXPORT_SYMBOL_GPL(k3_udma_glue_release_tx_chn);
@@ -441,13 +492,10 @@ void k3_udma_glue_reset_tx_chn(struct k3_udma_glue_tx_channel *tx_chn,
void *data,
void (*cleanup)(void *data, dma_addr_t desc_dma))
{
+ struct device *dev = tx_chn->common.dev;
dma_addr_t desc_dma;
int occ_tx, i, ret;
- /* reset TXCQ as it is not input for udma - expected to be empty */
- if (tx_chn->ringtxcq)
- k3_ringacc_ring_reset(tx_chn->ringtxcq);
-
/*
* TXQ reset need to be special way as it is input for udma and its
* state cached by udma, so:
@@ -456,17 +504,20 @@ void k3_udma_glue_reset_tx_chn(struct k3_udma_glue_tx_channel *tx_chn,
* 3) reset TXQ in a special way
*/
occ_tx = k3_ringacc_ring_get_occ(tx_chn->ringtx);
- dev_dbg(tx_chn->common.dev, "TX reset occ_tx %u\n", occ_tx);
+ dev_dbg(dev, "TX reset occ_tx %u\n", occ_tx);
for (i = 0; i < occ_tx; i++) {
ret = k3_ringacc_ring_pop(tx_chn->ringtx, &desc_dma);
if (ret) {
- dev_err(tx_chn->common.dev, "TX reset pop %d\n", ret);
+ if (ret != -ENODATA)
+ dev_err(dev, "TX reset pop %d\n", ret);
break;
}
cleanup(data, desc_dma);
}
+ /* reset TXCQ as it is not input for udma - expected to be empty */
+ k3_ringacc_ring_reset(tx_chn->ringtxcq);
k3_ringacc_ring_reset_dma(tx_chn->ringtx, occ_tx);
}
EXPORT_SYMBOL_GPL(k3_udma_glue_reset_tx_chn);
@@ -485,7 +536,12 @@ EXPORT_SYMBOL_GPL(k3_udma_glue_tx_get_txcq_id);
int k3_udma_glue_tx_get_irq(struct k3_udma_glue_tx_channel *tx_chn)
{
- tx_chn->virq = k3_ringacc_get_ring_irq_num(tx_chn->ringtxcq);
+ if (xudma_is_pktdma(tx_chn->common.udmax)) {
+ tx_chn->virq = xudma_pktdma_tflow_get_irq(tx_chn->common.udmax,
+ tx_chn->udma_tflow_id);
+ } else {
+ tx_chn->virq = k3_ringacc_get_ring_irq_num(tx_chn->ringtxcq);
+ }
return tx_chn->virq;
}
@@ -494,10 +550,36 @@ EXPORT_SYMBOL_GPL(k3_udma_glue_tx_get_irq);
struct device *
k3_udma_glue_tx_get_dma_device(struct k3_udma_glue_tx_channel *tx_chn)
{
+ if (xudma_is_pktdma(tx_chn->common.udmax) &&
+ (tx_chn->common.atype_asel == 14 || tx_chn->common.atype_asel == 15))
+ return &tx_chn->common.chan_dev;
+
return xudma_get_device(tx_chn->common.udmax);
}
EXPORT_SYMBOL_GPL(k3_udma_glue_tx_get_dma_device);
+void k3_udma_glue_tx_dma_to_cppi5_addr(struct k3_udma_glue_tx_channel *tx_chn,
+ dma_addr_t *addr)
+{
+ if (!xudma_is_pktdma(tx_chn->common.udmax) ||
+ !tx_chn->common.atype_asel)
+ return;
+
+ *addr |= (u64)tx_chn->common.atype_asel << K3_ADDRESS_ASEL_SHIFT;
+}
+EXPORT_SYMBOL_GPL(k3_udma_glue_tx_dma_to_cppi5_addr);
+
+void k3_udma_glue_tx_cppi5_to_dma_addr(struct k3_udma_glue_tx_channel *tx_chn,
+ dma_addr_t *addr)
+{
+ if (!xudma_is_pktdma(tx_chn->common.udmax) ||
+ !tx_chn->common.atype_asel)
+ return;
+
+ *addr &= (u64)GENMASK(K3_ADDRESS_ASEL_SHIFT - 1, 0);
+}
+EXPORT_SYMBOL_GPL(k3_udma_glue_tx_cppi5_to_dma_addr);
+
static int k3_udma_glue_cfg_rx_chn(struct k3_udma_glue_rx_channel *rx_chn)
{
const struct udma_tisci_rm *tisci_rm = rx_chn->common.tisci_rm;
@@ -509,8 +591,6 @@ static int k3_udma_glue_cfg_rx_chn(struct k3_udma_glue_rx_channel *rx_chn)
req.valid_params = TI_SCI_MSG_VALUE_RM_UDMAP_CH_FETCH_SIZE_VALID |
TI_SCI_MSG_VALUE_RM_UDMAP_CH_CQ_QNUM_VALID |
TI_SCI_MSG_VALUE_RM_UDMAP_CH_CHAN_TYPE_VALID |
- TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_FLOWID_START_VALID |
- TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_FLOWID_CNT_VALID |
TI_SCI_MSG_VALUE_RM_UDMAP_CH_ATYPE_VALID;
req.nav_id = tisci_rm->tisci_dev_id;
@@ -522,13 +602,16 @@ static int k3_udma_glue_cfg_rx_chn(struct k3_udma_glue_rx_channel *rx_chn)
* req.rxcq_qnum = k3_ringacc_get_ring_id(rx_chn->flows[0].ringrx);
*/
req.rxcq_qnum = 0xFFFF;
- if (rx_chn->flow_num && rx_chn->flow_id_base != rx_chn->udma_rchan_id) {
+ if (!xudma_is_pktdma(rx_chn->common.udmax) && rx_chn->flow_num &&
+ rx_chn->flow_id_base != rx_chn->udma_rchan_id) {
/* Default flow + extra ones */
+ req.valid_params |= TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_FLOWID_START_VALID |
+ TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_FLOWID_CNT_VALID;
req.flowid_start = rx_chn->flow_id_base;
req.flowid_cnt = rx_chn->flow_num;
}
req.rx_chan_type = TI_SCI_RM_UDMAP_CHAN_TYPE_PKT_PBRR;
- req.rx_atype = rx_chn->common.atype;
+ req.rx_atype = rx_chn->common.atype_asel;
ret = tisci_rm->tisci_udmap_ops->rx_ch_cfg(tisci_rm->tisci, &req);
if (ret)
@@ -582,10 +665,18 @@ static int k3_udma_glue_cfg_rx_flow(struct k3_udma_glue_rx_channel *rx_chn,
goto err_rflow_put;
}
+ if (xudma_is_pktdma(rx_chn->common.udmax)) {
+ rx_ring_id = flow->udma_rflow_id +
+ xudma_get_rflow_ring_offset(rx_chn->common.udmax);
+ rx_ringfdq_id = 0;
+ } else {
+ rx_ring_id = flow_cfg->ring_rxq_id;
+ rx_ringfdq_id = flow_cfg->ring_rxfdq0_id;
+ }
+
/* request and cfg rings */
ret = k3_ringacc_request_rings_pair(rx_chn->common.ringacc,
- flow_cfg->ring_rxfdq0_id,
- flow_cfg->ring_rxq_id,
+ rx_ringfdq_id, rx_ring_id,
&flow->ringrxfdq,
&flow->ringrx);
if (ret) {
@@ -597,6 +688,12 @@ static int k3_udma_glue_cfg_rx_flow(struct k3_udma_glue_rx_channel *rx_chn,
flow_cfg->rx_cfg.dma_dev = k3_udma_glue_rx_get_dma_device(rx_chn);
flow_cfg->rxfdq_cfg.dma_dev = flow_cfg->rx_cfg.dma_dev;
+ /* Set the ASEL value for DMA rings of PKTDMA */
+ if (xudma_is_pktdma(rx_chn->common.udmax)) {
+ flow_cfg->rx_cfg.asel = rx_chn->common.atype_asel;
+ flow_cfg->rxfdq_cfg.asel = rx_chn->common.atype_asel;
+ }
+
ret = k3_ringacc_ring_cfg(flow->ringrx, &flow_cfg->rx_cfg);
if (ret) {
dev_err(dev, "Failed to cfg ringrx %d\n", ret);
@@ -755,6 +852,7 @@ k3_udma_glue_request_rx_chn_priv(struct device *dev, const char *name,
struct k3_udma_glue_rx_channel_cfg *cfg)
{
struct k3_udma_glue_rx_channel *rx_chn;
+ struct psil_endpoint_config *ep_cfg;
int ret, i;
if (cfg->flow_id_num <= 0)
@@ -782,8 +880,16 @@ k3_udma_glue_request_rx_chn_priv(struct device *dev, const char *name,
rx_chn->common.psdata_size,
rx_chn->common.swdata_size);
+ ep_cfg = rx_chn->common.ep_config;
+
+ if (xudma_is_pktdma(rx_chn->common.udmax))
+ rx_chn->udma_rchan_id = ep_cfg->mapped_channel_id;
+ else
+ rx_chn->udma_rchan_id = -1;
+
/* request and cfg UDMAP RX channel */
- rx_chn->udma_rchanx = xudma_rchan_get(rx_chn->common.udmax, -1);
+ rx_chn->udma_rchanx = xudma_rchan_get(rx_chn->common.udmax,
+ rx_chn->udma_rchan_id);
if (IS_ERR(rx_chn->udma_rchanx)) {
ret = PTR_ERR(rx_chn->udma_rchanx);
dev_err(dev, "UDMAX rchanx get err %d\n", ret);
@@ -791,12 +897,47 @@ k3_udma_glue_request_rx_chn_priv(struct device *dev, const char *name,
}
rx_chn->udma_rchan_id = xudma_rchan_get_id(rx_chn->udma_rchanx);
- rx_chn->flow_num = cfg->flow_id_num;
- rx_chn->flow_id_base = cfg->flow_id_base;
+ rx_chn->common.chan_dev.parent = xudma_get_device(rx_chn->common.udmax);
+ dev_set_name(&rx_chn->common.chan_dev, "rchan%d-0x%04x",
+ rx_chn->udma_rchan_id, rx_chn->common.src_thread);
+ ret = device_register(&rx_chn->common.chan_dev);
+ if (ret) {
+ dev_err(dev, "Channel Device registration failed %d\n", ret);
+ rx_chn->common.chan_dev.parent = NULL;
+ goto err;
+ }
- /* Use RX channel id as flow id: target dev can't generate flow_id */
- if (cfg->flow_id_use_rxchan_id)
- rx_chn->flow_id_base = rx_chn->udma_rchan_id;
+ if (xudma_is_pktdma(rx_chn->common.udmax)) {
+ /* prepare the channel device as coherent */
+ rx_chn->common.chan_dev.dma_coherent = true;
+ dma_coerce_mask_and_coherent(&rx_chn->common.chan_dev,
+ DMA_BIT_MASK(48));
+ }
+
+ if (xudma_is_pktdma(rx_chn->common.udmax)) {
+ int flow_start = cfg->flow_id_base;
+ int flow_end;
+
+ if (flow_start == -1)
+ flow_start = ep_cfg->flow_start;
+
+ flow_end = flow_start + cfg->flow_id_num - 1;
+ if (flow_start < ep_cfg->flow_start ||
+ flow_end > (ep_cfg->flow_start + ep_cfg->flow_num - 1)) {
+ dev_err(dev, "Invalid flow range requested\n");
+ ret = -EINVAL;
+ goto err;
+ }
+ rx_chn->flow_id_base = flow_start;
+ } else {
+ rx_chn->flow_id_base = cfg->flow_id_base;
+
+ /* Use RX channel id as flow id: target dev can't generate flow_id */
+ if (cfg->flow_id_use_rxchan_id)
+ rx_chn->flow_id_base = rx_chn->udma_rchan_id;
+ }
+
+ rx_chn->flow_num = cfg->flow_id_num;
rx_chn->flows = devm_kcalloc(dev, rx_chn->flow_num,
sizeof(*rx_chn->flows), GFP_KERNEL);
@@ -899,6 +1040,23 @@ k3_udma_glue_request_remote_rx_chn(struct device *dev, const char *name,
goto err;
}
+ rx_chn->common.chan_dev.parent = xudma_get_device(rx_chn->common.udmax);
+ dev_set_name(&rx_chn->common.chan_dev, "rchan_remote-0x%04x",
+ rx_chn->common.src_thread);
+ ret = device_register(&rx_chn->common.chan_dev);
+ if (ret) {
+ dev_err(dev, "Channel Device registration failed %d\n", ret);
+ rx_chn->common.chan_dev.parent = NULL;
+ goto err;
+ }
+
+ if (xudma_is_pktdma(rx_chn->common.udmax)) {
+ /* prepare the channel device as coherent */
+ rx_chn->common.chan_dev.dma_coherent = true;
+ dma_coerce_mask_and_coherent(&rx_chn->common.chan_dev,
+ DMA_BIT_MASK(48));
+ }
+
ret = k3_udma_glue_allocate_rx_flows(rx_chn, cfg);
if (ret)
goto err;
@@ -951,6 +1109,11 @@ void k3_udma_glue_release_rx_chn(struct k3_udma_glue_rx_channel *rx_chn)
if (!IS_ERR_OR_NULL(rx_chn->udma_rchanx))
xudma_rchan_put(rx_chn->common.udmax,
rx_chn->udma_rchanx);
+
+ if (rx_chn->common.chan_dev.parent) {
+ device_unregister(&rx_chn->common.chan_dev);
+ rx_chn->common.chan_dev.parent = NULL;
+ }
}
EXPORT_SYMBOL_GPL(k3_udma_glue_release_rx_chn);
@@ -1143,12 +1306,10 @@ void k3_udma_glue_reset_rx_chn(struct k3_udma_glue_rx_channel *rx_chn,
/* reset RXCQ as it is not input for udma - expected to be empty */
occ_rx = k3_ringacc_ring_get_occ(flow->ringrx);
dev_dbg(dev, "RX reset flow %u occ_rx %u\n", flow_num, occ_rx);
- if (flow->ringrx)
- k3_ringacc_ring_reset(flow->ringrx);
/* Skip RX FDQ in case one FDQ is used for the set of flows */
if (skip_fdq)
- return;
+ goto do_reset;
/*
* RX FDQ reset need to be special way as it is input for udma and its
@@ -1163,13 +1324,17 @@ void k3_udma_glue_reset_rx_chn(struct k3_udma_glue_rx_channel *rx_chn,
for (i = 0; i < occ_rx; i++) {
ret = k3_ringacc_ring_pop(flow->ringrxfdq, &desc_dma);
if (ret) {
- dev_err(dev, "RX reset pop %d\n", ret);
+ if (ret != -ENODATA)
+ dev_err(dev, "RX reset pop %d\n", ret);
break;
}
cleanup(data, desc_dma);
}
k3_ringacc_ring_reset_dma(flow->ringrxfdq, occ_rx);
+
+do_reset:
+ k3_ringacc_ring_reset(flow->ringrx);
}
EXPORT_SYMBOL_GPL(k3_udma_glue_reset_rx_chn);
@@ -1199,7 +1364,12 @@ int k3_udma_glue_rx_get_irq(struct k3_udma_glue_rx_channel *rx_chn,
flow = &rx_chn->flows[flow_num];
- flow->virq = k3_ringacc_get_ring_irq_num(flow->ringrx);
+ if (xudma_is_pktdma(rx_chn->common.udmax)) {
+ flow->virq = xudma_pktdma_rflow_get_irq(rx_chn->common.udmax,
+ flow->udma_rflow_id);
+ } else {
+ flow->virq = k3_ringacc_get_ring_irq_num(flow->ringrx);
+ }
return flow->virq;
}
@@ -1208,6 +1378,32 @@ EXPORT_SYMBOL_GPL(k3_udma_glue_rx_get_irq);
struct device *
k3_udma_glue_rx_get_dma_device(struct k3_udma_glue_rx_channel *rx_chn)
{
+ if (xudma_is_pktdma(rx_chn->common.udmax) &&
+ (rx_chn->common.atype_asel == 14 || rx_chn->common.atype_asel == 15))
+ return &rx_chn->common.chan_dev;
+
return xudma_get_device(rx_chn->common.udmax);
}
EXPORT_SYMBOL_GPL(k3_udma_glue_rx_get_dma_device);
+
+void k3_udma_glue_rx_dma_to_cppi5_addr(struct k3_udma_glue_rx_channel *rx_chn,
+ dma_addr_t *addr)
+{
+ if (!xudma_is_pktdma(rx_chn->common.udmax) ||
+ !rx_chn->common.atype_asel)
+ return;
+
+ *addr |= (u64)rx_chn->common.atype_asel << K3_ADDRESS_ASEL_SHIFT;
+}
+EXPORT_SYMBOL_GPL(k3_udma_glue_rx_dma_to_cppi5_addr);
+
+void k3_udma_glue_rx_cppi5_to_dma_addr(struct k3_udma_glue_rx_channel *rx_chn,
+ dma_addr_t *addr)
+{
+ if (!xudma_is_pktdma(rx_chn->common.udmax) ||
+ !rx_chn->common.atype_asel)
+ return;
+
+ *addr &= (u64)GENMASK(K3_ADDRESS_ASEL_SHIFT - 1, 0);
+}
+EXPORT_SYMBOL_GPL(k3_udma_glue_rx_cppi5_to_dma_addr);
diff --git a/drivers/dma/ti/k3-udma-private.c b/drivers/dma/ti/k3-udma-private.c
index f0cecd29cff1..cef4890cfa42 100644
--- a/drivers/dma/ti/k3-udma-private.c
+++ b/drivers/dma/ti/k3-udma-private.c
@@ -151,3 +151,27 @@ void xudma_##res##rt_write(struct udma_##res *p, int reg, u32 val) \
EXPORT_SYMBOL(xudma_##res##rt_write)
XUDMA_RT_IO_FUNCTIONS(tchan);
XUDMA_RT_IO_FUNCTIONS(rchan);
+
+int xudma_is_pktdma(struct udma_dev *ud)
+{
+ return ud->match_data->type == DMA_TYPE_PKTDMA;
+}
+EXPORT_SYMBOL(xudma_is_pktdma);
+
+int xudma_pktdma_tflow_get_irq(struct udma_dev *ud, int udma_tflow_id)
+{
+ const struct udma_oes_offsets *oes = &ud->soc_data->oes;
+
+ return ti_sci_inta_msi_get_virq(ud->dev, udma_tflow_id +
+ oes->pktdma_tchan_flow);
+}
+EXPORT_SYMBOL(xudma_pktdma_tflow_get_irq);
+
+int xudma_pktdma_rflow_get_irq(struct udma_dev *ud, int udma_rflow_id)
+{
+ const struct udma_oes_offsets *oes = &ud->soc_data->oes;
+
+ return ti_sci_inta_msi_get_virq(ud->dev, udma_rflow_id +
+ oes->pktdma_rchan_flow);
+}
+EXPORT_SYMBOL(xudma_pktdma_rflow_get_irq);
diff --git a/drivers/dma/ti/k3-udma.h b/drivers/dma/ti/k3-udma.h
index 078cc3aa4126..c02080bb5866 100644
--- a/drivers/dma/ti/k3-udma.h
+++ b/drivers/dma/ti/k3-udma.h
@@ -156,4 +156,8 @@ void xudma_rchanrt_write(struct udma_rchan *rchan, int reg, u32 val);
bool xudma_rflow_is_gp(struct udma_dev *ud, int id);
int xudma_get_rflow_ring_offset(struct udma_dev *ud);
+int xudma_is_pktdma(struct udma_dev *ud);
+
+int xudma_pktdma_tflow_get_irq(struct udma_dev *ud, int udma_tflow_id);
+int xudma_pktdma_rflow_get_irq(struct udma_dev *ud, int udma_rflow_id);
#endif /* K3_UDMA_H_ */
diff --git a/include/linux/dma/k3-udma-glue.h b/include/linux/dma/k3-udma-glue.h
index d7c12f31377c..e443be4d3b4b 100644
--- a/include/linux/dma/k3-udma-glue.h
+++ b/include/linux/dma/k3-udma-glue.h
@@ -43,6 +43,10 @@ u32 k3_udma_glue_tx_get_txcq_id(struct k3_udma_glue_tx_channel *tx_chn);
int k3_udma_glue_tx_get_irq(struct k3_udma_glue_tx_channel *tx_chn);
struct device *
k3_udma_glue_tx_get_dma_device(struct k3_udma_glue_tx_channel *tx_chn);
+void k3_udma_glue_tx_dma_to_cppi5_addr(struct k3_udma_glue_tx_channel *tx_chn,
+ dma_addr_t *addr);
+void k3_udma_glue_tx_cppi5_to_dma_addr(struct k3_udma_glue_tx_channel *tx_chn,
+ dma_addr_t *addr);
enum {
K3_UDMA_GLUE_SRC_TAG_LO_KEEP = 0,
@@ -134,5 +138,9 @@ int k3_udma_glue_rx_flow_disable(struct k3_udma_glue_rx_channel *rx_chn,
u32 flow_idx);
struct device *
k3_udma_glue_rx_get_dma_device(struct k3_udma_glue_rx_channel *rx_chn);
+void k3_udma_glue_rx_dma_to_cppi5_addr(struct k3_udma_glue_rx_channel *rx_chn,
+ dma_addr_t *addr);
+void k3_udma_glue_rx_cppi5_to_dma_addr(struct k3_udma_glue_rx_channel *rx_chn,
+ dma_addr_t *addr);
#endif /* K3_UDMA_GLUE_H_ */
--
Peter
Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki
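To illustrate how a client is expected to use the new address conversion
helpers, here is a minimal sketch. The client functions and the descriptor
lookup are hypothetical; k3_udma_glue_push_tx_chn() and
k3_udma_glue_pop_tx_chn() are the existing glue APIs:

#include <linux/dma/k3-udma-glue.h>
#include <linux/dma/ti-cppi5.h>

/* hypothetical TX submit path of a PKTDMA client */
static int client_tx_push(struct k3_udma_glue_tx_channel *tx_chn,
			  struct cppi5_host_desc_t *desc,
			  dma_addr_t desc_dma)
{
	/* place the ASEL bits into the unused high bits of the address */
	k3_udma_glue_tx_dma_to_cppi5_addr(tx_chn, &desc_dma);

	return k3_udma_glue_push_tx_chn(tx_chn, desc, desc_dma);
}

/* hypothetical TX completion path */
static void client_tx_complete(struct k3_udma_glue_tx_channel *tx_chn)
{
	dma_addr_t desc_dma;

	while (!k3_udma_glue_pop_tx_chn(tx_chn, &desc_dma)) {
		/* strip the ASEL bits before using the address on the CPU */
		k3_udma_glue_tx_cppi5_to_dma_addr(tx_chn, &desc_dma);
		/* ... look up and free the descriptor behind desc_dma ... */
	}
}

On non-PKTDMA setups (or with ASEL 0) both helpers are no-ops, so the same
client code works unchanged on UDMA and PKTDMA.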
New binding document for
Texas Instruments K3 Block Copy DMA (BCDMA).
BCDMA is introduced as part of AM64.
Signed-off-by: Peter Ujfalusi <[email protected]>
---
.../devicetree/bindings/dma/ti/k3-bcdma.yaml | 183 ++++++++++++++++++
1 file changed, 183 insertions(+)
create mode 100644 Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml
diff --git a/Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml b/Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml
new file mode 100644
index 000000000000..c84fb641738f
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml
@@ -0,0 +1,183 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/dma/ti/k3-bcdma.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Texas Instruments K3 DMSS BCDMA Device Tree Bindings
+
+maintainers:
+ - Peter Ujfalusi <[email protected]>
+
+description: |
+ The Block Copy DMA (BCDMA) is intended to perform similar functions to the
+ TR mode channels of K3 UDMA-P.
+ BCDMA includes block copy channels and split channels.
+
+ Block copy channels are mainly used for memory to memory transfers, but with
+ optional triggers a block copy channel can also service peripherals by
+ directly accessing memory mapped registers or areas.
+
+ Split channels can be used to service PSI-L based peripherals.
+ The peripherals can be PSI-L native or legacy, non PSI-L native peripherals
+ with PDMAs. The PDMA acts as a bridge between the PSI-L fabric and the
+ legacy peripheral.
+
+ PDMAs can be configured via the BCDMA split channel's peer registers to
+ match the configuration of the legacy peripheral.
+
+allOf:
+ - $ref: /schemas/dma/dma-controller.yaml#
+
+properties:
+ "#dma-cells":
+ const: 3
+ description: |
+ cell 1: type of the BCDMA channel to be used to service the peripheral:
+ 0 - split channel
+ 1 - block copy channel using global trigger 1
+ 2 - block copy channel using global trigger 2
+ 3 - block copy channel using local trigger
+
+ cell 2: parameter for the channel:
+ if cell 1 is 0 (split channel):
+ PSI-L thread ID of the remote (to BCDMA) end.
+ Valid ranges for thread ID depend on the data movement direction:
+ for source thread IDs (rx): 0 - 0x7fff
+ for destination thread IDs (tx): 0x8000 - 0xffff
+
+ Please refer to the device documentation for the PSI-L thread map and
+ also the PSI-L peripheral chapter for the correct thread ID.
+ if cell 1 is 1 or 2 (block copy channel using global trigger):
+ Unused, ignored
+
+ The trigger must be configured for the channel externally to BCDMA;
+ channels using global triggers should not be requested directly, but
+ via the DMA event router.
+ if cell 1 is 3 (block copy channel using local trigger):
+ bchan number of the locally triggered channel
+
+ cell 3: ASEL value for the channel
+
+ compatible:
+ enum:
+ - ti,am64-dmss-bcdma
+
+ "#address-cells":
+ const: 2
+
+ "#size-cells":
+ const: 2
+
+ reg:
+ maxItems: 5
+
+ reg-names:
+ items:
+ - const: gcfg
+ - const: bchanrt
+ - const: rchanrt
+ - const: tchanrt
+ - const: ringrt
+
+ msi-parent: true
+
+ ti,sci:
+ description: phandle to TI-SCI compatible System controller node
+ allOf:
+ - $ref: /schemas/types.yaml#/definitions/phandle
+
+ ti,sci-dev-id:
+ description: TI-SCI device id of BCDMA
+ allOf:
+ - $ref: /schemas/types.yaml#/definitions/uint32
+
+ ti,asel:
+ description: ASEL value for non-slave channels
+ allOf:
+ - $ref: /schemas/types.yaml#/definitions/uint32
+
+ ti,sci-rm-range-bchan:
+ description: |
+ Array of BCDMA block-copy channel resource subtypes for resource
+ allocation for this host
+ allOf:
+ - $ref: /schemas/types.yaml#/definitions/uint32-array
+ minItems: 1
+ # Should be enough
+ maxItems: 255
+
+ ti,sci-rm-range-tchan:
+ description: |
+ Array of BCDMA split tx channel resource subtypes for resource allocation
+ for this host
+ allOf:
+ - $ref: /schemas/types.yaml#/definitions/uint32-array
+ minItems: 1
+ # Should be enough
+ maxItems: 255
+
+ ti,sci-rm-range-rchan:
+ description: |
+ Array of BCDMA split rx channel resource subtypes for resource allocation
+ for this host
+ allOf:
+ - $ref: /schemas/types.yaml#/definitions/uint32-array
+ minItems: 1
+ # Should be enough
+ maxItems: 255
+
+required:
+ - compatible
+ - "#address-cells"
+ - "#size-cells"
+ - "#dma-cells"
+ - reg
+ - reg-names
+ - msi-parent
+ - ti,sci
+ - ti,sci-dev-id
+ - ti,sci-rm-range-bchan
+ - ti,sci-rm-range-tchan
+ - ti,sci-rm-range-rchan
+
+additionalProperties: false
+
+examples:
+ - |+
+ cbass_main {
+ #address-cells = <2>;
+ #size-cells = <2>;
+
+ main_dmss {
+ compatible = "simple-mfd";
+ #address-cells = <2>;
+ #size-cells = <2>;
+ dma-ranges;
+ ranges;
+
+ ti,sci-dev-id = <25>;
+
+ main_bcdma: dma-controller@485c0100 {
+ compatible = "ti,am64-dmss-bcdma";
+ #address-cells = <2>;
+ #size-cells = <2>;
+
+ reg = <0x0 0x485c0100 0x0 0x100>,
+ <0x0 0x4c000000 0x0 0x20000>,
+ <0x0 0x4a820000 0x0 0x20000>,
+ <0x0 0x4aa40000 0x0 0x20000>,
+ <0x0 0x4bc00000 0x0 0x100000>;
+ reg-names = "gcfg", "bchanrt", "rchanrt", "tchanrt", "ringrt";
+ msi-parent = <&inta_main_dmss>;
+ #dma-cells = <3>;
+
+ ti,sci = <&dmsc>;
+ ti,sci-dev-id = <26>;
+
+ ti,sci-rm-range-bchan = <0x20>; /* BLOCK_COPY_CHAN */
+ ti,sci-rm-range-rchan = <0x21>; /* SPLIT_TR_RX_CHAN */
+ ti,sci-rm-range-tchan = <0x22>; /* SPLIT_TR_TX_CHAN */
+ };
+ };
+ };
--
Peter
Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki
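To make the three dma-cells concrete from the driver side, here is a
simplified, hypothetical sketch of how such a spec could be decoded (the
real consumption happens in the k3-udma driver's xlate path):

#include <linux/errno.h>
#include <linux/of.h>
#include <linux/printk.h>

/* hypothetical, simplified decode of the three BCDMA dma-cells */
static int bcdma_decode_dma_spec(const struct of_phandle_args *spec)
{
	if (spec->args_count != 3)
		return -EINVAL;

	/* cell 3: ASEL (per channel coherency select) */
	pr_debug("asel: %u\n", spec->args[2]);

	switch (spec->args[0]) {	/* cell 1: channel type */
	case 0:	/* split channel: cell 2 is the remote PSI-L thread ID */
	case 3:	/* local trigger: cell 2 is the triggering bchan number */
		pr_debug("param: 0x%x\n", spec->args[1]);
		return 0;
	case 1:
	case 2:	/* global trigger: cell 2 unused, set up via event router */
		return 0;
	default:
		return -EINVAL;
	}
}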
Resource allocation via sysfw can use up to two ranges per resource subtype
to support more complex resource assignment, mainly for DMA channels.
Take the second range into consideration as well when setting up the
maps of available resources.
Signed-off-by: Peter Ujfalusi <[email protected]>
---
drivers/dma/ti/k3-udma.c | 55 ++++++++++++++++++++++------------------
1 file changed, 31 insertions(+), 24 deletions(-)
diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
index 3f798993a8b1..1ae5d09e2059 100644
--- a/drivers/dma/ti/k3-udma.c
+++ b/drivers/dma/ti/k3-udma.c
@@ -3179,12 +3179,22 @@ static int udma_get_mmrs(struct platform_device *pdev, struct udma_dev *ud)
return 0;
}
+static void udma_mark_resource_ranges(struct udma_dev *ud, unsigned long *map,
+ struct ti_sci_resource_desc *rm_desc,
+ char *name)
+{
+ bitmap_clear(map, rm_desc->start, rm_desc->num);
+ bitmap_clear(map, rm_desc->start_sec, rm_desc->num_sec);
+ dev_dbg(ud->dev, "ti_sci resource range for %s: %d:%d | %d:%d\n", name,
+ rm_desc->start, rm_desc->num, rm_desc->start_sec,
+ rm_desc->num_sec);
+}
+
static int udma_setup_resources(struct udma_dev *ud)
{
struct device *dev = ud->dev;
int ch_count, ret, i, j;
u32 cap2, cap3;
- struct ti_sci_resource_desc *rm_desc;
struct ti_sci_resource *rm_res, irq_res;
struct udma_tisci_rm *tisci_rm = &ud->tisci_rm;
static const char * const range_names[] = { "ti,sci-rm-range-tchan",
@@ -3270,13 +3280,9 @@ static int udma_setup_resources(struct udma_dev *ud)
bitmap_zero(ud->tchan_map, ud->tchan_cnt);
} else {
bitmap_fill(ud->tchan_map, ud->tchan_cnt);
- for (i = 0; i < rm_res->sets; i++) {
- rm_desc = &rm_res->desc[i];
- bitmap_clear(ud->tchan_map, rm_desc->start,
- rm_desc->num);
- dev_dbg(dev, "ti-sci-res: tchan: %d:%d\n",
- rm_desc->start, rm_desc->num);
- }
+ for (i = 0; i < rm_res->sets; i++)
+ udma_mark_resource_ranges(ud, ud->tchan_map,
+ &rm_res->desc[i], "tchan");
}
irq_res.sets = rm_res->sets;
@@ -3286,13 +3292,9 @@ static int udma_setup_resources(struct udma_dev *ud)
bitmap_zero(ud->rchan_map, ud->rchan_cnt);
} else {
bitmap_fill(ud->rchan_map, ud->rchan_cnt);
- for (i = 0; i < rm_res->sets; i++) {
- rm_desc = &rm_res->desc[i];
- bitmap_clear(ud->rchan_map, rm_desc->start,
- rm_desc->num);
- dev_dbg(dev, "ti-sci-res: rchan: %d:%d\n",
- rm_desc->start, rm_desc->num);
- }
+ for (i = 0; i < rm_res->sets; i++)
+ udma_mark_resource_ranges(ud, ud->rchan_map,
+ &rm_res->desc[i], "rchan");
}
irq_res.sets += rm_res->sets;
@@ -3301,12 +3303,21 @@ static int udma_setup_resources(struct udma_dev *ud)
for (i = 0; i < rm_res->sets; i++) {
irq_res.desc[i].start = rm_res->desc[i].start;
irq_res.desc[i].num = rm_res->desc[i].num;
+ irq_res.desc[i].start_sec = rm_res->desc[i].start_sec;
+ irq_res.desc[i].num_sec = rm_res->desc[i].num_sec;
}
rm_res = tisci_rm->rm_ranges[RM_RANGE_RCHAN];
for (j = 0; j < rm_res->sets; j++, i++) {
- irq_res.desc[i].start = rm_res->desc[j].start +
+ if (rm_res->desc[j].num) {
+ irq_res.desc[i].start = rm_res->desc[j].start +
ud->soc_data->rchan_oes_offset;
- irq_res.desc[i].num = rm_res->desc[j].num;
+ irq_res.desc[i].num = rm_res->desc[j].num;
+ }
+ if (rm_res->desc[j].num_sec) {
+ irq_res.desc[i].start_sec = rm_res->desc[j].start_sec +
+ ud->soc_data->rchan_oes_offset;
+ irq_res.desc[i].num_sec = rm_res->desc[j].num_sec;
+ }
}
ret = ti_sci_inta_msi_domain_alloc_irqs(ud->dev, &irq_res);
kfree(irq_res.desc);
@@ -3322,13 +3333,9 @@ static int udma_setup_resources(struct udma_dev *ud)
bitmap_clear(ud->rflow_gp_map, ud->rchan_cnt,
ud->rflow_cnt - ud->rchan_cnt);
} else {
- for (i = 0; i < rm_res->sets; i++) {
- rm_desc = &rm_res->desc[i];
- bitmap_clear(ud->rflow_gp_map, rm_desc->start,
- rm_desc->num);
- dev_dbg(dev, "ti-sci-res: rflow: %d:%d\n",
- rm_desc->start, rm_desc->num);
- }
+ for (i = 0; i < rm_res->sets; i++)
+ udma_mark_resource_ranges(ud, ud->rflow_gp_map,
+ &rm_res->desc[i], "gp-rflow");
}
ch_count -= bitmap_weight(ud->tchan_map, ud->tchan_cnt);
--
Peter
Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki
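The effect of udma_mark_resource_ranges() on the resource maps boils down
to clearing both ranges in a presumed-busy bitmap; a standalone
illustration with made-up numbers:

#include <linux/bitmap.h>

/* made-up example: 32 tchans, two ranges assigned to Linux by sysfw */
static DECLARE_BITMAP(tchan_map, 32);

static void example_mark_ranges(void)
{
	bitmap_fill(tchan_map, 32);	/* start out with everything in use */

	/* primary range: start = 4, num = 4 -> ids 4..7 usable */
	bitmap_clear(tchan_map, 4, 4);
	/* secondary range: start_sec = 20, num_sec = 2 -> ids 20..21 usable */
	bitmap_clear(tchan_map, 20, 2);

	/* __udma_reserve_tchan() can now hand out ids 4..7 and 20..21 */
}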
Glue layer users should use the device of the DMA for DMA mapping and
allocations, as it is the DMA which accesses the descriptors and buffers,
not the client device.
Signed-off-by: Peter Ujfalusi <[email protected]>
---
drivers/dma/ti/k3-udma-glue.c | 14 ++++++++++++++
drivers/dma/ti/k3-udma-private.c | 6 ++++++
drivers/dma/ti/k3-udma.h | 1 +
include/linux/dma/k3-udma-glue.h | 4 ++++
4 files changed, 25 insertions(+)
diff --git a/drivers/dma/ti/k3-udma-glue.c b/drivers/dma/ti/k3-udma-glue.c
index a367584f0d7b..a53bc4707ae8 100644
--- a/drivers/dma/ti/k3-udma-glue.c
+++ b/drivers/dma/ti/k3-udma-glue.c
@@ -487,6 +487,13 @@ int k3_udma_glue_tx_get_irq(struct k3_udma_glue_tx_channel *tx_chn)
}
EXPORT_SYMBOL_GPL(k3_udma_glue_tx_get_irq);
+struct device *
+ k3_udma_glue_tx_get_dma_device(struct k3_udma_glue_tx_channel *tx_chn)
+{
+ return xudma_get_device(tx_chn->common.udmax);
+}
+EXPORT_SYMBOL_GPL(k3_udma_glue_tx_get_dma_device);
+
static int k3_udma_glue_cfg_rx_chn(struct k3_udma_glue_rx_channel *rx_chn)
{
const struct udma_tisci_rm *tisci_rm = rx_chn->common.tisci_rm;
@@ -1189,3 +1196,10 @@ int k3_udma_glue_rx_get_irq(struct k3_udma_glue_rx_channel *rx_chn,
return flow->virq;
}
EXPORT_SYMBOL_GPL(k3_udma_glue_rx_get_irq);
+
+struct device *
+ k3_udma_glue_rx_get_dma_device(struct k3_udma_glue_rx_channel *rx_chn)
+{
+ return xudma_get_device(rx_chn->common.udmax);
+}
+EXPORT_SYMBOL_GPL(k3_udma_glue_rx_get_dma_device);
diff --git a/drivers/dma/ti/k3-udma-private.c b/drivers/dma/ti/k3-udma-private.c
index aa24e554f7b4..8ff7a264be03 100644
--- a/drivers/dma/ti/k3-udma-private.c
+++ b/drivers/dma/ti/k3-udma-private.c
@@ -50,6 +50,12 @@ struct udma_dev *of_xudma_dev_get(struct device_node *np, const char *property)
}
EXPORT_SYMBOL(of_xudma_dev_get);
+struct device *xudma_get_device(struct udma_dev *ud)
+{
+ return ud->dev;
+}
+EXPORT_SYMBOL(xudma_get_device);
+
u32 xudma_dev_get_psil_base(struct udma_dev *ud)
{
return ud->psil_base;
diff --git a/drivers/dma/ti/k3-udma.h b/drivers/dma/ti/k3-udma.h
index 09c4529e013d..d1cace0cb43b 100644
--- a/drivers/dma/ti/k3-udma.h
+++ b/drivers/dma/ti/k3-udma.h
@@ -112,6 +112,7 @@ int xudma_navss_psil_unpair(struct udma_dev *ud, u32 src_thread,
u32 dst_thread);
struct udma_dev *of_xudma_dev_get(struct device_node *np, const char *property);
+struct device *xudma_get_device(struct udma_dev *ud);
void xudma_dev_put(struct udma_dev *ud);
u32 xudma_dev_get_psil_base(struct udma_dev *ud);
struct udma_tisci_rm *xudma_dev_get_tisci_rm(struct udma_dev *ud);
diff --git a/include/linux/dma/k3-udma-glue.h b/include/linux/dma/k3-udma-glue.h
index 5eb34ad973a7..d7c12f31377c 100644
--- a/include/linux/dma/k3-udma-glue.h
+++ b/include/linux/dma/k3-udma-glue.h
@@ -41,6 +41,8 @@ void k3_udma_glue_reset_tx_chn(struct k3_udma_glue_tx_channel *tx_chn,
u32 k3_udma_glue_tx_get_hdesc_size(struct k3_udma_glue_tx_channel *tx_chn);
u32 k3_udma_glue_tx_get_txcq_id(struct k3_udma_glue_tx_channel *tx_chn);
int k3_udma_glue_tx_get_irq(struct k3_udma_glue_tx_channel *tx_chn);
+struct device *
+ k3_udma_glue_tx_get_dma_device(struct k3_udma_glue_tx_channel *tx_chn);
enum {
K3_UDMA_GLUE_SRC_TAG_LO_KEEP = 0,
@@ -130,5 +132,7 @@ int k3_udma_glue_rx_flow_enable(struct k3_udma_glue_rx_channel *rx_chn,
u32 flow_idx);
int k3_udma_glue_rx_flow_disable(struct k3_udma_glue_rx_channel *rx_chn,
u32 flow_idx);
+struct device *
+ k3_udma_glue_rx_get_dma_device(struct k3_udma_glue_rx_channel *rx_chn);
#endif /* K3_UDMA_GLUE_H_ */
--
Peter
Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki
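A minimal sketch of the intended client usage follows; the pool size and
error handling are illustrative only, k3_udma_glue_tx_get_dma_device() is
the new API:

#include <linux/dma-mapping.h>
#include <linux/dma/k3-udma-glue.h>

/* hypothetical client init: allocate a descriptor pool against the DMA */
static int client_alloc_desc_pool(struct k3_udma_glue_tx_channel *tx_chn,
				  size_t pool_size, void **cpu_addr,
				  dma_addr_t *dma_addr)
{
	/* the DMA device, not the client device, touches this memory */
	struct device *dma_dev = k3_udma_glue_tx_get_dma_device(tx_chn);

	*cpu_addr = dma_alloc_coherent(dma_dev, pool_size, dma_addr,
				       GFP_KERNEL);

	return *cpu_addr ? 0 : -ENOMEM;
}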
Unlike UDMAP, BCDMA defines the channel TPL (Throughput Level) levels per
channel type.
In UDMAP the number of high and ultra-high channels applies to both tchan
and rchan.
BCDMA defines the TPL per channel type: bchan, tchan and rchan can have
different numbers of high and ultra-high channels.
In order to support the BCDMA channel TPL we need to move the TPL
information to a per channel type property for the DMAs.
Signed-off-by: Peter Ujfalusi <[email protected]>
---
drivers/dma/ti/k3-udma.c | 122 ++++++++++++++++++++++++++-------------
drivers/dma/ti/k3-udma.h | 6 ++
2 files changed, 89 insertions(+), 39 deletions(-)
diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
index a342e89a4bae..69f2c43354d4 100644
--- a/drivers/dma/ti/k3-udma.c
+++ b/drivers/dma/ti/k3-udma.c
@@ -146,6 +146,11 @@ struct udma_rx_flush {
dma_addr_t buffer_paddr;
};
+struct udma_tpl {
+ u8 levels;
+ u32 start_idx[3];
+};
+
struct udma_dev {
struct dma_device ddev;
struct device *dev;
@@ -153,8 +158,9 @@ struct udma_dev {
const struct udma_match_data *match_data;
const struct udma_soc_data *soc_data;
- u8 tpl_levels;
- u32 tpl_start_idx[3];
+ struct udma_tpl bchan_tpl;
+ struct udma_tpl tchan_tpl;
+ struct udma_tpl rchan_tpl;
size_t desc_align; /* alignment to use for descriptors */
@@ -1278,10 +1284,10 @@ static struct udma_##res *__udma_reserve_##res(struct udma_dev *ud, \
} else { \
int start; \
\
- if (tpl >= ud->tpl_levels) \
- tpl = ud->tpl_levels - 1; \
+ if (tpl >= ud->res##_tpl.levels) \
+ tpl = ud->res##_tpl.levels - 1; \
\
- start = ud->tpl_start_idx[tpl]; \
+ start = ud->res##_tpl.start_idx[tpl]; \
\
id = find_next_zero_bit(ud->res##_map, ud->res##_cnt, \
start); \
@@ -1294,29 +1300,14 @@ static struct udma_##res *__udma_reserve_##res(struct udma_dev *ud, \
return &ud->res##s[id]; \
}
+UDMA_RESERVE_RESOURCE(bchan);
UDMA_RESERVE_RESOURCE(tchan);
UDMA_RESERVE_RESOURCE(rchan);
-static struct udma_bchan *__bcdma_reserve_bchan(struct udma_dev *ud, int id)
-{
- if (id >= 0) {
- if (test_bit(id, ud->bchan_map)) {
- dev_err(ud->dev, "bchan%d is in use\n", id);
- return ERR_PTR(-ENOENT);
- }
- } else {
- id = find_next_zero_bit(ud->bchan_map, ud->bchan_cnt, 0);
- if (id == ud->bchan_cnt)
- return ERR_PTR(-ENOENT);
- }
-
- set_bit(id, ud->bchan_map);
- return &ud->bchans[id];
-}
-
static int bcdma_get_bchan(struct udma_chan *uc)
{
struct udma_dev *ud = uc->ud;
+ enum udma_tp_level tpl;
if (uc->bchan) {
dev_dbg(ud->dev, "chan%d: already have bchan%d allocated\n",
@@ -1324,7 +1315,16 @@ static int bcdma_get_bchan(struct udma_chan *uc)
return 0;
}
- uc->bchan = __bcdma_reserve_bchan(ud, -1);
+ /*
+ * Use normal channels for peripherals, and highest TPL channel for
+ * mem2mem
+ */
+ if (uc->config.tr_trigger_type)
+ tpl = 0;
+ else
+ tpl = ud->bchan_tpl.levels - 1;
+
+ uc->bchan = __udma_reserve_bchan(ud, tpl, -1);
if (IS_ERR(uc->bchan))
return PTR_ERR(uc->bchan);
@@ -1386,8 +1386,11 @@ static int udma_get_chan_pair(struct udma_chan *uc)
/* Can be optimized, but let's have it like this for now */
end = min(ud->tchan_cnt, ud->rchan_cnt);
- /* Try to use the highest TPL channel pair for MEM_TO_MEM channels */
- chan_id = ud->tpl_start_idx[ud->tpl_levels - 1];
+ /*
+ * Try to use the highest TPL channel pair for MEM_TO_MEM channels
+ * Note: in UDMAP the channel TPL is symmetric between tchan and rchan
+ */
+ chan_id = ud->tchan_tpl.start_idx[ud->tchan_tpl.levels - 1];
for (; chan_id < end; chan_id++) {
if (!test_bit(chan_id, ud->tchan_map) &&
!test_bit(chan_id, ud->rchan_map))
@@ -4043,24 +4046,28 @@ static int udma_setup_resources(struct udma_dev *ud)
cap3 = udma_read(ud->mmrs[MMR_GCFG], 0x2c);
if (of_device_is_compatible(dev->of_node,
"ti,am654-navss-main-udmap")) {
- ud->tpl_levels = 2;
- ud->tpl_start_idx[0] = 8;
+ ud->tchan_tpl.levels = 2;
+ ud->tchan_tpl.start_idx[0] = 8;
} else if (of_device_is_compatible(dev->of_node,
"ti,am654-navss-mcu-udmap")) {
- ud->tpl_levels = 2;
- ud->tpl_start_idx[0] = 2;
+ ud->tchan_tpl.levels = 2;
+ ud->tchan_tpl.start_idx[0] = 2;
} else if (UDMA_CAP3_UCHAN_CNT(cap3)) {
- ud->tpl_levels = 3;
- ud->tpl_start_idx[1] = UDMA_CAP3_UCHAN_CNT(cap3);
- ud->tpl_start_idx[0] = ud->tpl_start_idx[1] +
- UDMA_CAP3_HCHAN_CNT(cap3);
+ ud->tchan_tpl.levels = 3;
+ ud->tchan_tpl.start_idx[1] = UDMA_CAP3_UCHAN_CNT(cap3);
+ ud->tchan_tpl.start_idx[0] = ud->tchan_tpl.start_idx[1] +
+ UDMA_CAP3_HCHAN_CNT(cap3);
} else if (UDMA_CAP3_HCHAN_CNT(cap3)) {
- ud->tpl_levels = 2;
- ud->tpl_start_idx[0] = UDMA_CAP3_HCHAN_CNT(cap3);
+ ud->tchan_tpl.levels = 2;
+ ud->tchan_tpl.start_idx[0] = UDMA_CAP3_HCHAN_CNT(cap3);
} else {
- ud->tpl_levels = 1;
+ ud->tchan_tpl.levels = 1;
}
+ ud->rchan_tpl.levels = ud->tchan_tpl.levels;
+ ud->rchan_tpl.start_idx[0] = ud->tchan_tpl.start_idx[0];
+ ud->rchan_tpl.start_idx[1] = ud->tchan_tpl.start_idx[1];
+
ud->tchan_map = devm_kmalloc_array(dev, BITS_TO_LONGS(ud->tchan_cnt),
sizeof(unsigned long), GFP_KERNEL);
ud->tchans = devm_kcalloc(dev, ud->tchan_cnt, sizeof(*ud->tchans),
@@ -4182,6 +4189,46 @@ static int bcdma_setup_resources(struct udma_dev *ud)
struct ti_sci_resource *rm_res, irq_res;
struct udma_tisci_rm *tisci_rm = &ud->tisci_rm;
const struct udma_oes_offsets *oes = &ud->soc_data->oes;
+ u32 cap;
+
+ /* Set up the throughput level start indexes */
+ cap = udma_read(ud->mmrs[MMR_GCFG], 0x2c);
+ if (BCDMA_CAP3_UBCHAN_CNT(cap)) {
+ ud->bchan_tpl.levels = 3;
+ ud->bchan_tpl.start_idx[1] = BCDMA_CAP3_UBCHAN_CNT(cap);
+ ud->bchan_tpl.start_idx[0] = ud->bchan_tpl.start_idx[1] +
+ BCDMA_CAP3_HBCHAN_CNT(cap);
+ } else if (BCDMA_CAP3_HBCHAN_CNT(cap)) {
+ ud->bchan_tpl.levels = 2;
+ ud->bchan_tpl.start_idx[0] = BCDMA_CAP3_HBCHAN_CNT(cap);
+ } else {
+ ud->bchan_tpl.levels = 1;
+ }
+
+ cap = udma_read(ud->mmrs[MMR_GCFG], 0x30);
+ if (BCDMA_CAP4_URCHAN_CNT(cap)) {
+ ud->rchan_tpl.levels = 3;
+ ud->rchan_tpl.start_idx[1] = BCDMA_CAP4_URCHAN_CNT(cap);
+ ud->rchan_tpl.start_idx[0] = ud->rchan_tpl.start_idx[1] +
+ BCDMA_CAP4_HRCHAN_CNT(cap);
+ } else if (BCDMA_CAP4_HRCHAN_CNT(cap)) {
+ ud->rchan_tpl.levels = 2;
+ ud->rchan_tpl.start_idx[0] = BCDMA_CAP4_HRCHAN_CNT(cap);
+ } else {
+ ud->rchan_tpl.levels = 1;
+ }
+
+ if (BCDMA_CAP4_UTCHAN_CNT(cap)) {
+ ud->tchan_tpl.levels = 3;
+ ud->tchan_tpl.start_idx[1] = BCDMA_CAP4_UTCHAN_CNT(cap);
+ ud->tchan_tpl.start_idx[0] = ud->tchan_tpl.start_idx[1] +
+ BCDMA_CAP4_HTCHAN_CNT(cap);
+ } else if (BCDMA_CAP4_HTCHAN_CNT(cap)) {
+ ud->tchan_tpl.levels = 2;
+ ud->tchan_tpl.start_idx[0] = BCDMA_CAP4_HTCHAN_CNT(cap);
+ } else {
+ ud->tchan_tpl.levels = 1;
+ }
ud->bchan_map = devm_kmalloc_array(dev, BITS_TO_LONGS(ud->bchan_cnt),
sizeof(unsigned long), GFP_KERNEL);
@@ -4207,9 +4254,6 @@ static int bcdma_setup_resources(struct udma_dev *ud)
!ud->rflows)
return -ENOMEM;
- /* TPL is not yet supported for BCDMA */
- ud->tpl_levels = 1;
-
/* Get resource ranges from tisci */
for (i = 0; i < RM_RANGE_LAST; i++) {
if (i == RM_RANGE_RFLOW)
diff --git a/drivers/dma/ti/k3-udma.h b/drivers/dma/ti/k3-udma.h
index bf78ad94354a..8cb32681afaf 100644
--- a/drivers/dma/ti/k3-udma.h
+++ b/drivers/dma/ti/k3-udma.h
@@ -48,6 +48,12 @@
#define BCDMA_CAP2_BCHAN_CNT(val) ((val) & 0x1ff)
#define BCDMA_CAP2_TCHAN_CNT(val) (((val) >> 9) & 0x1ff)
#define BCDMA_CAP2_RCHAN_CNT(val) (((val) >> 18) & 0x1ff)
+#define BCDMA_CAP3_HBCHAN_CNT(val) (((val) >> 14) & 0x1ff)
+#define BCDMA_CAP3_UBCHAN_CNT(val) (((val) >> 23) & 0x1ff)
+#define BCDMA_CAP4_HRCHAN_CNT(val) ((val) & 0xff)
+#define BCDMA_CAP4_URCHAN_CNT(val) (((val) >> 8) & 0xff)
+#define BCDMA_CAP4_HTCHAN_CNT(val) (((val) >> 16) & 0xff)
+#define BCDMA_CAP4_UTCHAN_CNT(val) (((val) >> 24) & 0xff)
/* UDMA_CHAN_RT_CTL_REG */
#define UDMA_CHAN_RT_CTL_EN BIT(31)
--
Peter
Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki
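The start_idx bookkeeping is easier to see with numbers; a made-up example
for a channel type with 2 ultra-high and 6 high channels out of 32, using
the udma_tpl struct introduced above:

/*
 * Made-up example: 2 UHC + 6 HC channels out of 32 total.
 * The ids are laid out UHC first, then HC, then normal:
 *   ids  0..1  -> ultra-high, searched from start_idx[2] = 0
 *   ids  2..7  -> high,       searched from start_idx[1] = 2
 *   ids  8..31 -> normal,     searched from start_idx[0] = 2 + 6 = 8
 */
static const struct udma_tpl example_tpl = {
	.levels = 3,
	.start_idx = { 8, 2 },	/* start_idx[2] stays 0 */
};

__udma_reserve_*() clamps the requested level to levels - 1 and searches
the bitmap upward from start_idx[tpl], which is why bcdma_get_bchan()
picks levels - 1 for mem2mem channels to land in the UHC region first.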
Add initial PSI-L map file for AM64.
Signed-off-by: Peter Ujfalusi <[email protected]>
---
drivers/dma/ti/Makefile | 3 +-
drivers/dma/ti/k3-psil-am64.c | 75 +++++++++++++++++++++++++++++++++++
drivers/dma/ti/k3-psil-priv.h | 1 +
drivers/dma/ti/k3-psil.c | 1 +
4 files changed, 79 insertions(+), 1 deletion(-)
create mode 100644 drivers/dma/ti/k3-psil-am64.c
diff --git a/drivers/dma/ti/Makefile b/drivers/dma/ti/Makefile
index 0c67254caee6..bd496efadff7 100644
--- a/drivers/dma/ti/Makefile
+++ b/drivers/dma/ti/Makefile
@@ -7,5 +7,6 @@ obj-$(CONFIG_TI_K3_UDMA_GLUE_LAYER) += k3-udma-glue.o
obj-$(CONFIG_TI_K3_PSIL) += k3-psil.o \
k3-psil-am654.o \
k3-psil-j721e.o \
- k3-psil-j7200.o
+ k3-psil-j7200.o \
+ k3-psil-am64.o
obj-$(CONFIG_TI_DMA_CROSSBAR) += dma-crossbar.o
diff --git a/drivers/dma/ti/k3-psil-am64.c b/drivers/dma/ti/k3-psil-am64.c
new file mode 100644
index 000000000000..e88f57a36ac1
--- /dev/null
+++ b/drivers/dma/ti/k3-psil-am64.c
@@ -0,0 +1,75 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020 Texas Instruments Incorporated - https://www.ti.com
+ * Author: Peter Ujfalusi <[email protected]>
+ */
+
+#include <linux/kernel.h>
+
+#include "k3-psil-priv.h"
+
+#define PSIL_ETHERNET(x, ch, flow_base, flow_cnt) \
+ { \
+ .thread_id = x, \
+ .ep_config = { \
+ .ep_type = PSIL_EP_NATIVE, \
+ .pkt_mode = 1, \
+ .needs_epib = 1, \
+ .psd_size = 16, \
+ .mapped_channel_id = ch, \
+ .flow_start = flow_base, \
+ .flow_num = flow_cnt, \
+ .default_flow_id = flow_base, \
+ }, \
+ }
+
+#define PSIL_SAUL(x, ch, flow_base, flow_cnt, default_flow, tx) \
+ { \
+ .thread_id = x, \
+ .ep_config = { \
+ .ep_type = PSIL_EP_NATIVE, \
+ .pkt_mode = 1, \
+ .needs_epib = 1, \
+ .psd_size = 64, \
+ .mapped_channel_id = ch, \
+ .flow_start = flow_base, \
+ .flow_num = flow_cnt, \
+ .default_flow_id = default_flow, \
+ .notdpkt = tx, \
+ }, \
+ }
+
+/* PSI-L source thread IDs, used for RX (DMA_DEV_TO_MEM) */
+static struct psil_ep am64_src_ep_map[] = {
+ /* SA2UL */
+ PSIL_SAUL(0x4000, 17, 32, 8, 32, 0),
+ PSIL_SAUL(0x4001, 18, 32, 8, 33, 0),
+ PSIL_SAUL(0x4002, 19, 40, 8, 40, 0),
+ PSIL_SAUL(0x4003, 20, 40, 8, 41, 0),
+ /* CPSW3G */
+ PSIL_ETHERNET(0x4500, 16, 16, 16),
+};
+
+/* PSI-L destination thread IDs, used for TX (DMA_MEM_TO_DEV) */
+static struct psil_ep am64_dst_ep_map[] = {
+ /* SA2UL */
+ PSIL_SAUL(0xc000, 24, 80, 8, 80, 1),
+ PSIL_SAUL(0xc001, 25, 88, 8, 88, 1),
+ /* CPSW3G */
+ PSIL_ETHERNET(0xc500, 16, 16, 8),
+ PSIL_ETHERNET(0xc501, 17, 24, 8),
+ PSIL_ETHERNET(0xc502, 18, 32, 8),
+ PSIL_ETHERNET(0xc503, 19, 40, 8),
+ PSIL_ETHERNET(0xc504, 20, 48, 8),
+ PSIL_ETHERNET(0xc505, 21, 56, 8),
+ PSIL_ETHERNET(0xc506, 22, 64, 8),
+ PSIL_ETHERNET(0xc507, 23, 72, 8),
+};
+
+struct psil_ep_map am64_ep_map = {
+ .name = "am64",
+ .src = am64_src_ep_map,
+ .src_count = ARRAY_SIZE(am64_src_ep_map),
+ .dst = am64_dst_ep_map,
+ .dst_count = ARRAY_SIZE(am64_dst_ep_map),
+};
diff --git a/drivers/dma/ti/k3-psil-priv.h b/drivers/dma/ti/k3-psil-priv.h
index b4b0fb359eff..b74e192e3c2d 100644
--- a/drivers/dma/ti/k3-psil-priv.h
+++ b/drivers/dma/ti/k3-psil-priv.h
@@ -40,5 +40,6 @@ struct psil_endpoint_config *psil_get_ep_config(u32 thread_id);
extern struct psil_ep_map am654_ep_map;
extern struct psil_ep_map j721e_ep_map;
extern struct psil_ep_map j7200_ep_map;
+extern struct psil_ep_map am64_ep_map;
#endif /* K3_PSIL_PRIV_H_ */
diff --git a/drivers/dma/ti/k3-psil.c b/drivers/dma/ti/k3-psil.c
index 837853aab95a..9da3027a6fbb 100644
--- a/drivers/dma/ti/k3-psil.c
+++ b/drivers/dma/ti/k3-psil.c
@@ -20,6 +20,7 @@ static const struct soc_device_attribute k3_soc_devices[] = {
{ .family = "AM65X", .data = &am654_ep_map },
{ .family = "J721E", .data = &j721e_ep_map },
{ .family = "J7200", .data = &j7200_ep_map },
+ { .family = "AM64", .data = &am64_ep_map },
{ /* sentinel */ }
};
--
Peter
Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki
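To show what the map provides to the DMA drivers, a lookup sketch;
psil_get_ep_config() is the existing accessor and the values in the
comment come from the PSIL_ETHERNET(0x4500, 16, 16, 16) entry above:

#include <linux/err.h>
#include <linux/printk.h>

#include "k3-psil-priv.h"

/* illustrative only: look up the CPSW3G rx endpoint from the new map */
static void am64_dump_cpsw_rx_ep(void)
{
	struct psil_endpoint_config *ep = psil_get_ep_config(0x4500);

	if (IS_ERR(ep))
		return;

	/* expect: mapped channel 16, flows 16..31, default flow 16 */
	pr_info("rchan %d, flows %u..%u, default flow %d\n",
		ep->mapped_channel_id, ep->flow_start,
		ep->flow_start + ep->flow_num - 1, ep->default_flow_id);
}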
Hi,
On 30/09/2020 12.13, Peter Ujfalusi wrote:
> Hi,
for some reason I missed adding Grygorii to the To list, sorry.
Grygorii: the series in lore:
https://lore.kernel.org/lkml/[email protected]/
> The series have build dependency on ti_sci/soc series (v1):
> https://lore.kernel.org/lkml/[email protected]/
>
> The unmapped event handling in INTA is also needed, but it is not a build
> dependency (v2):
> https://lore.kernel.org/lkml/[email protected]/
>
> The DMSS introduced within AM64 as a simplified Data movement engine is built
> on similar grounds as the K3 NAVSS and UDMAP, but with significant architectural
> changes.
>
> - Rings are built into the DMAs
> The DMAs no longer use the general purpose ringacc, all rings has been moved
> inside of the DMAs. The new rings within the DMAs are simplified to be dual
> directional compared to the uni-directional rings in ringacc.
> There is no more of a concept of generic purpose rings, all rings are assigned
> to specific channels or flows.
>
> - Per channel coherency support
> The DMAs use the 'ASEL' bits to select data and configuration fetch path. The
> ASEL bits are placed at the unused parts of any address field used by the
> DMAs (pointers to descriptors, addresses in descriptors, ring base addresses).
> The ASEL is not part of the address (the DMAs can address 48bits).
> Individual channels can be configured to be coherent (via ACP port) or non
> coherent individually by configuring the ASEL to appropriate value.
>
> - Two different DMAs (well, three actually)
> PKTDMA
> Similar to UDMAP channels configured in packet mode.
> The flow configuration of the channels has changed significantly in a way that
> each channel have at least one flow assigned at design time and each flow is
> directly mapped to corresponding ring.
> When multiple flows are set, the channel can only use the flows within it's
> assigned range.
> PKTDMA also introduced multiple tflows which did not existed in UDMAP.
>
> BCDMA
> It has two types of channels:
> - split channels (tchan/rchan): Similar to UDMAP channels configured in TR mode.
> - Block copy channels (bchan): Similar to EDMA or traditional DMA channels, they
> can be used for mem2mem type of transfers or to service peripherals not
> accessible via PSI-L by using external triggers for the TR.
> BCDMA channels do not have support for multiple flows
>
> With the introduction of the new DMAs (especially the BCDMA) we also need to
> update the resource manager code to support the second range from sysfw for
> UDMA channels.
>
> The two outstanding change in the series in my view is
> the handling of the DMAs sideband signal of ASEL to select path to provide
> coherency or non coherency.
>
> the smaller one is the device_router_config callback to allow the configuration
> of the triggers when BCDMA is servicing a triggering peripheral to solve a
> chicken-egg situation:
> The router needs to know the event number to send which in turn depends on the
> channel we got for servicing the peripheral.
>
> I'm sending this series as early as possible to have time for review and
> changes.
>
> When all things resolved, it would be nice if Santosh could create an immutable
> branch with the ti_sci/soc patches for Vinod to use for this series.
>
> Regards,
> Peter
> ---
> Grygorii Strashko (1):
> soc: ti: k3-ringacc: add AM64 DMA rings support.
>
> Peter Ujfalusi (16):
> dmaengine: of-dma: Add support for optional router configuration
> callback
> dmaengine: Add support for per channel coherency handling
> dmaengine: doc: client: Update for dmaengine_get_dma_device() usage
> dmaengine: dmatest: Use dmaengine_get_dma_device
> dmaengine: ti: k3-udma: Wait for peer teardown completion if supported
> dmaengine: ti: k3-udma: Add support for second resource range from
> sysfw
> dmaengine: ti: k3-udma-glue: Add function to get device pointer for
> DMA API
> dmaengine: ti: k3-udma-glue: Configure the dma_dev for rings
> dt-bindings: dma: ti: Add document for K3 BCDMA
> dt-bindings: dma: ti: Add document for K3 PKTDMA
> dmaengine: ti: k3-psil: Extend psil_endpoint_config for K3 PKTDMA
> dmaengine: ti: k3-psil: Add initial map for AM64
> dmaengine: ti: Add support for k3 event routers
> dmaengine: ti: k3-udma: Initial support for K3 BCDMA
> dmaengine: ti: k3-udma: Add support for BCDMA channel TPL handling
> dmaengine: ti: k3-udma: Initial support for K3 PKTDMA
>
> Vignesh Raghavendra (1):
> dmaengine: ti: k3-udma-glue: Add support for K3 PKTDMA
>
> .../devicetree/bindings/dma/ti/k3-bcdma.yaml | 183 ++
> .../devicetree/bindings/dma/ti/k3-pktdma.yaml | 189 ++
> Documentation/driver-api/dmaengine/client.rst | 4 +-
> drivers/dma/dmatest.c | 13 +-
> drivers/dma/of-dma.c | 10 +
> drivers/dma/ti/Makefile | 3 +-
> drivers/dma/ti/k3-psil-am64.c | 75 +
> drivers/dma/ti/k3-psil-priv.h | 1 +
> drivers/dma/ti/k3-psil.c | 1 +
> drivers/dma/ti/k3-udma-glue.c | 294 ++-
> drivers/dma/ti/k3-udma-private.c | 39 +
> drivers/dma/ti/k3-udma.c | 1975 +++++++++++++++--
> drivers/dma/ti/k3-udma.h | 27 +-
> drivers/soc/ti/k3-ringacc.c | 325 ++-
> include/linux/dma/k3-event-router.h | 16 +
> include/linux/dma/k3-psil.h | 16 +
> include/linux/dma/k3-udma-glue.h | 12 +
> include/linux/dmaengine.h | 14 +
> include/linux/soc/ti/k3-ringacc.h | 17 +
> 19 files changed, 2994 insertions(+), 220 deletions(-)
> create mode 100644 Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml
> create mode 100644 Documentation/devicetree/bindings/dma/ti/k3-pktdma.yaml
> create mode 100644 drivers/dma/ti/k3-psil-am64.c
> create mode 100644 include/linux/dma/k3-event-router.h
>
- Péter
Hi Rob,
On 30/09/2020 12.14, Peter Ujfalusi wrote:
> New binding document for
> Texas Instruments K3 Block Copy DMA (BCDMA).
>
> BCDMA is introduced as part of AM64.
>
> Signed-off-by: Peter Ujfalusi <[email protected]>
> ---
> .../devicetree/bindings/dma/ti/k3-bcdma.yaml | 183 ++++++++++++++++++
> 1 file changed, 183 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml
>
> diff --git a/Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml b/Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml
> new file mode 100644
> index 000000000000..c84fb641738f
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml
> @@ -0,0 +1,183 @@
...
> + compatible:
> + enum:
> + - ti,am64-dmss-bcdma
Would it be OK if I use ti,am64x-dmss-bcdma, or should I stick with
am64-dmss-bcdma?
The TRM refers to the family as AM64x, but having the 'x' in the
compatible did not sound right.
- Péter
On Thu, Oct 01, 2020 at 09:49:43AM +0300, Peter Ujfalusi wrote:
> Hi Rob,
>
> On 30/09/2020 12.14, Peter Ujfalusi wrote:
> ...
>
> > + compatible:
> > + enum:
> > + - ti,am64-dmss-bcdma
>
> Would it be OK if I use ti,am64x-dmss-bcdma, or should I stick with
> am64-dmss-bcdma?
'ti,am654.*' was used pretty consistently, is this family different?
> The TRM refers to the family as AM64x, but having the 'x' in the
> compatible did not sound right.
We generally don't want wildcards, but if the last digit is just pinout
or fusing differences, then it's fine IMO.
Bottom line: just be consistent across all the compatible strings for
this SoC.
Rob
On Wed, Sep 30, 2020 at 12:14:03PM +0300, Peter Ujfalusi wrote:
> New binding document for
> Texas Instruments K3 Block Copy DMA (BCDMA).
>
> BCDMA is introduced as part of AM64.
>
> Signed-off-by: Peter Ujfalusi <[email protected]>
> ---
> .../devicetree/bindings/dma/ti/k3-bcdma.yaml | 183 ++++++++++++++++++
> 1 file changed, 183 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml
>
> diff --git a/Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml b/Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml
> new file mode 100644
> index 000000000000..c84fb641738f
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml
> @@ -0,0 +1,183 @@
> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/dma/ti/k3-bcdma.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: Texas Instruments K3 DMSS BCDMA Device Tree Bindings
> +
> +maintainers:
> + - Peter Ujfalusi <[email protected]>
> +
> +description: |
> + The Block Copy DMA (BCDMA) is intended to perform similar functions as the TR
> + mode channels of K3 UDMA-P.
> + BCDMA includes block copy channels and Split channels.
> +
> + Block copy channels are mainly used for memory to memory transfers, but with
> + optional triggers a block copy channel can service peripherals by directly
> + accessing memory mapped registers or areas.
> +
> + Split channels can be used to service PSI-L based peripherals.
> + The peripherals can be PSI-L native or legacy, non PSI-L native peripherals
> + with PDMAs. PDMA is tasked to act as a bridge between the PSI-L fabric and the
> + legacy peripheral.
> +
> + PDMAs can be configured via BCDMA split channel's peer registers to match with
> + the configuration of the legacy peripheral.
> +
> +allOf:
> + - $ref: /schemas/dma/dma-controller.yaml#
> +
> +properties:
> + "#dma-cells":
> + const: 3
> + description: |
> + cell 1: type of the BCDMA channel to be used to service the peripheral:
> + 0 - split channel
> + 1 - block copy channel using global trigger 1
> + 2 - block copy channel using global trigger 2
> + 3 - block copy channel using local trigger
> +
> + cell 2: parameter for the channel:
> + if cell 1 is 0 (split channel):
> + PSI-L thread ID of the remote (to BCDMA) end.
> + Valid ranges for thread ID depend on the data movement direction:
> + for source thread IDs (rx): 0 - 0x7fff
> + for destination thread IDs (tx): 0x8000 - 0xffff
> +
> + Please refer to the device documentation for the PSI-L thread map and
> + also the PSI-L peripheral chapter for the correct thread ID.
> + if cell 1 is 1 or 2 (block copy channel using global trigger):
> + Unused, ignored
> +
> + The trigger must be configured for the channel externally to BCDMA,
> + channels using global triggers should not be requested directly, but
> + via DMA event router.
> + if cell 1 is 3 (block copy channel using local trigger):
> + bchan number of the locally triggered channel
> +
> + cell 3: ASEL value for the channel
> +
> + compatible:
> + enum:
> + - ti,am64-dmss-bcdma
> +
> + "#address-cells":
> + const: 2
> +
> + "#size-cells":
> + const: 2
> +
> + reg:
> + maxItems: 5
> +
> + reg-names:
> + items:
> + - const: gcfg
> + - const: bchanrt
> + - const: rchanrt
> + - const: tchanrt
> + - const: ringrt
> +
> + msi-parent: true
> +
> + ti,sci:
> + description: phandle to TI-SCI compatible System controller node
> + allOf:
> + - $ref: /schemas/types.yaml#/definitions/phandle
> +
> + ti,sci-dev-id:
> + description: TI-SCI device id of BCDMA
> + allOf:
> + - $ref: /schemas/types.yaml#/definitions/uint32
We have a common definition for these.
> +
> + ti,asel:
> + description: ASEL value for non slave channels
> + allOf:
You no longer need 'allOf' here.
> + - $ref: /schemas/types.yaml#/definitions/uint32
> +
> + ti,sci-rm-range-bchan:
> + description: |
> + Array of BCDMA block-copy channel resource subtypes for resource
> + allocation for this host
> + allOf:
> + - $ref: /schemas/types.yaml#/definitions/uint32-array
> + minItems: 1
> + # Should be enough
> + maxItems: 255
Are there constraints for the individual elements?
> +
> + ti,sci-rm-range-tchan:
> + description: |
> + Array of BCDMA split tx channel resource subtypes for resource allocation
> + for this host
> + allOf:
> + - $ref: /schemas/types.yaml#/definitions/uint32-array
> + minItems: 1
> + # Should be enough
> + maxItems: 255
> +
> + ti,sci-rm-range-rchan:
> + description: |
> + Array of BCDMA split rx channel resource subtypes for resource allocation
> + for this host
> + allOf:
> + - $ref: /schemas/types.yaml#/definitions/uint32-array
> + minItems: 1
> + # Should be enough
> + maxItems: 255
> +
> +required:
> + - compatible
> + - "#address-cells"
> + - "#size-cells"
> + - "#dma-cells"
> + - reg
> + - reg-names
> + - msi-parent
> + - ti,sci
> + - ti,sci-dev-id
> + - ti,sci-rm-range-bchan
> + - ti,sci-rm-range-tchan
> + - ti,sci-rm-range-rchan
> +
> +additionalProperties: false
> +
> +examples:
> + - |+
> + cbass_main {
> + #address-cells = <2>;
> + #size-cells = <2>;
> +
> + main_dmss {
> + compatible = "simple-mfd";
IMO, if it is memory-mapped, then you should be using 'simple-bus'.
> + #address-cells = <2>;
> + #size-cells = <2>;
> + dma-ranges;
> + ranges;
> +
> + ti,sci-dev-id = <25>;
> +
> + main_bcdma: dma-controller@485c0100 {
> + compatible = "ti,am64-dmss-bcdma";
> + #address-cells = <2>;
> + #size-cells = <2>;
> +
> + reg = <0x0 0x485c0100 0x0 0x100>,
> + <0x0 0x4c000000 0x0 0x20000>,
> + <0x0 0x4a820000 0x0 0x20000>,
> + <0x0 0x4aa40000 0x0 0x20000>,
> + <0x0 0x4bc00000 0x0 0x100000>;
> + reg-names = "gcfg", "bchanrt", "rchanrt", "tchanrt", "ringrt";
> + msi-parent = <&inta_main_dmss>;
> + #dma-cells = <3>;
> +
> + ti,sci = <&dmsc>;
> + ti,sci-dev-id = <26>;
> +
> + ti,sci-rm-range-bchan = <0x20>; /* BLOCK_COPY_CHAN */
> + ti,sci-rm-range-rchan = <0x21>; /* SPLIT_TR_RX_CHAN */
> + ti,sci-rm-range-tchan = <0x22>; /* SPLIT_TR_TX_CHAN */
> + };
> + };
> + };
> --
> Peter
Hi Peter,
On 30-09-20, 12:13, Peter Ujfalusi wrote:
> Additional configuration for the DMA event router might be needed for a
> channel; this cannot be done during the device_alloc_chan_resources callback
> since the router information is not yet present for the drivers.
>
> If there is a need for additional configuration for the channel if DMA
> router is in use, then the driver can implement the device_router_config
> callback.
So what is the additional information you need? I am looking at the code
below, and xlate invokes device_router_config(), which the driver will
implement..
Are you using this to configure channels based on info from DT?
>
> Signed-off-by: Peter Ujfalusi <[email protected]>
> ---
> drivers/dma/of-dma.c | 10 ++++++++++
> include/linux/dmaengine.h | 2 ++
> 2 files changed, 12 insertions(+)
>
> diff --git a/drivers/dma/of-dma.c b/drivers/dma/of-dma.c
> index 8a4f608904b9..ec00b20ae8e4 100644
> --- a/drivers/dma/of-dma.c
> +++ b/drivers/dma/of-dma.c
> @@ -75,8 +75,18 @@ static struct dma_chan *of_dma_router_xlate(struct of_phandle_args *dma_spec,
> ofdma->dma_router->route_free(ofdma->dma_router->dev,
> route_data);
> } else {
> + int ret = 0;
> +
> chan->router = ofdma->dma_router;
> chan->route_data = route_data;
> +
> + if (chan->device->device_router_config)
> + ret = chan->device->device_router_config(chan);
> +
> + if (ret) {
> + dma_release_channel(chan);
> + chan = ERR_PTR(ret);
> + }
> }
>
> /*
> diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
> index dd357a747780..d6197fe875af 100644
> --- a/include/linux/dmaengine.h
> +++ b/include/linux/dmaengine.h
> @@ -800,6 +800,7 @@ struct dma_filter {
> * by tx_status
> * @device_alloc_chan_resources: allocate resources and return the
> * number of allocated descriptors
> + * @device_router_config: optional callback for DMA router configuration
> * @device_free_chan_resources: release DMA channel's resources
> * @device_prep_dma_memcpy: prepares a memcpy operation
> * @device_prep_dma_xor: prepares a xor operation
> @@ -874,6 +875,7 @@ struct dma_device {
> enum dma_residue_granularity residue_granularity;
>
> int (*device_alloc_chan_resources)(struct dma_chan *chan);
> + int (*device_router_config)(struct dma_chan *chan);
> void (*device_free_chan_resources)(struct dma_chan *chan);
>
> struct dma_async_tx_descriptor *(*device_prep_dma_memcpy)(
> --
> Peter
--
~Vinod
On 30-09-20, 12:14, Peter Ujfalusi wrote:
> Glue layer users should use the device of the DMA for DMA mapping and
> allocations, as it is the DMA which accesses the descriptors and buffers,
> not the clients.
>
> Signed-off-by: Peter Ujfalusi <[email protected]>
> ---
> drivers/dma/ti/k3-udma-glue.c | 14 ++++++++++++++
> drivers/dma/ti/k3-udma-private.c | 6 ++++++
> drivers/dma/ti/k3-udma.h | 1 +
> include/linux/dma/k3-udma-glue.h | 4 ++++
> 4 files changed, 25 insertions(+)
>
> diff --git a/drivers/dma/ti/k3-udma-glue.c b/drivers/dma/ti/k3-udma-glue.c
> index a367584f0d7b..a53bc4707ae8 100644
> --- a/drivers/dma/ti/k3-udma-glue.c
> +++ b/drivers/dma/ti/k3-udma-glue.c
> @@ -487,6 +487,13 @@ int k3_udma_glue_tx_get_irq(struct k3_udma_glue_tx_channel *tx_chn)
> }
> EXPORT_SYMBOL_GPL(k3_udma_glue_tx_get_irq);
>
> +struct device *
> + k3_udma_glue_tx_get_dma_device(struct k3_udma_glue_tx_channel *tx_chn)
How about..
struct device *
k3_udma_glue_tx_get_dma_device(struct k3_udma_glue_tx_channel *tx_chn)
> +{
> + return xudma_get_device(tx_chn->common.udmax);
> +}
> +EXPORT_SYMBOL_GPL(k3_udma_glue_tx_get_dma_device);
Hmm why would you need to export this device.. Can you please outline
all the devices involved here... why not use dmaI_dev->dev or chan->dev?
--
~Vinod
On 06/10/2020 22.29, Rob Herring wrote:
> On Wed, Sep 30, 2020 at 12:14:03PM +0300, Peter Ujfalusi wrote:
>> New binding document for
>> Texas Instruments K3 Block Copy DMA (BCDMA).
>>
>> BCDMA is introduced as part of AM64.
>>
...
>
>> + ti,sci:
>> + description: phandle to TI-SCI compatible System controller node
>> + allOf:
>> + - $ref: /schemas/types.yaml#/definitions/phandle
>> +
>> + ti,sci-dev-id:
>> + description: TI-SCI device id of BCDMA
>> + allOf:
>> + - $ref: /schemas/types.yaml#/definitions/uint32
>
> We have a common definition for these.
Yes, in arm/keystone/ti,k3-sci-common.yaml, but I could not get it to work
as a reference.
I cannot list it under the topmost allOf and drop the ti,sci and
ti,sci-dev-id like this:
allOf:
- $ref: /schemas/dma/dma-controller.yaml#
- $ref: /schemas/arm/keystone/ti,k3-sci-common.yaml#
It results:
CHKDT Documentation/devicetree/bindings/processed-schema-examples.json
DTEX Documentation/devicetree/bindings/dma/ti/k3-bcdma.example.dts
SCHEMA Documentation/devicetree/bindings/processed-schema-examples.json
DTC Documentation/devicetree/bindings/dma/ti/k3-bcdma.example.dt.yaml
CHECK Documentation/devicetree/bindings/dma/ti/k3-bcdma.example.dt.yaml
Documentation/devicetree/bindings/dma/ti/k3-bcdma.example.dt.yaml:
dma-controller@485c0100: 'ti,sci', 'ti,sci-dev-id' do not match any of
the regexes: 'pinctrl-[0-9]+'
From schema: Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml
If I remove the "additionalProperties: false" from the schema file, then
it compiles fine.
>
>> +
>> + ti,asel:
>> + description: ASEL value for non slave channels
>> + allOf:
>
> You no longer need 'allOf' here.
OK, I changed it in all instances.
>
>> + - $ref: /schemas/types.yaml#/definitions/uint32
>> +
>> + ti,sci-rm-range-bchan:
>> + description: |
>> + Array of BCDMA block-copy channel resource subtypes for resource
>> + allocation for this host
>> + allOf:
>> + - $ref: /schemas/types.yaml#/definitions/uint32-array
>> + minItems: 1
>> + # Should be enough
>> + maxItems: 255
>
> Are there constraints for the individual elements?
In practice the subtype ID is a 6-bit number.
Should I add limits to individual elements?
>> ...
>> + main_dmss {
>> + compatible = "simple-mfd";
>
> IMO, if it is memory-mapped, then you should be using 'simple-bus'.
We had the same discussion when I introduced the k3-udma binding, and we
concluded on simple-mfd, as DMSS is not a bus but contains different
peripherals.
- Péter
On 07/10/2020 9.53, Vinod Koul wrote:
> On 30-09-20, 12:14, Peter Ujfalusi wrote:
>> Glue layer users should use the device of the DMA for DMA mapping and
>> allocations, as it is the DMA which accesses the descriptors and buffers,
>> not the clients.
>> ...
>> +struct device *
>> + k3_udma_glue_tx_get_dma_device(struct k3_udma_glue_tx_channel *tx_chn)
>
> How about..
>
> struct device *
> k3_udma_glue_tx_get_dma_device(struct k3_udma_glue_tx_channel *tx_chn)
OK.
>
>> +{
>> + return xudma_get_device(tx_chn->common.udmax);
>> +}
>> +EXPORT_SYMBOL_GPL(k3_udma_glue_tx_get_dma_device);
>
> Hmm why would you need to export this device.. Can you please outline
> all the devices involved here...
In upstream we have one user of the udma-glue layer:
drivers/net/ethernet/ti/am65-cpsw-nuss.c
It is allocating memory to be used with DMA (descriptor pool); it needs
to use the correct device for the DMA API.
The cpsw is currently using its own dev for the allocation, which is wrong,
but it worked fine as am654/j721e/j7200 are all coherent.
> why not use dmaI_dev->dev or chan->dev?
The glue layer does not use the DMAengine API to request a channel, as it
requires special resource setup compared to what is possible via the generic
API. We have kept the DMAengine and glue layers separate until I have time
to extend the core to support the features we would need in order to remove
the glue layer.
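To illustrate (a minimal sketch, not taken from the cpsw patches; tx_chn,
pool_size and the surrounding driver context are assumed):

    struct device *dma_dev = k3_udma_glue_tx_get_dma_device(tx_chn);
    void *pool;
    dma_addr_t pool_dma;

    /* allocate the descriptor pool against the DMA's device, not the client's */
    pool = dma_alloc_coherent(dma_dev, pool_size, &pool_dma, GFP_KERNEL);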
- Péter
On 07-10-20, 11:08, Peter Ujfalusi wrote:
> Not really. In DT an event triggered channel can be requested via a router
> (when one is used), for example:
>
> dmas = <&inta_l2g a b c>;
> a - the input number of the DMA request in l2g
> b - edge or level trigger to be selected
> c - ASEL number for the channel for coherency
>
> The l2g router driver then translates this to:
> <&main_bcdma 1 0 c>
> 1 - Global trigger 0 is used by the DMA
> 0 - ignored
> c - ASEL number.
>
> The router needs to send an event which is going to be received by the
> channel we have picked up; this event number can only be known once we
> have the channel.
>
> So the flow in this case:
> router converts the dma_spec for the DMA, but it does not yet know what
> event number it has to use.
> The BCDMA driver picks an available bchan and notes that the
> transfers will be triggered by global event 0.
> When we have the channel, the core saves the router information and
> calls the device_router_config of BCDMA.
> In there we call back to the router and give the event number it has to
> use to send the trigger for the channel.
Ah, that is interesting, so you would call the router driver's foo_set_event()
and would send the event number. Why not call that API from alloc
channel or even xlate? Why do you need a new callback?
Or did I miss something..
--
~Vinod
On Wed, Oct 07, 2020 at 12:09:06PM +0300, Peter Ujfalusi wrote:
>
>
> On 06/10/2020 22.29, Rob Herring wrote:
> ...
> >
> > We have a common definition for these.
>
> Yes, in arm/keystone/ti,k3-sci-common.yaml, but I could not get it to work
> as a reference.
>
> I cannot list it under the topmost allOf and drop the ti,sci and
> ti,sci-dev-id like this:
>
> allOf:
> - $ref: /schemas/dma/dma-controller.yaml#
> - $ref: /schemas/arm/keystone/ti,k3-sci-common.yaml#
>
> It results:
> CHKDT Documentation/devicetree/bindings/processed-schema-examples.json
> DTEX Documentation/devicetree/bindings/dma/ti/k3-bcdma.example.dts
> SCHEMA Documentation/devicetree/bindings/processed-schema-examples.json
> DTC Documentation/devicetree/bindings/dma/ti/k3-bcdma.example.dt.yaml
> CHECK Documentation/devicetree/bindings/dma/ti/k3-bcdma.example.dt.yaml
> Documentation/devicetree/bindings/dma/ti/k3-bcdma.example.dt.yaml:
> dma-controller@485c0100: 'ti,sci', 'ti,sci-dev-id' do not match any of
> the regexes: 'pinctrl-[0-9]+'
> From schema: Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml
>
> If I remove the "additionalProperties: false" from the schema file, then
> it compiles fine.
Yeah, you have to do 'unevaluatedProperties: false' which doesn't
actually do anything yet, but can 'see' into $ref's.
> >> + ti,asel:
> >> + description: ASEL value for non slave channels
> >> + allOf:
> >
> > You no longer need 'allOf' here.
>
> OK, I changed it in all instances.
>
> >
> >> + - $ref: /schemas/types.yaml#/definitions/uint32
> >> +
> >> + ti,sci-rm-range-bchan:
> >> + description: |
> >> + Array of BCDMA block-copy channel resource subtypes for resource
> >> + allocation for this host
> >> + allOf:
> >> + - $ref: /schemas/types.yaml#/definitions/uint32-array
> >> + minItems: 1
> >> + # Should be enough
> >> + maxItems: 255
> >
> > Are there constraints for the individual elements?
>
> In practice the subtype ID is 6bits number.
> Should I add limits to individual elements?
Yes:
items:
maximum: 0x3f
> ...
> >> + main_dmss {
> >> + compatible = "simple-mfd";
> >
> > IMO, if it is memory-mapped, then you should be using 'simple-bus'.
>
> >> We had the same discussion when I introduced the k3-udma binding, and we
> >> concluded on simple-mfd, as DMSS is not a bus but contains different
> >> peripherals.
Ok.
Rob
On 07/10/2020 18.55, Vinod Koul wrote:
> On 07-10-20, 11:08, Peter Ujfalusi wrote:
>
>> ...
>> When we have the channel, the core saves the router information and
>> calls the device_router_config of BCDMA.
>> In there we call back to the router and give the event number it has to
>> use to send the trigger for the channel.
>
> Ah, that is interesting, so you would call the router driver's foo_set_event()
> and would send the event number
Yes, that's correct.
> Why not call that API from alloc
> channel or even xlate?
At alloc/xlate time the DMA driver does not have information about the
router. The alloc/xlate will result in a channel, but in my case it will
result in a broken setup, as the router does not know which event to send.
> Why do you need a new callback?
When I added the DMA event router support, it was designed in a way that
the DMA driver itself must not know anything about the router; it has to
be transparent. One can just add a router in front of any DMA and
everything will work.
This is the right thing to do, and it works for existing setups.
> Or did I miss something..
The BCDMA triggered channel setup is a chicken-and-egg situation.
In this case the channel can be triggered by a global event. A channel
can receive two global events, but this is not a concern at the moment.
The event number depends on the channel we use; for simplicity let's
say: bchan_id + trigger_offset = bchan_trigger_evt.
of_dma_router_xlate does this:
1. calls the dma router's of_dma_route_allocate callback to allocate a
route and craft a dma_spec for the DMA to configure a channel.
2. using this crafted dma_spec we request a channel via the of_dma_xlate
callback.
3. if we got the channel, we save the router information, so it can be
deallocated when the channel is disabled.
I need a fourth step to do a final configuration, since only at this time
(after it has been allocated) does the channel have information about a
possible router.
In the new optional callback the DMA driver can figure out the event
number which must be used by the router to send the event to the desired
global event target of the channel.
Other DMAs might need something different, but imho if there is going to
be a need for such post-alloc router config, then it will most likely
come from the need to feed back some sort of channel information to the
router, or to take a parameter from the router itself for the channel.
To summarize:
In of_dma_route_allocate() the router does not yet know the channel we
are going to get.
In of_dma_xlate() the DMA driver does not yet know if the channel will
use router or not.
I need to tell the router the event number it has to send, which is
based on the channel number I got.
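To make the flow concrete, a rough sketch of what the BCDMA side could look
like (illustrative only: bchan_trigger_evt() stands in for the
bchan_id + trigger_offset arithmetic above, and the route_data shape is
assumed from the k3 event router header added by this series):

    static int bcdma_router_config(struct dma_chan *chan)
    {
        struct k3_event_route_data *router_data = chan->route_data;
        struct udma_chan *uc = to_udma_chan(chan);
        u32 trigger_event;

        /* only trigger-driven bchans involve the event router */
        if (!uc->bchan || !uc->config.tr_trigger_type)
            return -EINVAL;

        /* the event to send follows from the bchan we were allocated */
        trigger_event = bchan_trigger_evt(uc->bchan->id,
                                          uc->config.tr_trigger_type);

        /* feed it back so the router can program the HW route */
        return router_data->set_event(router_data->priv, trigger_event);
    }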
- Péter
On 07/10/2020 8.44, Vinod Koul wrote:
> Hi Peter,
>
> On 30-09-20, 12:13, Peter Ujfalusi wrote:
>> Additional configuration for the DMA event router might be needed for a
>> channel; this cannot be done during the device_alloc_chan_resources callback
>> since the router information is not yet present for the drivers.
>>
>> If there is a need for additional configuration for the channel if DMA
>> router is in use, then the driver can implement the device_router_config
>> callback.
>
> So what is the additional information you need? I am looking at the code
> below, and xlate invokes device_router_config(), which the driver will
> implement..
The router driver is not yet ready due to external dependencies; it will
come a bit later.
> Are you using this to configure channels based on info from DT?
Not really. In DT an event triggered channel can be requested via a router
(when one is used), for example:
dmas = <&inta_l2g a b c>;
a - the input number of the DMA request in l2g
b - edge or level trigger to be selected
c - ASEL number for the channel for coherency
The l2g router driver then translates this to:
<&main_bcdma 1 0 c>
1 - Global trigger 0 is used by the DMA
0 - ignored
c - ASEL number.
The router needs to send an event which is going to be received by the
channel we have picked up; this event number can only be known once we
have the channel.
So the flow in this case:
router converts the dma_spec for the DMA, but it does not yet know what
event number it has to use.
The BCDMA driver picks an available bchan and notes that the
transfers will be triggered by global event 0.
When we have the channel, the core saves the router information and
calls the device_router_config of BCDMA.
In there we call back to the router and give the event number it has to
use to send the trigger for the channel.
- Péter
On 07/10/2020 18.46, Rob Herring wrote:
> ...
>> If I remove the "additionalProperties: false" from the schema file, then
>> it compiles fine.
>
> Yeah, you have to do 'unevaluatedProperties: false' which doesn't
> actually do anything yet, but can 'see' into $ref's.
I see, but even if I add the unevaluatedProperties: false I will have
the same error as long as I have additionalProperties: false
If I remove the additionalProperties then it makes no difference if I
have the unevaluatedProperties: false or I don't.
> ...
>>>> + ti,sci-rm-range-bchan:
>>>> + description: |
>>>> + Array of BCDMA block-copy channel resource subtypes for resource
>>>> + allocation for this host
>>>> + allOf:
>>>> + - $ref: /schemas/types.yaml#/definitions/uint32-array
>>>> + minItems: 1
>>>> + # Should be enough
>>>> + maxItems: 255
>>>
>>> Are there constraints for the individual elements?
>>
>> In practice the subtype ID is a 6-bit number.
>> Should I add limits to individual elements?
>
> Yes:
>
> items:
> maximum: 0x3f
Right, I can just omit the minimum.
It would be nice if I could use definitions for these ranges to avoid
duplicated lines by adding
definitions:
ti,rm-range:
$ref: /schemas/types.yaml#/definitions/uint32-array
minItems: 1
# Should be enough
maxItems: 255
items:
minimum: 0
maximum: 0x3f
to schemas/arm/keystone/ti,k3-sci-common.yaml
and only have:
ti,sci-rm-range-bchan:
$ref:
/schemas/arm/keystone/ti,k3-sci-common.yaml#/definitions/ti,rm-range
description: |
Array of BCDMA block-copy channel resource subtypes for resource
allocation for this host
but it results:
Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml:
properties:ti,sci-rm-range-bchan: {'$ref':
'/schemas/arm/keystone/ti,k3-sci-common.yaml#/definitions/ti,rm-range',
'description': 'Array of BCDMA block-copy channel resource subtypes for
resource\nallocation for this host\n'} is not valid under any of the
given schemas (Possible causes of the failure):
Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml:
properties:ti,sci-rm-range-bchan: 'not' is a required property
Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml:
properties:ti,sci-rm-range-bchan:$ref:
'/schemas/arm/keystone/ti,k3-sci-common.yaml#/definitions/ti,rm-range'
does not match 'types.yaml#[/]{0,1}definitions/.*'
SCHEMA Documentation/devicetree/bindings/processed-schema-examples.json
Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml: ignoring, error
in schema: properties: ti,sci-rm-range-bchan
warning: no schema found in file:
Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml
So, obviously I'm looking at it from the wrong angle. It is not urgent; I
can spend time to figure it out later and switch over all cases where the RM
ranges are used.
- Péter
On Thu, Oct 8, 2020 at 3:40 AM Peter Ujfalusi <[email protected]> wrote:
> ...
> >
> > Yeah, you have to do 'unevaluatedProperties: false' which doesn't
> > actually do anything yet, but can 'see' into $ref's.
>
> I see, but even if I add the unevaluatedProperties: false I will have
> the same error as long as I have additionalProperties: false
Yes. I meant unevaluatedProperties instead of additionalProperties.
> If I remove the additionalProperties then it makes no difference if I
> have the unevaluatedProperties: false or I don't.
Not yet, but it will soon. Once I have the tree in a consistent state
in 5.10-rc1, there will be a meta-schema to check all this (which is
one of those that must always be present).
Though, as of now 'unevaluatedProperties' doesn't do anything because
the underlying json-schema tool doesn't yet support it.
> ...
> > Yes:
> >
> > items:
> > maximum: 0x3f
>
> Right, I can just omit the minimum.
>
> It would be nice if I could use definitions for these ranges to avoid
> duplicated lines by adding
>
> definitions:
> ti,rm-range:
> $ref: /schemas/types.yaml#/definitions/uint32-array
> minItems: 1
> # Should be enough
> maxItems: 255
> items:
> minimum: 0
> maximum: 0x3f
>
> to schemas/arm/keystone/ti,k3-sci-common.yaml
>
> and only have:
>
> ti,sci-rm-range-bchan:
> $ref:
> /schemas/arm/keystone/ti,k3-sci-common.yaml#/definitions/ti,rm-range
> description: |
> Array of BCDMA block-copy channel resource subtypes for resource
> allocation for this host
Just do:
patternProperties:
"^ti,sci-rm-range-[btr]chan$":
...
If this is common for other bindings, then you can put it in
ti,k3-sci-common.yaml.
> but it results:
> Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml:
> properties:ti,sci-rm-range-bchan: {'$ref':
> '/schemas/arm/keystone/ti,k3-sci-common.yaml#/definitions/ti,rm-range',
> 'description': 'Array of BCDMA block-copy channel resource subtypes for
> resource\nallocation for this host\n'} is not valid under any of the
> given schemas (Possible causes of the failure):
> Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml:
> properties:ti,sci-rm-range-bchan: 'not' is a required property
> Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml:
> properties:ti,sci-rm-range-bchan:$ref:
> '/schemas/arm/keystone/ti,k3-sci-common.yaml#/definitions/ti,rm-range'
> does not match 'types.yaml#[/]{0,1}definitions/.*'
We probably should allow for using 'definitions', which is pretty
common json-schema practice, but we don't, primarily in order to keep
folks within the lines. Things are optimized for not knowing
json-schema and minimizing the errors I have to check for.
Supporting it would complicate the meta-schema and the tools' fixup
code. So far, the need for it has been pretty infrequent.
Rob
On 08/10/2020 22.15, Rob Herring wrote:
> On Thu, Oct 8, 2020 at 3:40 AM Peter Ujfalusi <[email protected]> wrote:
>>> Yeah, you have to do 'unevaluatedProperties: false' which doesn't
>>> actually do anything yet, but can 'see' into $ref's.
>>
>> I see, but even if I add the unevaluatedProperties: false I will have
>> the same error as long as I have additionalProperties: false
>
> Yes. I meant unevaluatedProperties instead of additionalProperties.
OK, changed it to unevaluatedProperties.
>> If I remove the additionalProperties then it makes no difference if I
>> have the unevaluatedProperties: false or I don't.
>
> Not yet, but it will soon. Once I have the tree in a consistent state
> in 5.10-rc1, there will be a meta-schema to check all this (which is
> one of those that must always be present).
>
> Though, as of now 'unevaluatedProperties' doesn't do anything because
> the underlying json-schema tool doesn't yet support it.
Understand, thanks for the details.
> ...
> Just do:
>
> patternProperties:
> "^ti,sci-rm-range-[btr]chan$":
> ...
>
> If this is common for other bindings, then you can put it in
> ti,k3-sci-common.yaml.
A similar property (for RM ranges) is also used by the ringacc; I have tried
to standardize on ti,sci-rm-range-* in DT.
I will leave it as it is now for this series, and we can simplify it
later with a wider series touching all involved yaml files.
>> ...
>
> We probably should allow for using 'definitions', which is pretty
> common json-schema practice, but we don't, primarily in order to keep
> folks within the lines. Things are optimized for not knowing
> json-schema and minimizing the errors I have to check for.
I agree on these.
> Supporting it would complicate the meta-schema and the tools' fixup
> code. So far, the need for it has been pretty infrequent.
Sure, for the couple of duplications I have it is manageable without
sacrificing readability.
btw: I have made similar changes to the k3-pktdma schema.
>
> Rob
>
- Péter
On 08-10-20, 09:41, Peter Ujfalusi wrote:
> ...
> To summarize:
> In of_dma_route_allocate() the router does not yet know the channel we
> are going to get.
> In of_dma_xlate() the DMA driver does not yet know if the channel will
> use router or not.
> I need to tell the router the event number it has to send, which is
> based on the channel number I got.
Sounds reasonable. Btw, why not pass this information in xlate? The router
will have a different xlate than the non-router one, right, or is it the same?
If this information is available in DT anyway, it might be better to get it
from DT and use it.
Thanks
--
~Vinod
Hi Vinod,
On 28/10/2020 7.55, Vinod Koul wrote:
>> To summarize:
>> In of_dma_route_allocate() the router does not yet know the channel we
>> are going to get.
>> In of_dma_xlate() the DMA driver does not yet know if the channel will
>> use router or not.
>> I need to tell the router the event number it has to send, which is
>> based on the channel number I got.
>
> Sounds reasonable. Btw, why not pass this information in xlate? The router
> will have a different xlate than the non-router one, right, or is it the same?
Yes, the routers have their own xlate, but in that xlate we do not
yet have a channel. I don't know what event I need to send from
the router to trigger the channel.
> If this information is available in DT anyway, it might be better to get it
> from DT and use it.
Without a channel number I cannot do anything.
It is close to a chicken-and-egg problem.
- Péter
Hey Peter,
On 28-10-20, 11:56, Peter Ujfalusi wrote:
> ...
> Without a channel number I cannot do anything.
> It is close to a chicken-and-egg problem.
We get 'channel' in xlate, so won't that help? I think I am still missing
something here :(
--
~Vinod
Hi Vinod,
On 09/11/2020 13.45, Vinod Koul wrote:
>> Without a channel number I cannot do anything.
>> It is close to a chicken-and-egg problem.
>
> We get 'channel' in xlate, so won't that help? I think I am still missing
> something here :(
Yes, we get channel in xlate, but we get the channel after
ofdma->of_dma_route_allocate()
of_dma_route_allocate() is the place where DMA routers create the
dmaspec for the DMA controller to get a channel, and up until BCDMA
they also did the HW configuration to get the event routed.
For a BCDMA channel we can have three triggers:
Global trigger 0 for the channel
Global trigger 1 for the channel
Local trigger for the channel
Every BCDMA channel has these triggers, and for all channels they are the
same (from the channel's point of view).
bchan0 can be triggered by global trigger 0
bchan1 can be triggered by global trigger 0
But these triggers are not the same ones; the real trigger depends on
the router and which of its inputs is converted to send out an event to
trigger bchan0_trigger0 or bchan1_trigger0.
When we have got the channel with the dmaspec from the router driver, we
need to tell the router driver that it needs to send a given event in
order to trigger the channel that we got.
We cannot have a traditional binding for BCDMA either, where we would
specify the bchan index to be used, because depending on the resource
allocation done within sysfw that exact channel might not even be
available to us.
- Péter
Hi Peter,
On 09-11-20, 14:09, Peter Ujfalusi wrote:
> ...
> Yes, we get channel in xlate, but we get the channel after
> ofdma->of_dma_route_allocate()
That is correct, so you need this info in allocate somehow..
--
~Vinod
On 09/11/2020 14.23, Vinod Koul wrote:
> ...
> That is correct, so you need this info in allocate somehow..
To know the event number the router must send to trigger the channel, I
need the router to 'craft' the dmaspec which can be used to request the
channel.
To request a bcdma channel to be triggered by global trigger 0:
[A]
<&main_bcdma 1 0 15>
main_bcdma - phandle to BCDMA
1 - triggered by global trigger0
0 - ignored
15 - ASEL value
A peripheral cannot really use this binding directly, as we need to
configure the event to be sent to the given channel's trigger0.
The binding for the router (l2g of INTA in this case):
[B]
<&inta_l2g 21 0 15>
inta_l2g - phandle to the router
21 - local event index (input event/signal)
0 - event detection mode (pulsed/rising)
15 - ASEL value
The of_dma_router_xlate() receives the dmaspec for [B], and the router
driver creates the dmaspec for [A].
The xlate cannot request the channel first, as it needs the dmaspec from
the router to do so.
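For completeness, a rough sketch of the router side of this flow (the l2g
driver is not part of this series, so struct l2g_priv, struct l2g_route_data
and the field names are all illustrative):

    static void *l2g_route_allocate(struct of_phandle_args *dma_spec,
                                    struct of_dma *ofdma)
    {
        struct platform_device *pdev = of_find_device_by_node(ofdma->of_node);
        struct l2g_priv *l2g = platform_get_drvdata(pdev);
        struct l2g_route_data *rd;

        rd = kzalloc(sizeof(*rd), GFP_KERNEL);
        if (!rd)
            return ERR_PTR(-ENOMEM);

        /* remember [B]: which input to route and how to detect it */
        rd->input = dma_spec->args[0];  /* e.g. 21 */
        rd->mode = dma_spec->args[1];   /* e.g. 0 */

        /* craft [A] for BCDMA: global trigger 0, ASEL passed through */
        dma_spec->np = l2g->dma_node;   /* &main_bcdma */
        dma_spec->args[0] = 1;          /* global trigger 0 */
        dma_spec->args[1] = 0;          /* ignored */
        /* args[2] (ASEL) is left as-is */

        /*
         * The event to send is not known here; it is programmed later,
         * when BCDMA's device_router_config() calls back with the event
         * number derived from the bchan it was allocated.
         */
        return rd;
    }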
- Péter