2015-11-23 02:28:48

by Sinan Kaya

Subject: [PATCH V7 0/3] dma: add Qualcomm Technologies HIDMA driver

The Qualcomm Technologies HIDMA device has been designed
to support virtualization technology. The driver has been
divided into two to follow the hardware design.

1. HIDMA Management driver
2. HIDMA Channel driver

Each HIDMA HW instance consists of multiple channels. These
channels share a set of common parameters, which are
initialized by the management driver during power up. The
same management driver is used for monitoring the execution
of the channels. In the future, the management driver will be
able to change performance behavior dynamically, such as
bandwidth allocation and prioritization.

The management driver is executed in hypervisor context and
is the main management entity for all channels provided by
the device.

Changes from V6: (https://lkml.org/lkml/2015/11/22/111)
* none

Changes from V6: (https://lkml.org/lkml/2015/11/22/112)
* remove extra _ in documentation
* remove extra parenthesis in assignments
* create subdirectories for channels in sysfs

Changes from V6: (https://lkml.org/lkml/2015/11/22/113)
* remove extra parenthesis

Sinan Kaya (3):
dma: qcom_bam_dma: move to qcom directory
dma: add Qualcomm Technologies HIDMA management driver
dma: add Qualcomm Technologies HIDMA channel driver

.../ABI/testing/sysfs-platform-hidma-mgmt | 97 +++
.../devicetree/bindings/dma/qcom_hidma.txt | 18 +
.../devicetree/bindings/dma/qcom_hidma_mgmt.txt | 61 ++
drivers/dma/Kconfig | 11 +-
drivers/dma/Makefile | 2 +-
drivers/dma/qcom/Kconfig | 29 +
drivers/dma/qcom/Makefile | 4 +
drivers/dma/{qcom_bam_dma.c => qcom/bam_dma.c} | 4 +-
drivers/dma/qcom/hidma.c | 706 ++++++++++++++++
drivers/dma/qcom/hidma.h | 157 ++++
drivers/dma/qcom/hidma_dbg.c | 217 +++++
drivers/dma/qcom/hidma_ll.c | 924 +++++++++++++++++++++
drivers/dma/qcom/hidma_mgmt.c | 308 +++++++
drivers/dma/qcom/hidma_mgmt.h | 39 +
drivers/dma/qcom/hidma_mgmt_sys.c | 299 +++++++
15 files changed, 2864 insertions(+), 12 deletions(-)
create mode 100644 Documentation/ABI/testing/sysfs-platform-hidma-mgmt
create mode 100644 Documentation/devicetree/bindings/dma/qcom_hidma.txt
create mode 100644 Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt
create mode 100644 drivers/dma/qcom/Kconfig
create mode 100644 drivers/dma/qcom/Makefile
rename drivers/dma/{qcom_bam_dma.c => qcom/bam_dma.c} (99%)
create mode 100644 drivers/dma/qcom/hidma.c
create mode 100644 drivers/dma/qcom/hidma.h
create mode 100644 drivers/dma/qcom/hidma_dbg.c
create mode 100644 drivers/dma/qcom/hidma_ll.c
create mode 100644 drivers/dma/qcom/hidma_mgmt.c
create mode 100644 drivers/dma/qcom/hidma_mgmt.h
create mode 100644 drivers/dma/qcom/hidma_mgmt_sys.c

--
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project


2015-11-23 02:28:52

by Sinan Kaya

Subject: [PATCH V7 1/3] dma: qcom_bam_dma: move to qcom directory

Creating a QCOM directory for all QCOM DMA source files.

Signed-off-by: Sinan Kaya <[email protected]>
---
drivers/dma/Kconfig | 11 ++---------
drivers/dma/Makefile | 2 +-
drivers/dma/qcom/Kconfig | 8 ++++++++
drivers/dma/qcom/Makefile | 1 +
drivers/dma/{qcom_bam_dma.c => qcom/bam_dma.c} | 4 ++--
5 files changed, 14 insertions(+), 12 deletions(-)
create mode 100644 drivers/dma/qcom/Kconfig
create mode 100644 drivers/dma/qcom/Makefile
rename drivers/dma/{qcom_bam_dma.c => qcom/bam_dma.c} (99%)

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index b458475..47b1b98 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -408,15 +408,6 @@ config PXA_DMA
16 to 32 channels for peripheral to memory or memory to memory
transfers.

-config QCOM_BAM_DMA
- tristate "QCOM BAM DMA support"
- depends on ARCH_QCOM || (COMPILE_TEST && OF && ARM)
- select DMA_ENGINE
- select DMA_VIRTUAL_CHANNELS
- ---help---
- Enable support for the QCOM BAM DMA controller. This controller
- provides DMA capabilities for a variety of on-chip devices.
-
config SIRF_DMA
tristate "CSR SiRFprimaII/SiRFmarco DMA support"
depends on ARCH_SIRF
@@ -527,6 +518,8 @@ config ZX_DMA
# driver files
source "drivers/dma/bestcomm/Kconfig"

+source "drivers/dma/qcom/Kconfig"
+
source "drivers/dma/dw/Kconfig"

source "drivers/dma/hsu/Kconfig"
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index 7711a71..8dba90d 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -52,7 +52,6 @@ obj-$(CONFIG_PCH_DMA) += pch_dma.o
obj-$(CONFIG_PL330_DMA) += pl330.o
obj-$(CONFIG_PPC_BESTCOMM) += bestcomm/
obj-$(CONFIG_PXA_DMA) += pxa_dma.o
-obj-$(CONFIG_QCOM_BAM_DMA) += qcom_bam_dma.o
obj-$(CONFIG_RENESAS_DMA) += sh/
obj-$(CONFIG_SIRF_DMA) += sirf-dma.o
obj-$(CONFIG_STE_DMA40) += ste_dma40.o ste_dma40_ll.o
@@ -66,4 +65,5 @@ obj-$(CONFIG_TI_EDMA) += edma.o
obj-$(CONFIG_XGENE_DMA) += xgene-dma.o
obj-$(CONFIG_ZX_DMA) += zx296702_dma.o

+obj-y += qcom/
obj-y += xilinx/
diff --git a/drivers/dma/qcom/Kconfig b/drivers/dma/qcom/Kconfig
new file mode 100644
index 0000000..f17c272
--- /dev/null
+++ b/drivers/dma/qcom/Kconfig
@@ -0,0 +1,8 @@
+config QCOM_BAM_DMA
+ tristate "QCOM BAM DMA support"
+ depends on ARCH_QCOM || (COMPILE_TEST && OF && ARM)
+ select DMA_ENGINE
+ select DMA_VIRTUAL_CHANNELS
+ ---help---
+ Enable support for the QCOM BAM DMA controller. This controller
+ provides DMA capabilities for a variety of on-chip devices.
diff --git a/drivers/dma/qcom/Makefile b/drivers/dma/qcom/Makefile
new file mode 100644
index 0000000..f612ae3
--- /dev/null
+++ b/drivers/dma/qcom/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_QCOM_BAM_DMA) += bam_dma.o
diff --git a/drivers/dma/qcom_bam_dma.c b/drivers/dma/qcom/bam_dma.c
similarity index 99%
rename from drivers/dma/qcom_bam_dma.c
rename to drivers/dma/qcom/bam_dma.c
index 5a250cd..b6f053d 100644
--- a/drivers/dma/qcom_bam_dma.c
+++ b/drivers/dma/qcom/bam_dma.c
@@ -49,8 +49,8 @@
#include <linux/clk.h>
#include <linux/dmaengine.h>

-#include "dmaengine.h"
-#include "virt-dma.h"
+#include "../dmaengine.h"
+#include "../virt-dma.h"

struct bam_desc_hw {
u32 addr; /* Buffer physical address */
--
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project

2015-11-23 02:28:56

by Sinan Kaya

Subject: [PATCH V7 2/3] dma: add Qualcomm Technologies HIDMA management driver

The Qualcomm Technologies HIDMA device has been designed
to support virtualization technology. The driver has been
divided into two to follow the hardware design.

1. HIDMA Management driver
2. HIDMA Channel driver

Each HIDMA HW instance consists of multiple channels. These
channels share a set of common parameters, which are
initialized by the management driver during power up. The
same management driver is used for monitoring the execution
of the channels. The management driver can change performance
behavior dynamically, such as bandwidth allocation and
prioritization.

The management driver is executed in hypervisor context and
is the main management entity for all channels provided by
the device.
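
For illustration, the fragment below is condensed from hidma_mgmt_setup()
in this patch (local names shortened for brevity; the offsets and bit
positions are the ones defined there). It shows how each channel's
priority and round robin weight end up in that channel's QoS word:

	val = readl(virtaddr + QOS_N_OFFSET + (4 * i));
	val &= ~(1 << PRIORITY_BIT_POS);		/* bit 15: 0 = low, 1 = high */
	val |= (priority[i] & 0x1) << PRIORITY_BIT_POS;
	val &= ~(WEIGHT_MASK << WRR_BIT_POS);		/* bits 8..14: weight 1..15 */
	val |= (weight[i] & WEIGHT_MASK) << WRR_BIT_POS;
	writel(val, virtaddr + QOS_N_OFFSET + (4 * i));

The same parameters can be adjusted at run time through the chan*/priority
and chan*/weight sysfs nodes documented in this patch; each write re-runs
hidma_mgmt_setup().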

Signed-off-by: Sinan Kaya <[email protected]>
---
.../ABI/testing/sysfs-platform-hidma-mgmt | 97 +++++++
.../devicetree/bindings/dma/qcom_hidma_mgmt.txt | 61 ++++
drivers/dma/qcom/Kconfig | 11 +
drivers/dma/qcom/Makefile | 1 +
drivers/dma/qcom/hidma_mgmt.c | 308 +++++++++++++++++++++
drivers/dma/qcom/hidma_mgmt.h | 39 +++
drivers/dma/qcom/hidma_mgmt_sys.c | 299 ++++++++++++++++++++
7 files changed, 816 insertions(+)
create mode 100644 Documentation/ABI/testing/sysfs-platform-hidma-mgmt
create mode 100644 Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt
create mode 100644 drivers/dma/qcom/hidma_mgmt.c
create mode 100644 drivers/dma/qcom/hidma_mgmt.h
create mode 100644 drivers/dma/qcom/hidma_mgmt_sys.c

diff --git a/Documentation/ABI/testing/sysfs-platform-hidma-mgmt b/Documentation/ABI/testing/sysfs-platform-hidma-mgmt
new file mode 100644
index 0000000..4fc6876
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-platform-hidma-mgmt
@@ -0,0 +1,97 @@
+What: /sys/devices/platform/hidma-mgmt*/chan*/priority
+ /sys/devices/platform/QCOM8060:*/chan*/priority
+Date: Nov 2015
+KernelVersion: 4.4
+Contact: "Sinan Kaya <[email protected]>"
+Description:
+ Contains either 0 or 1 and indicates if the DMA channel is a
+ low priority (0) or high priority (1) channel.
+
+What: /sys/devices/platform/hidma-mgmt*/chan*/weight
+ /sys/devices/platform/QCOM8060:*/chan*/weight
+Date: Nov 2015
+KernelVersion: 4.4
+Contact: "Sinan Kaya <[email protected]>"
+Description:
+ Contains 0..15 and indicates the weight of the channel among
+ equal priority channels during round robin scheduling.
+
+What: /sys/devices/platform/hidma-mgmt*/chreset_timeout_cycles
+ /sys/devices/platform/QCOM8060:*/chreset_timeout_cycles
+Date: Nov 2015
+KernelVersion: 4.4
+Contact: "Sinan Kaya <[email protected]>"
+Description:
+ Contains the platform specific cycle value to wait after a
+ reset command is issued. If the value is chosen too short,
+ then the HW will issue a reset failure interrupt. The value
+ is platform specific and should not be changed without
+ consultation.
+
+What: /sys/devices/platform/hidma-mgmt*/dma_channels
+ /sys/devices/platform/QCOM8060:*/dma_channels
+Date: Nov 2015
+KernelVersion: 4.4
+Contact: "Sinan Kaya <[email protected]>"
+Description:
+ Contains the number of dma channels supported by one instance
+ of HIDMA hardware. The value may change from chip to chip.
+
+What: /sys/devices/platform/hidma-mgmt*/hw_version_major
+ /sys/devices/platform/QCOM8060:*/hw_version_major
+Date: Nov 2015
+KernelVersion: 4.4
+Contact: "Sinan Kaya <[email protected]>"
+Description:
+ Version number major for the hardware.
+
+What: /sys/devices/platform/hidma-mgmt*/hw_version_minor
+ /sys/devices/platform/QCOM8060:*/hw_version_minor
+Date: Nov 2015
+KernelVersion: 4.4
+Contact: "Sinan Kaya <[email protected]>"
+Description:
+ Version number minor for the hardware.
+
+What: /sys/devices/platform/hidma-mgmt*/max_rd_xactions
+ /sys/devices/platform/QCOM8060:*/max_rd_xactions
+Date: Nov 2015
+KernelVersion: 4.4
+Contact: "Sinan Kaya <[email protected]>"
+Description:
+ Contains a value between 0 and 31. Maximum number of
+ read transactions that can be issued back to back.
+ Choosing a higher number gives better performance but
+ can also cause performance reduction to other peripherals
+ sharing the same bus.
+
+What: /sys/devices/platform/hidma-mgmt*/max_read_request
+ /sys/devices/platform/QCOM8060:*/max_read_request
+Date: Nov 2015
+KernelVersion: 4.4
+Contact: "Sinan Kaya <[email protected]>"
+Description:
+ Size of each read request. The value needs to be a power
+ of two and can be between 128 and 1024.
+
+What: /sys/devices/platform/hidma-mgmt*/max_wr_xactions
+ /sys/devices/platform/QCOM8060:*/max_wr_xactions
+Date: Nov 2015
+KernelVersion: 4.4
+Contact: "Sinan Kaya <[email protected]>"
+Description:
+ Contains a value between 0 and 31. Maximum number of
+ write transactions that can be issued back to back.
+ Choosing a higher number gives better performance but
+ can also cause performance reduction to other peripherals
+ sharing the same bus.
+
+
+What: /sys/devices/platform/hidma-mgmt*/max_write_request
+ /sys/devices/platform/QCOM8060:*/max_write_request
+Date: Nov 2015
+KernelVersion: 4.4
+Contact: "Sinan Kaya <[email protected]>"
+Description:
+ Size of each write request. The value needs to be a power
+ of two and can be between 128 and 1024.
diff --git a/Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt b/Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt
new file mode 100644
index 0000000..eb053b9
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt
@@ -0,0 +1,61 @@
+Qualcomm Technologies HIDMA Management interface
+
+The Qualcomm Technologies HIDMA device has been designed
+to support virtualization technology. The driver has been
+divided into two to follow the hardware design. The management
+driver is executed in hypervisor context and is the main
+management entity for all channels provided by the device.
+
+Each HIDMA HW instance consists of multiple channels. These
+channels share a set of common parameters, which are
+initialized by the management driver during power up.
+The same management driver is used for monitoring the
+execution of the channels. The management driver can change
+performance behavior dynamically, such as bandwidth
+allocation and prioritization.
+
+All channel devices get probed in the hypervisor
+context during power up. They show up as DMA engine
+channels. Then, before starting virtualization, each
+channel device is unbound from the hypervisor by VFIO
+and assigned to the guest machine for control.
+
+This management driver will be used by the system
+admin to monitor/reset the execution state of the DMA
+channels. This will be the management interface.
+
+
+Required properties:
+- compatible: "qcom,hidma-mgmt-1.0";
+- reg: Address range for DMA device
+- dma-channels: Number of channels supported by this DMA controller.
+- max-write-burst-bytes: Maximum write burst in bytes. A memcpy request is
+ fragmented to multiples of this amount.
+- max-read-burst-bytes: Maximum read burst in bytes. A memcpy request is
+ fragmented to multiples of this amount.
+- max-write-transactions: Maximum write transactions to perform in a burst
+- max-read-transactions: Maximum read transactions to perform in a burst
+- channel-reset-timeout-cycles: Channel reset timeout in cycles for this SOC.
+- channel-priority: Priority of the channel.
+ Each DMA channel shares the same HW bandwidth with the other DMA channels.
+ If two requests reach the HW at the same time from a low priority and a
+ high priority channel, the high priority channel will claim the bus.
+ 0=low priority, 1=high priority
+- channel-weight: Round robin weight of the channel
+ Since there are only two priority levels supported, scheduling among
+ the equal priority channels is done via weights.
+
+Example:
+
+ hidma-mgmt@f9984000 {
+ compatible = "qcom,hidma-mgmt-1.0";
+ reg = <0xf9984000 0x15000>;
+ dma-channels = <6>;
+ max-write-burst-bytes = <1024>;
+ max-read-burst-bytes = <1024>;
+ max-write-transactions = <31>;
+ max-read-transactions = <31>;
+ channel-reset-timeout-cycles = <0x500>;
+ channel-priority = <1 1 0 0 0 0>;
+ channel-weight = <1 13 10 3 4 5>;
+ };
diff --git a/drivers/dma/qcom/Kconfig b/drivers/dma/qcom/Kconfig
index f17c272..ebe5b8c 100644
--- a/drivers/dma/qcom/Kconfig
+++ b/drivers/dma/qcom/Kconfig
@@ -6,3 +6,14 @@ config QCOM_BAM_DMA
---help---
Enable support for the QCOM BAM DMA controller. This controller
provides DMA capabilities for a variety of on-chip devices.
+
+config QCOM_HIDMA_MGMT
+ tristate "Qualcomm Technologies HIDMA Management support"
+ select DMA_ENGINE
+ help
+ Enable support for the Qualcomm Technologies HIDMA Management.
+ Each DMA device requires one management interface driver
+ for basic initialization before QCOM_HIDMA channel driver can
+ start managing the channels. In a virtualized environment,
+ the guest OS would run QCOM_HIDMA channel driver and the
+ hypervisor would run the QCOM_HIDMA_MGMT management driver.
diff --git a/drivers/dma/qcom/Makefile b/drivers/dma/qcom/Makefile
index f612ae3..1a5a96d 100644
--- a/drivers/dma/qcom/Makefile
+++ b/drivers/dma/qcom/Makefile
@@ -1 +1,2 @@
obj-$(CONFIG_QCOM_BAM_DMA) += bam_dma.o
+obj-$(CONFIG_QCOM_HIDMA_MGMT) += hidma_mgmt.o hidma_mgmt_sys.o
diff --git a/drivers/dma/qcom/hidma_mgmt.c b/drivers/dma/qcom/hidma_mgmt.c
new file mode 100644
index 0000000..ea78a4d
--- /dev/null
+++ b/drivers/dma/qcom/hidma_mgmt.c
@@ -0,0 +1,308 @@
+/*
+ * Qualcomm Technologies HIDMA DMA engine Management interface
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/dmaengine.h>
+#include <linux/acpi.h>
+#include <linux/of.h>
+#include <linux/property.h>
+#include <linux/interrupt.h>
+#include <linux/platform_device.h>
+#include <linux/module.h>
+#include <linux/uaccess.h>
+#include <linux/slab.h>
+#include <linux/pm_runtime.h>
+#include <linux/bitops.h>
+
+#include "hidma_mgmt.h"
+
+#define QOS_N_OFFSET 0x300
+#define CFG_OFFSET 0x400
+#define MAX_BUS_REQ_LEN_OFFSET 0x41C
+#define MAX_XACTIONS_OFFSET 0x420
+#define HW_VERSION_OFFSET 0x424
+#define CHRESET_TIMEOUT_OFFSET 0x418
+
+#define MAX_WR_XACTIONS_MASK GENMASK(4, 0)
+#define MAX_RD_XACTIONS_MASK GENMASK(4, 0)
+#define WEIGHT_MASK GENMASK(6, 0)
+#define MAX_BUS_REQ_LEN_MASK GENMASK(15, 0)
+#define CHRESET_TIMEOUT_MASK GENMASK(19, 0)
+
+#define MAX_WR_XACTIONS_BIT_POS 16
+#define MAX_BUS_WR_REQ_BIT_POS 16
+#define WRR_BIT_POS 8
+#define PRIORITY_BIT_POS 15
+
+#define AUTOSUSPEND_TIMEOUT 2000
+#define MAX_CHANNEL_WEIGHT 15
+
+int hidma_mgmt_setup(struct hidma_mgmt_dev *mgmtdev)
+{
+ unsigned int i;
+ u32 val;
+
+ if (!is_power_of_2(mgmtdev->max_write_request) ||
+ (mgmtdev->max_write_request < 128) ||
+ (mgmtdev->max_write_request > 1024)) {
+ dev_err(&mgmtdev->pdev->dev, "invalid write request %d\n",
+ mgmtdev->max_write_request);
+ return -EINVAL;
+ }
+
+ if (!is_power_of_2(mgmtdev->max_read_request) ||
+ (mgmtdev->max_read_request < 128) ||
+ (mgmtdev->max_read_request > 1024)) {
+ dev_err(&mgmtdev->pdev->dev, "invalid read request %d\n",
+ mgmtdev->max_read_request);
+ return -EINVAL;
+ }
+
+ if (mgmtdev->max_wr_xactions > MAX_WR_XACTIONS_MASK) {
+ dev_err(&mgmtdev->pdev->dev,
+ "max_wr_xactions cannot be bigger than %ld\n",
+ MAX_WR_XACTIONS_MASK);
+ return -EINVAL;
+ }
+
+ if (mgmtdev->max_rd_xactions > MAX_RD_XACTIONS_MASK) {
+ dev_err(&mgmtdev->pdev->dev,
+ "max_rd_xactions cannot be bigger than %ld\n",
+ MAX_RD_XACTIONS_MASK);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < mgmtdev->dma_channels; i++) {
+ if (mgmtdev->priority[i] > 1) {
+ dev_err(&mgmtdev->pdev->dev,
+ "priority can be 0 or 1\n");
+ return -EINVAL;
+ }
+
+ if (mgmtdev->weight[i] > MAX_CHANNEL_WEIGHT) {
+ dev_err(&mgmtdev->pdev->dev,
+ "max value of weight can be %d.\n",
+ MAX_CHANNEL_WEIGHT);
+ return -EINVAL;
+ }
+
+ /* weight needs to be at least one */
+ if (mgmtdev->weight[i] == 0)
+ mgmtdev->weight[i] = 1;
+ }
+
+ pm_runtime_get_sync(&mgmtdev->pdev->dev);
+ val = readl(mgmtdev->virtaddr + MAX_BUS_REQ_LEN_OFFSET);
+ val &= ~(MAX_BUS_REQ_LEN_MASK << MAX_BUS_WR_REQ_BIT_POS);
+ val |= mgmtdev->max_write_request << MAX_BUS_WR_REQ_BIT_POS;
+ val &= ~MAX_BUS_REQ_LEN_MASK;
+ val |= mgmtdev->max_read_request;
+ writel(val, mgmtdev->virtaddr + MAX_BUS_REQ_LEN_OFFSET);
+
+ val = readl(mgmtdev->virtaddr + MAX_XACTIONS_OFFSET);
+ val &= ~(MAX_WR_XACTIONS_MASK << MAX_WR_XACTIONS_BIT_POS);
+ val |= mgmtdev->max_wr_xactions << MAX_WR_XACTIONS_BIT_POS;
+ val &= ~MAX_RD_XACTIONS_MASK;
+ val |= mgmtdev->max_rd_xactions;
+ writel(val, mgmtdev->virtaddr + MAX_XACTIONS_OFFSET);
+
+ mgmtdev->hw_version = readl(mgmtdev->virtaddr + HW_VERSION_OFFSET);
+ mgmtdev->hw_version_major = (mgmtdev->hw_version >> 28) & 0xF;
+ mgmtdev->hw_version_minor = (mgmtdev->hw_version >> 16) & 0xF;
+
+ for (i = 0; i < mgmtdev->dma_channels; i++) {
+ val = readl(mgmtdev->virtaddr + QOS_N_OFFSET + (4 * i));
+ val &= ~(1 << PRIORITY_BIT_POS);
+ val |= (mgmtdev->priority[i] & 0x1) << PRIORITY_BIT_POS;
+ val &= ~(WEIGHT_MASK << WRR_BIT_POS);
+ val |= (mgmtdev->weight[i] & WEIGHT_MASK) << WRR_BIT_POS;
+ writel(val, mgmtdev->virtaddr + QOS_N_OFFSET + (4 * i));
+ }
+
+ val = readl(mgmtdev->virtaddr + CHRESET_TIMEOUT_OFFSET);
+ val &= ~CHRESET_TIMEOUT_MASK;
+ val |= mgmtdev->chreset_timeout_cycles & CHRESET_TIMEOUT_MASK;
+ writel(val, mgmtdev->virtaddr + CHRESET_TIMEOUT_OFFSET);
+
+ pm_runtime_mark_last_busy(&mgmtdev->pdev->dev);
+ pm_runtime_put_autosuspend(&mgmtdev->pdev->dev);
+ return 0;
+}
+EXPORT_SYMBOL_GPL(hidma_mgmt_setup);
+
+static int hidma_mgmt_probe(struct platform_device *pdev)
+{
+ struct hidma_mgmt_dev *mgmtdev;
+ struct resource *res;
+ void __iomem *virtaddr;
+ int irq;
+ int rc;
+ u32 val;
+
+ pm_runtime_set_autosuspend_delay(&pdev->dev, AUTOSUSPEND_TIMEOUT);
+ pm_runtime_use_autosuspend(&pdev->dev);
+ pm_runtime_set_active(&pdev->dev);
+ pm_runtime_enable(&pdev->dev);
+ pm_runtime_get_sync(&pdev->dev);
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ virtaddr = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(virtaddr)) {
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ irq = platform_get_irq(pdev, 0);
+ if (irq < 0) {
+ dev_err(&pdev->dev, "irq resources not found\n");
+ rc = irq;
+ goto out;
+ }
+
+ mgmtdev = devm_kzalloc(&pdev->dev, sizeof(*mgmtdev), GFP_KERNEL);
+ if (!mgmtdev) {
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ mgmtdev->pdev = pdev;
+ mgmtdev->addrsize = resource_size(res);
+ mgmtdev->virtaddr = virtaddr;
+
+ rc = device_property_read_u32(&pdev->dev, "dma-channels",
+ &mgmtdev->dma_channels);
+ if (rc) {
+ dev_err(&pdev->dev, "number of channels missing\n");
+ goto out;
+ }
+
+ rc = device_property_read_u32(&pdev->dev,
+ "channel-reset-timeout-cycles",
+ &mgmtdev->chreset_timeout_cycles);
+ if (rc) {
+ dev_err(&pdev->dev, "channel reset timeout missing\n");
+ goto out;
+ }
+
+ rc = device_property_read_u32(&pdev->dev, "max-write-burst-bytes",
+ &mgmtdev->max_write_request);
+ if (rc) {
+ dev_err(&pdev->dev, "max-write-burst-bytes missing\n");
+ goto out;
+ }
+
+ rc = device_property_read_u32(&pdev->dev, "max-read-burst-bytes",
+ &mgmtdev->max_read_request);
+ if (rc) {
+ dev_err(&pdev->dev, "max-read-burst-bytes missing\n");
+ goto out;
+ }
+
+ rc = device_property_read_u32(&pdev->dev, "max-write-transactions",
+ &mgmtdev->max_wr_xactions);
+ if (rc) {
+ dev_err(&pdev->dev, "max-write-transactions missing\n");
+ goto out;
+ }
+
+ rc = device_property_read_u32(&pdev->dev, "max-read-transactions",
+ &mgmtdev->max_rd_xactions);
+ if (rc) {
+ dev_err(&pdev->dev, "max-read-transactions missing\n");
+ goto out;
+ }
+
+ mgmtdev->priority = devm_kcalloc(&pdev->dev,
+ mgmtdev->dma_channels, sizeof(*mgmtdev->priority), GFP_KERNEL);
+ if (!mgmtdev->priority) {
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ mgmtdev->weight = devm_kcalloc(&pdev->dev,
+ mgmtdev->dma_channels, sizeof(*mgmtdev->weight), GFP_KERNEL);
+ if (!mgmtdev->weight) {
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ rc = device_property_read_u32_array(&pdev->dev, "channel-priority",
+ mgmtdev->priority, mgmtdev->dma_channels);
+ if (rc) {
+ dev_err(&pdev->dev, "channel-priority missing\n");
+ goto out;
+ }
+
+ rc = device_property_read_u32_array(&pdev->dev, "channel-weight",
+ mgmtdev->weight, mgmtdev->dma_channels);
+ if (rc) {
+ dev_err(&pdev->dev, "channel-weight missing\n");
+ goto out;
+ }
+
+ rc = hidma_mgmt_setup(mgmtdev);
+ if (rc) {
+ dev_err(&pdev->dev, "setup failed\n");
+ goto out;
+ }
+
+ /* start the HW */
+ val = readl(mgmtdev->virtaddr + CFG_OFFSET);
+ val |= 1;
+ writel(val, mgmtdev->virtaddr + CFG_OFFSET);
+
+ rc = hidma_mgmt_init_sys(mgmtdev);
+ if (rc) {
+ dev_err(&pdev->dev, "sysfs setup failed\n");
+ goto out;
+ }
+
+ dev_info(&pdev->dev,
+ "HW rev: %d.%d @ %pa with %d physical channels\n",
+ mgmtdev->hw_version_major, mgmtdev->hw_version_minor,
+ &res->start, mgmtdev->dma_channels);
+
+ platform_set_drvdata(pdev, mgmtdev);
+ pm_runtime_mark_last_busy(&pdev->dev);
+ pm_runtime_put_autosuspend(&pdev->dev);
+ return 0;
+out:
+ pm_runtime_put_sync_suspend(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
+ return rc;
+}
+
+#if IS_ENABLED(CONFIG_ACPI)
+static const struct acpi_device_id hidma_mgmt_acpi_ids[] = {
+ {"QCOM8060"},
+ {},
+};
+#endif
+
+static const struct of_device_id hidma_mgmt_match[] = {
+ { .compatible = "qcom,hidma-mgmt-1.0", },
+ {},
+};
+MODULE_DEVICE_TABLE(of, hidma_mgmt_match);
+
+static struct platform_driver hidma_mgmt_driver = {
+ .probe = hidma_mgmt_probe,
+ .driver = {
+ .name = "hidma-mgmt",
+ .of_match_table = hidma_mgmt_match,
+ .acpi_match_table = ACPI_PTR(hidma_mgmt_acpi_ids),
+ },
+};
+module_platform_driver(hidma_mgmt_driver);
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/dma/qcom/hidma_mgmt.h b/drivers/dma/qcom/hidma_mgmt.h
new file mode 100644
index 0000000..f7daf33
--- /dev/null
+++ b/drivers/dma/qcom/hidma_mgmt.h
@@ -0,0 +1,39 @@
+/*
+ * Qualcomm Technologies HIDMA Management common header
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+struct hidma_mgmt_dev {
+ u8 hw_version_major;
+ u8 hw_version_minor;
+
+ u32 max_wr_xactions;
+ u32 max_rd_xactions;
+ u32 max_write_request;
+ u32 max_read_request;
+ u32 dma_channels;
+ u32 chreset_timeout_cycles;
+ u32 hw_version;
+ u32 *priority;
+ u32 *weight;
+
+ /* Hardware device constants */
+ void __iomem *virtaddr;
+ resource_size_t addrsize;
+
+ struct kobject **chroots;
+ struct platform_device *pdev;
+};
+
+int hidma_mgmt_init_sys(struct hidma_mgmt_dev *dev);
+int hidma_mgmt_setup(struct hidma_mgmt_dev *mgmtdev);
diff --git a/drivers/dma/qcom/hidma_mgmt_sys.c b/drivers/dma/qcom/hidma_mgmt_sys.c
new file mode 100644
index 0000000..a4a6414
--- /dev/null
+++ b/drivers/dma/qcom/hidma_mgmt_sys.c
@@ -0,0 +1,299 @@
+/*
+ * Qualcomm Technologies HIDMA Management SYS interface
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/sysfs.h>
+#include <linux/platform_device.h>
+
+#include "hidma_mgmt.h"
+
+struct hidma_chan_attr {
+ struct hidma_mgmt_dev *mdev;
+ int index;
+ struct kobj_attribute attr;
+};
+
+struct hidma_mgmt_fileinfo {
+ char *name;
+ int mode;
+ int (*get)(struct hidma_mgmt_dev *mdev);
+ int (*set)(struct hidma_mgmt_dev *mdev, u64 val);
+};
+
+#define IMPLEMENT_GETSET(name) \
+static int get_##name(struct hidma_mgmt_dev *mdev) \
+{ \
+ return mdev->name; \
+} \
+static int set_##name(struct hidma_mgmt_dev *mdev, u64 val) \
+{ \
+ u64 tmp; \
+ int rc; \
+ \
+ tmp = mdev->name; \
+ mdev->name = val; \
+ rc = hidma_mgmt_setup(mdev); \
+ if (rc) \
+ mdev->name = tmp; \
+ return rc; \
+}
+
+#define DECLARE_ATTRIBUTE(name, mode) \
+ {#name, mode, get_##name, set_##name}
+
+IMPLEMENT_GETSET(hw_version_major)
+IMPLEMENT_GETSET(hw_version_minor)
+IMPLEMENT_GETSET(max_wr_xactions)
+IMPLEMENT_GETSET(max_rd_xactions)
+IMPLEMENT_GETSET(max_write_request)
+IMPLEMENT_GETSET(max_read_request)
+IMPLEMENT_GETSET(dma_channels)
+IMPLEMENT_GETSET(chreset_timeout_cycles)
+
+static int set_priority(struct hidma_mgmt_dev *mdev, unsigned int i, u64 val)
+{
+ u64 tmp;
+ int rc;
+
+ if (i >= mdev->dma_channels)
+ return -EINVAL;
+
+ tmp = mdev->priority[i];
+ mdev->priority[i] = val;
+ rc = hidma_mgmt_setup(mdev);
+ if (rc)
+ mdev->priority[i] = tmp;
+ return rc;
+}
+
+static int set_weight(struct hidma_mgmt_dev *mdev, unsigned int i, u64 val)
+{
+ u64 tmp;
+ int rc;
+
+ if (i >= mdev->dma_channels)
+ return -EINVAL;
+
+ tmp = mdev->weight[i];
+ mdev->weight[i] = val;
+ rc = hidma_mgmt_setup(mdev);
+ if (rc)
+ mdev->weight[i] = tmp;
+ return rc;
+}
+
+static struct hidma_mgmt_fileinfo hidma_mgmt_files[] = {
+ DECLARE_ATTRIBUTE(hw_version_major, S_IRUGO),
+ DECLARE_ATTRIBUTE(hw_version_minor, S_IRUGO),
+ DECLARE_ATTRIBUTE(dma_channels, S_IRUGO),
+ DECLARE_ATTRIBUTE(chreset_timeout_cycles, S_IRUGO),
+ DECLARE_ATTRIBUTE(max_wr_xactions, (S_IRUGO|S_IWUGO)),
+ DECLARE_ATTRIBUTE(max_rd_xactions, (S_IRUGO|S_IWUGO)),
+ DECLARE_ATTRIBUTE(max_write_request, (S_IRUGO|S_IWUGO)),
+ DECLARE_ATTRIBUTE(max_read_request, (S_IRUGO|S_IWUGO)),
+};
+
+static ssize_t show_values(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct hidma_mgmt_dev *mdev = platform_get_drvdata(pdev);
+ unsigned int i;
+
+ buf[0] = 0;
+
+ for (i = 0; i < ARRAY_SIZE(hidma_mgmt_files); i++) {
+ if (strcmp(attr->attr.name, hidma_mgmt_files[i].name) == 0) {
+ sprintf(buf, "%d\n", hidma_mgmt_files[i].get(mdev));
+ goto done;
+ }
+ }
+done:
+ return strlen(buf);
+}
+
+static ssize_t set_values(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct hidma_mgmt_dev *mdev = platform_get_drvdata(pdev);
+ unsigned long tmp;
+ unsigned int i;
+ int rc;
+
+ rc = kstrtoul(buf, 0, &tmp);
+ if (rc)
+ return rc;
+
+ for (i = 0; i < ARRAY_SIZE(hidma_mgmt_files); i++) {
+ if (strcmp(attr->attr.name, hidma_mgmt_files[i].name) == 0) {
+ rc = hidma_mgmt_files[i].set(mdev, tmp);
+ if (rc)
+ return rc;
+
+ goto done;
+ }
+ }
+done:
+ return count;
+}
+
+static ssize_t show_values_channel(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ struct hidma_chan_attr *chattr;
+ struct hidma_mgmt_dev *mdev;
+
+ buf[0] = 0;
+ chattr = container_of(attr, struct hidma_chan_attr, attr);
+ mdev = chattr->mdev;
+ if (strcmp(attr->attr.name, "priority") == 0) {
+ sprintf(buf, "%d\n", mdev->priority[chattr->index]);
+ goto done;
+ }
+
+ if (strcmp(attr->attr.name, "weight") == 0) {
+ sprintf(buf, "%d\n", mdev->weight[chattr->index]);
+ goto done;
+ }
+
+done:
+ return strlen(buf);
+}
+
+static ssize_t set_values_channel(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+ struct hidma_chan_attr *chattr;
+ struct hidma_mgmt_dev *mdev;
+ unsigned long tmp;
+ int rc;
+
+ chattr = container_of(attr, struct hidma_chan_attr, attr);
+ mdev = chattr->mdev;
+
+ rc = kstrtoul(buf, 0, &tmp);
+ if (rc)
+ return rc;
+
+ if (strcmp(attr->attr.name, "priority") == 0) {
+ rc = set_priority(mdev, chattr->index, tmp);
+ if (rc)
+ return rc;
+ goto done;
+ }
+
+ if (strcmp(attr->attr.name, "weight") == 0) {
+ rc = set_weight(mdev, chattr->index, tmp);
+ if (rc)
+ return rc;
+ }
+done:
+ return count;
+}
+
+static int create_sysfs_entry(struct hidma_mgmt_dev *dev, char *name, int mode)
+{
+ struct device_attribute *attrs;
+ char *name_copy;
+
+ attrs = devm_kmalloc(&dev->pdev->dev,
+ sizeof(struct device_attribute), GFP_KERNEL);
+ if (!attrs)
+ return -ENOMEM;
+
+ name_copy = devm_kstrdup(&dev->pdev->dev, name, GFP_KERNEL);
+ if (!name_copy)
+ return -ENOMEM;
+
+ attrs->attr.name = name_copy;
+ attrs->attr.mode = mode;
+ attrs->show = show_values;
+ attrs->store = set_values;
+ sysfs_attr_init(&attrs->attr);
+
+ return device_create_file(&dev->pdev->dev, attrs);
+}
+
+static int create_sysfs_entry_channel(struct hidma_mgmt_dev *mdev, char *name,
+ int mode, int index, struct kobject *parent)
+{
+ struct hidma_chan_attr *chattr;
+ char *name_copy;
+
+ chattr = devm_kmalloc(&mdev->pdev->dev, sizeof(*chattr), GFP_KERNEL);
+ if (!chattr)
+ return -ENOMEM;
+
+ name_copy = devm_kstrdup(&mdev->pdev->dev, name, GFP_KERNEL);
+ if (!name_copy)
+ return -ENOMEM;
+
+ chattr->mdev = mdev;
+ chattr->index = index;
+ chattr->attr.attr.name = name_copy;
+ chattr->attr.attr.mode = mode;
+ chattr->attr.show = show_values_channel;
+ chattr->attr.store = set_values_channel;
+ sysfs_attr_init(&chattr->attr.attr);
+
+ return sysfs_create_file(parent, &chattr->attr.attr);
+}
+
+int hidma_mgmt_init_sys(struct hidma_mgmt_dev *mdev)
+{
+ unsigned int i;
+ int rc;
+ int required;
+
+ required = sizeof(*mdev->chroots) * mdev->dma_channels;
+ mdev->chroots = devm_kmalloc(&mdev->pdev->dev, required, GFP_KERNEL);
+ if (!mdev->chroots)
+ return -ENOMEM;
+
+ /* create each channel directory here */
+ for (i = 0; i < mdev->dma_channels; i++) {
+ char name[10];
+
+ snprintf(name, sizeof(name), "chan%d", i);
+ mdev->chroots[i] = kobject_create_and_add(name,
+ &mdev->pdev->dev.kobj);
+ if (!mdev->chroots[i])
+ return -ENOMEM;
+ }
+
+ /* populate common parameters */
+ for (i = 0; i < ARRAY_SIZE(hidma_mgmt_files); i++) {
+ rc = create_sysfs_entry(mdev, hidma_mgmt_files[i].name,
+ hidma_mgmt_files[i].mode);
+ if (rc)
+ return rc;
+ }
+
+ /* populate parameters that are per channel */
+ for (i = 0; i < mdev->dma_channels; i++) {
+ rc = create_sysfs_entry_channel(mdev, "priority",
+ (S_IRUGO|S_IWUGO), i, mdev->chroots[i]);
+ if (rc)
+ return rc;
+
+ rc = create_sysfs_entry_channel(mdev, "weight",
+ (S_IRUGO|S_IWUGO), i, mdev->chroots[i]);
+ if (rc)
+ return rc;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(hidma_mgmt_init_sys);
--
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project

2015-11-23 02:29:06

by Sinan Kaya

Subject: [PATCH V7 3/3] dma: add Qualcomm Technologies HIDMA channel driver

This patch adds support for the HIDMA engine. The driver consists
of two logical blocks: the DMA engine interface and the
low-level interface. The hardware only supports memcpy/memset,
and this driver only supports the memcpy interface. Neither the
HW nor the driver supports the slave interface.
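
Since the only capability exposed is DMA_MEMCPY, clients drive the
channel through the standard dmaengine API. A minimal, hypothetical
client sketch follows (not part of this patch; the function name and
the polling loop are illustrative only, and dst/src are assumed to be
already DMA mapped):

	#include <linux/dmaengine.h>

	/* Hypothetical client: copy 'len' bytes between two DMA addresses. */
	static int example_hidma_copy(dma_addr_t dst, dma_addr_t src, size_t len)
	{
		dma_cap_mask_t mask;
		struct dma_chan *chan;
		struct dma_async_tx_descriptor *tx;
		dma_cookie_t cookie;

		dma_cap_zero(mask);
		dma_cap_set(DMA_MEMCPY, mask);

		/* any memcpy capable channel, e.g. one registered by this driver */
		chan = dma_request_channel(mask, NULL, NULL);
		if (!chan)
			return -ENODEV;

		tx = chan->device->device_prep_dma_memcpy(chan, dst, src, len,
							   DMA_CTRL_ACK);
		if (!tx) {
			dma_release_channel(chan);
			return -ENOMEM;
		}

		cookie = dmaengine_submit(tx);
		dma_async_issue_pending(chan);

		/* busy-wait only for illustration; real users use the callback */
		while (dma_async_is_tx_complete(chan, cookie, NULL, NULL) ==
		       DMA_IN_PROGRESS)
			cpu_relax();

		dma_release_channel(chan);
		return 0;
	}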

Signed-off-by: Sinan Kaya <[email protected]>
---
.../devicetree/bindings/dma/qcom_hidma.txt | 18 +
drivers/dma/qcom/Kconfig | 10 +
drivers/dma/qcom/Makefile | 2 +
drivers/dma/qcom/hidma.c | 706 ++++++++++++++++
drivers/dma/qcom/hidma.h | 157 ++++
drivers/dma/qcom/hidma_dbg.c | 217 +++++
drivers/dma/qcom/hidma_ll.c | 924 +++++++++++++++++++++
7 files changed, 2034 insertions(+)
create mode 100644 Documentation/devicetree/bindings/dma/qcom_hidma.txt
create mode 100644 drivers/dma/qcom/hidma.c
create mode 100644 drivers/dma/qcom/hidma.h
create mode 100644 drivers/dma/qcom/hidma_dbg.c
create mode 100644 drivers/dma/qcom/hidma_ll.c

diff --git a/Documentation/devicetree/bindings/dma/qcom_hidma.txt b/Documentation/devicetree/bindings/dma/qcom_hidma.txt
new file mode 100644
index 0000000..d18c8fc
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/qcom_hidma.txt
@@ -0,0 +1,18 @@
+Qualcomm Technologies HIDMA Channel driver
+
+Required properties:
+- compatible: must contain "qcom,hidma-1.0"
+- reg: Addresses for the transfer and event channel
+- interrupts: Should contain the event interrupt
+- desc-count: Number of asynchronous requests this channel can handle
+- event-channel: The HW event channel where completions will be delivered.
+Example:
+
+ hidma_24: dma-controller@5c050000 {
+ compatible = "qcom,hidma-1.0";
+ reg = <0 0x5c050000 0x0 0x1000>,
+ <0 0x5c0b0000 0x0 0x1000>;
+ interrupts = <0 389 0>;
+ desc-count = <10>;
+ event-channel = <4>;
+ };
diff --git a/drivers/dma/qcom/Kconfig b/drivers/dma/qcom/Kconfig
index ebe5b8c..5588e1c 100644
--- a/drivers/dma/qcom/Kconfig
+++ b/drivers/dma/qcom/Kconfig
@@ -17,3 +17,13 @@ config QCOM_HIDMA_MGMT
start managing the channels. In a virtualized environment,
the guest OS would run QCOM_HIDMA channel driver and the
hypervisor would run the QCOM_HIDMA_MGMT management driver.
+
+config QCOM_HIDMA
+ tristate "Qualcomm Technologies HIDMA Channel support"
+ select DMA_ENGINE
+ help
+ Enable support for the Qualcomm Technologies HIDMA controller.
+ The HIDMA controller supports optimized buffer copies
+ (user to kernel, kernel to kernel, etc.). It only supports the
+ memcpy interface. The core is not intended for general
+ purpose slave DMA.
diff --git a/drivers/dma/qcom/Makefile b/drivers/dma/qcom/Makefile
index 1a5a96d..f702df1 100644
--- a/drivers/dma/qcom/Makefile
+++ b/drivers/dma/qcom/Makefile
@@ -1,2 +1,4 @@
obj-$(CONFIG_QCOM_BAM_DMA) += bam_dma.o
obj-$(CONFIG_QCOM_HIDMA_MGMT) += hidma_mgmt.o hidma_mgmt_sys.o
+obj-$(CONFIG_QCOM_HIDMA) += hdma.o
+hdma-objs := hidma_ll.o hidma.o hidma_dbg.o
diff --git a/drivers/dma/qcom/hidma.c b/drivers/dma/qcom/hidma.c
new file mode 100644
index 0000000..c0c7f44
--- /dev/null
+++ b/drivers/dma/qcom/hidma.c
@@ -0,0 +1,706 @@
+/*
+ * Qualcomm Technologies HIDMA DMA engine interface
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/*
+ * Copyright (C) Freescale Semicondutor, Inc. 2007, 2008.
+ * Copyright (C) Semihalf 2009
+ * Copyright (C) Ilya Yanok, Emcraft Systems 2010
+ * Copyright (C) Alexander Popov, Promcontroller 2014
+ *
+ * Written by Piotr Ziecik <[email protected]>. Hardware description
+ * (defines, structures and comments) was taken from MPC5121 DMA driver
+ * written by Hongjun Chen <[email protected]>.
+ *
+ * Approved as OSADL project by a majority of OSADL members and funded
+ * by OSADL membership fees in 2009; for details see http://www.osadl.org.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in the
+ * file called COPYING.
+ */
+
+/* Linux Foundation elects GPLv2 license only. */
+
+#include <linux/dmaengine.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/of_dma.h>
+#include <linux/property.h>
+#include <linux/delay.h>
+#include <linux/highmem.h>
+#include <linux/io.h>
+#include <linux/sched.h>
+#include <linux/wait.h>
+#include <linux/acpi.h>
+#include <linux/irq.h>
+#include <linux/atomic.h>
+#include <linux/pm_runtime.h>
+
+#include "../dmaengine.h"
+#include "hidma.h"
+
+/*
+ * Default idle time is 2 seconds. This parameter can
+ * be overridden by changing the following
+ * /sys/bus/platform/devices/QCOM8061:<xy>/power/autosuspend_delay_ms
+ * during kernel boot.
+ */
+#define AUTOSUSPEND_TIMEOUT 2000
+#define ERR_INFO_SW 0xFF
+#define ERR_CODE_UNEXPECTED_TERMINATE 0x0
+
+static inline struct hidma_dev *to_hidma_dev(struct dma_device *dmadev)
+{
+ return container_of(dmadev, struct hidma_dev, ddev);
+}
+
+static inline
+struct hidma_dev *to_hidma_dev_from_lldev(struct hidma_lldev **_lldevp)
+{
+ return container_of(_lldevp, struct hidma_dev, lldev);
+}
+
+static inline struct hidma_chan *to_hidma_chan(struct dma_chan *dmach)
+{
+ return container_of(dmach, struct hidma_chan, chan);
+}
+
+static inline struct hidma_desc *
+to_hidma_desc(struct dma_async_tx_descriptor *t)
+{
+ return container_of(t, struct hidma_desc, desc);
+}
+
+static void hidma_free(struct hidma_dev *dmadev)
+{
+ INIT_LIST_HEAD(&dmadev->ddev.channels);
+}
+
+static unsigned int nr_desc_prm;
+module_param(nr_desc_prm, uint, 0644);
+MODULE_PARM_DESC(nr_desc_prm, "number of descriptors (default: 0)");
+
+#define MAX_HIDMA_CHANNELS 64
+static int event_channel_idx[MAX_HIDMA_CHANNELS] = {
+ [0 ... (MAX_HIDMA_CHANNELS - 1)] = -1
+};
+
+static unsigned int num_event_channel_idx;
+module_param_array_named(event_channel_idx, event_channel_idx, int,
+ &num_event_channel_idx, 0644);
+MODULE_PARM_DESC(event_channel_idx,
+ "event channel index array for the notifications");
+static atomic_t channel_ref_count;
+
+/* process completed descriptors */
+static void hidma_process_completed(struct hidma_dev *mdma)
+{
+ struct dma_async_tx_descriptor *desc;
+ dma_cookie_t last_cookie;
+ struct hidma_chan *mchan;
+ struct hidma_desc *mdesc;
+ unsigned long irqflags;
+ struct list_head list;
+ struct dma_chan *dmach;
+
+ list_for_each_entry(dmach, &mdma->ddev.channels, device_node) {
+ mchan = to_hidma_chan(dmach);
+ INIT_LIST_HEAD(&list);
+
+ /* Get all completed descriptors */
+ spin_lock_irqsave(&mchan->lock, irqflags);
+ list_splice_tail_init(&mchan->completed, &list);
+ spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+ /* Execute callbacks and run dependencies */
+ list_for_each_entry(mdesc, &list, node) {
+ enum dma_status llstat;
+
+ desc = &mdesc->desc;
+
+ spin_lock_irqsave(&mchan->lock, irqflags);
+ dma_cookie_complete(desc);
+ spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+ llstat = hidma_ll_status(mdma->lldev, mdesc->tre_ch);
+ if (desc->callback && (llstat == DMA_COMPLETE))
+ desc->callback(desc->callback_param);
+
+ last_cookie = desc->cookie;
+ dma_run_dependencies(desc);
+ }
+
+ /* Free descriptors */
+ spin_lock_irqsave(&mchan->lock, irqflags);
+ list_splice_tail_init(&list, &mchan->free);
+ spin_unlock_irqrestore(&mchan->lock, irqflags);
+ }
+}
+
+/*
+ * Called once for each submitted descriptor.
+ * PM is locked once for each descriptor that is currently
+ * in execution.
+ */
+static void hidma_callback(void *data)
+{
+ struct hidma_desc *mdesc = data;
+ struct hidma_chan *mchan = to_hidma_chan(mdesc->desc.chan);
+ struct dma_device *ddev = mchan->chan.device;
+ struct hidma_dev *dmadev = to_hidma_dev(ddev);
+ unsigned long irqflags;
+ bool queued = false;
+
+ spin_lock_irqsave(&mchan->lock, irqflags);
+ if (mdesc->node.next) {
+ /* Delete from the active list, add to completed list */
+ list_move_tail(&mdesc->node, &mchan->completed);
+ queued = true;
+ }
+ spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+ hidma_process_completed(dmadev);
+
+ if (queued) {
+ pm_runtime_mark_last_busy(dmadev->ddev.dev);
+ pm_runtime_put_autosuspend(dmadev->ddev.dev);
+ }
+}
+
+static int hidma_chan_init(struct hidma_dev *dmadev, u32 dma_sig)
+{
+ struct hidma_chan *mchan;
+ struct dma_device *ddev;
+
+ mchan = devm_kzalloc(dmadev->ddev.dev, sizeof(*mchan), GFP_KERNEL);
+ if (!mchan)
+ return -ENOMEM;
+
+ ddev = &dmadev->ddev;
+ mchan->dma_sig = dma_sig;
+ mchan->dmadev = dmadev;
+ mchan->chan.device = ddev;
+ dma_cookie_init(&mchan->chan);
+
+ INIT_LIST_HEAD(&mchan->free);
+ INIT_LIST_HEAD(&mchan->prepared);
+ INIT_LIST_HEAD(&mchan->active);
+ INIT_LIST_HEAD(&mchan->completed);
+
+ spin_lock_init(&mchan->lock);
+ list_add_tail(&mchan->chan.device_node, &ddev->channels);
+ dmadev->ddev.chancnt++;
+ return 0;
+}
+
+static void hidma_issue_pending(struct dma_chan *dmach)
+{
+ struct hidma_chan *mchan = to_hidma_chan(dmach);
+ struct hidma_dev *dmadev = mchan->dmadev;
+
+ /* PM will be released in hidma_callback function. */
+ pm_runtime_get_sync(dmadev->ddev.dev);
+ hidma_ll_start(dmadev->lldev);
+}
+
+static enum dma_status hidma_tx_status(struct dma_chan *dmach,
+ dma_cookie_t cookie, struct dma_tx_state *txstate)
+{
+ struct hidma_chan *mchan = to_hidma_chan(dmach);
+ enum dma_status ret;
+
+ if (mchan->paused)
+ ret = DMA_PAUSED;
+ else
+ ret = dma_cookie_status(dmach, cookie, txstate);
+
+ return ret;
+}
+
+/*
+ * Submit descriptor to hardware.
+ * Lock the PM for each descriptor we are sending.
+ */
+static dma_cookie_t hidma_tx_submit(struct dma_async_tx_descriptor *txd)
+{
+ struct hidma_chan *mchan = to_hidma_chan(txd->chan);
+ struct hidma_dev *dmadev = mchan->dmadev;
+ struct hidma_desc *mdesc;
+ unsigned long irqflags;
+ dma_cookie_t cookie;
+
+ if (!hidma_ll_isenabled(dmadev->lldev))
+ return -ENODEV;
+
+ mdesc = container_of(txd, struct hidma_desc, desc);
+ spin_lock_irqsave(&mchan->lock, irqflags);
+
+ /* Move descriptor to active */
+ list_move_tail(&mdesc->node, &mchan->active);
+
+ /* Update cookie */
+ cookie = dma_cookie_assign(txd);
+
+ hidma_ll_queue_request(dmadev->lldev, mdesc->tre_ch);
+ spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+ return cookie;
+}
+
+static int hidma_alloc_chan_resources(struct dma_chan *dmach)
+{
+ struct hidma_chan *mchan = to_hidma_chan(dmach);
+ struct hidma_dev *dmadev = mchan->dmadev;
+ struct hidma_desc *mdesc, *tmp;
+ unsigned long irqflags;
+ LIST_HEAD(descs);
+ unsigned int i;
+ int rc = 0;
+
+ if (mchan->allocated)
+ return 0;
+
+ /* Alloc descriptors for this channel */
+ for (i = 0; i < dmadev->nr_descriptors; i++) {
+ mdesc = kzalloc(sizeof(struct hidma_desc), GFP_KERNEL);
+ if (!mdesc) {
+ rc = -ENOMEM;
+ break;
+ }
+ dma_async_tx_descriptor_init(&mdesc->desc, dmach);
+ mdesc->desc.flags = DMA_CTRL_ACK;
+ mdesc->desc.tx_submit = hidma_tx_submit;
+
+ rc = hidma_ll_request(dmadev->lldev, mchan->dma_sig,
+ "DMA engine", hidma_callback, mdesc, &mdesc->tre_ch);
+ if (rc) {
+ dev_err(dmach->device->dev,
+ "channel alloc failed at %u\n", i);
+ kfree(mdesc);
+ break;
+ }
+ list_add_tail(&mdesc->node, &descs);
+ }
+
+ if (rc) {
+ /* return the allocated descriptors */
+ list_for_each_entry_safe(mdesc, tmp, &descs, node) {
+ hidma_ll_free(dmadev->lldev, mdesc->tre_ch);
+ kfree(mdesc);
+ }
+ return rc;
+ }
+
+ spin_lock_irqsave(&mchan->lock, irqflags);
+ list_splice_tail_init(&descs, &mchan->free);
+ mchan->allocated = true;
+ spin_unlock_irqrestore(&mchan->lock, irqflags);
+ return 1;
+}
+
+static void hidma_free_chan_resources(struct dma_chan *dmach)
+{
+ struct hidma_chan *mchan = to_hidma_chan(dmach);
+ struct hidma_dev *mdma = mchan->dmadev;
+ struct hidma_desc *mdesc, *tmp;
+ unsigned long irqflags;
+ LIST_HEAD(descs);
+
+ if (!list_empty(&mchan->prepared) || !list_empty(&mchan->active) ||
+ !list_empty(&mchan->completed)) {
+ /*
+ * We have unfinished requests waiting.
+ * Terminate the request from the hardware.
+ */
+ hidma_cleanup_pending_tre(mdma->lldev, ERR_INFO_SW,
+ ERR_CODE_UNEXPECTED_TERMINATE);
+
+ /* Give enough time for completions to be called. */
+ msleep(100);
+ }
+
+ spin_lock_irqsave(&mchan->lock, irqflags);
+ /* Channel must be idle */
+ WARN_ON(!list_empty(&mchan->prepared));
+ WARN_ON(!list_empty(&mchan->active));
+ WARN_ON(!list_empty(&mchan->completed));
+
+ /* Move data */
+ list_splice_tail_init(&mchan->free, &descs);
+
+ /* Free descriptors */
+ list_for_each_entry_safe(mdesc, tmp, &descs, node) {
+ hidma_ll_free(mdma->lldev, mdesc->tre_ch);
+ list_del(&mdesc->node);
+ kfree(mdesc);
+ }
+
+ mchan->allocated = 0;
+ spin_unlock_irqrestore(&mchan->lock, irqflags);
+}
+
+static struct dma_async_tx_descriptor *
+hidma_prep_dma_memcpy(struct dma_chan *dmach, dma_addr_t dma_dest,
+ dma_addr_t dma_src, size_t len, unsigned long flags)
+{
+ struct hidma_chan *mchan = to_hidma_chan(dmach);
+ struct hidma_desc *mdesc = NULL;
+ struct hidma_dev *mdma = mchan->dmadev;
+ unsigned long irqflags;
+
+ /* Get free descriptor */
+ spin_lock_irqsave(&mchan->lock, irqflags);
+ if (!list_empty(&mchan->free)) {
+ mdesc = list_first_entry(&mchan->free, struct hidma_desc, node);
+ list_del(&mdesc->node);
+ }
+ spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+ if (!mdesc)
+ return NULL;
+
+ hidma_ll_set_transfer_params(mdma->lldev, mdesc->tre_ch,
+ dma_src, dma_dest, len, flags);
+
+ /* Place descriptor in prepared list */
+ spin_lock_irqsave(&mchan->lock, irqflags);
+ list_add_tail(&mdesc->node, &mchan->prepared);
+ spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+ return &mdesc->desc;
+}
+
+static int hidma_terminate_all(struct dma_chan *chan)
+{
+ struct hidma_dev *dmadev;
+ struct hidma_chan *mchan;
+ struct hidma_desc *tmp, *mdesc;
+ unsigned long irqflags;
+ LIST_HEAD(head);
+ LIST_HEAD(list);
+ int rc;
+
+ mchan = to_hidma_chan(chan);
+ dmadev = to_hidma_dev(mchan->chan.device);
+
+ pm_runtime_get_sync(dmadev->ddev.dev);
+ /* give completed requests a chance to finish */
+ hidma_process_completed(dmadev);
+
+ spin_lock_irqsave(&mchan->lock, irqflags);
+ list_splice_init(&mchan->active, &list);
+ list_splice_init(&mchan->prepared, &list);
+ list_splice_init(&mchan->completed, &list);
+ spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+ /* this suspends the existing transfer */
+ rc = hidma_ll_pause(dmadev->lldev);
+ if (rc) {
+ dev_err(dmadev->ddev.dev, "channel did not pause\n");
+ goto out;
+ }
+
+ /* return all user requests */
+ list_for_each_entry_safe(mdesc, tmp, &list, node) {
+ struct dma_async_tx_descriptor *txd = &mdesc->desc;
+ dma_async_tx_callback callback = mdesc->desc.callback;
+ void *param = mdesc->desc.callback_param;
+ enum dma_status status;
+
+ dma_descriptor_unmap(txd);
+
+ status = hidma_ll_status(dmadev->lldev, mdesc->tre_ch);
+ /*
+ * The API requires that no submissions are done from a
+ * callback, so we don't need to drop the lock here
+ */
+ if (callback && (status == DMA_COMPLETE))
+ callback(param);
+
+ dma_run_dependencies(txd);
+
+ /* move myself to free_list */
+ list_move(&mdesc->node, &mchan->free);
+ }
+
+ /* reinitialize the hardware */
+ rc = hidma_ll_setup(dmadev->lldev);
+
+out:
+ pm_runtime_mark_last_busy(dmadev->ddev.dev);
+ pm_runtime_put_autosuspend(dmadev->ddev.dev);
+ return rc;
+}
+
+static int hidma_pause(struct dma_chan *chan)
+{
+ struct hidma_chan *mchan;
+ struct hidma_dev *dmadev;
+
+ mchan = to_hidma_chan(chan);
+ dmadev = to_hidma_dev(mchan->chan.device);
+ if (!mchan->paused) {
+ pm_runtime_get_sync(dmadev->ddev.dev);
+ if (hidma_ll_pause(dmadev->lldev))
+ dev_warn(dmadev->ddev.dev, "channel did not stop\n");
+ mchan->paused = true;
+ pm_runtime_mark_last_busy(dmadev->ddev.dev);
+ pm_runtime_put_autosuspend(dmadev->ddev.dev);
+ }
+ return 0;
+}
+
+static int hidma_resume(struct dma_chan *chan)
+{
+ struct hidma_chan *mchan;
+ struct hidma_dev *dmadev;
+ int rc = 0;
+
+ mchan = to_hidma_chan(chan);
+ dmadev = to_hidma_dev(mchan->chan.device);
+ if (mchan->paused) {
+ pm_runtime_get_sync(dmadev->ddev.dev);
+ rc = hidma_ll_resume(dmadev->lldev);
+ if (!rc)
+ mchan->paused = false;
+ else
+ dev_err(dmadev->ddev.dev,
+ "failed to resume the channel");
+ pm_runtime_mark_last_busy(dmadev->ddev.dev);
+ pm_runtime_put_autosuspend(dmadev->ddev.dev);
+ }
+ return rc;
+}
+
+static irqreturn_t hidma_chirq_handler(int chirq, void *arg)
+{
+ struct hidma_lldev **lldev_ptr = arg;
+
+ /*
+ * All interrupts are request driven.
+ * HW doesn't send an interrupt by itself.
+ */
+ return hidma_ll_inthandler(chirq, *lldev_ptr);
+}
+
+static int hidma_probe(struct platform_device *pdev)
+{
+ struct hidma_dev *dmadev;
+ struct resource *trca_resource;
+ struct resource *evca_resource;
+ int chirq;
+ int current_channel_index = atomic_read(&channel_ref_count);
+ void __iomem *evca;
+ void __iomem *trca;
+ int rc;
+
+ pm_runtime_set_autosuspend_delay(&pdev->dev, AUTOSUSPEND_TIMEOUT);
+ pm_runtime_use_autosuspend(&pdev->dev);
+ pm_runtime_set_active(&pdev->dev);
+ pm_runtime_enable(&pdev->dev);
+
+ trca_resource = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ trca = devm_ioremap_resource(&pdev->dev, trca_resource);
+ if (IS_ERR(trca)) {
+ rc = -ENOMEM;
+ goto bailout;
+ }
+
+ evca_resource = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+ evca = devm_ioremap_resource(&pdev->dev, evca_resource);
+ if (IS_ERR(evca)) {
+ rc = -ENOMEM;
+ goto bailout;
+ }
+
+ /*
+ * This driver only handles the channel IRQs.
+ * Common IRQ is handled by the management driver.
+ */
+ chirq = platform_get_irq(pdev, 0);
+ if (chirq < 0) {
+ rc = -ENODEV;
+ goto bailout;
+ }
+
+ dmadev = devm_kzalloc(&pdev->dev, sizeof(*dmadev), GFP_KERNEL);
+ if (!dmadev) {
+ rc = -ENOMEM;
+ goto bailout;
+ }
+
+ INIT_LIST_HEAD(&dmadev->ddev.channels);
+ spin_lock_init(&dmadev->lock);
+ dmadev->ddev.dev = &pdev->dev;
+ pm_runtime_get_sync(dmadev->ddev.dev);
+
+ dma_cap_set(DMA_MEMCPY, dmadev->ddev.cap_mask);
+ if (WARN_ON(!pdev->dev.dma_mask)) {
+ rc = -ENXIO;
+ goto dmafree;
+ }
+
+ dmadev->dev_evca = evca;
+ dmadev->evca_resource = evca_resource;
+ dmadev->dev_trca = trca;
+ dmadev->trca_resource = trca_resource;
+ dmadev->ddev.device_prep_dma_memcpy = hidma_prep_dma_memcpy;
+ dmadev->ddev.device_alloc_chan_resources = hidma_alloc_chan_resources;
+ dmadev->ddev.device_free_chan_resources = hidma_free_chan_resources;
+ dmadev->ddev.device_tx_status = hidma_tx_status;
+ dmadev->ddev.device_issue_pending = hidma_issue_pending;
+ dmadev->ddev.device_pause = hidma_pause;
+ dmadev->ddev.device_resume = hidma_resume;
+ dmadev->ddev.device_terminate_all = hidma_terminate_all;
+ dmadev->ddev.copy_align = 8;
+
+ device_property_read_u32(&pdev->dev, "desc-count",
+ &dmadev->nr_descriptors);
+
+ if (!dmadev->nr_descriptors && nr_desc_prm)
+ dmadev->nr_descriptors = nr_desc_prm;
+
+ if (!dmadev->nr_descriptors) {
+ rc = -EINVAL;
+ goto dmafree;
+ }
+
+ if (current_channel_index >= MAX_HIDMA_CHANNELS) {
+ rc = -EINVAL;
+ goto dmafree;
+ }
+
+ dmadev->evridx = -1;
+ device_property_read_u32(&pdev->dev, "event-channel", &dmadev->evridx);
+
+ /* kernel command line override for the guest machine */
+ if (event_channel_idx[current_channel_index] != -1)
+ dmadev->evridx = event_channel_idx[current_channel_index];
+
+ if (dmadev->evridx == -1) {
+ rc = -EINVAL;
+ goto dmafree;
+ }
+
+ /* Set DMA mask to 64 bits. */
+ rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
+ if (rc) {
+ dev_warn(&pdev->dev, "unable to set coherent mask to 64");
+ rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+ if (rc)
+ goto dmafree;
+ }
+
+ dmadev->lldev = hidma_ll_init(dmadev->ddev.dev,
+ dmadev->nr_descriptors, dmadev->dev_trca,
+ dmadev->dev_evca, dmadev->evridx);
+ if (!dmadev->lldev) {
+ rc = -EPROBE_DEFER;
+ goto dmafree;
+ }
+
+ rc = devm_request_irq(&pdev->dev, chirq, hidma_chirq_handler, 0,
+ "qcom-hidma", &dmadev->lldev);
+ if (rc)
+ goto uninit;
+
+ INIT_LIST_HEAD(&dmadev->ddev.channels);
+ rc = hidma_chan_init(dmadev, 0);
+ if (rc)
+ goto uninit;
+
+ rc = dma_async_device_register(&dmadev->ddev);
+ if (rc)
+ goto uninit;
+
+ hidma_debug_init(dmadev);
+ dev_info(&pdev->dev, "HI-DMA engine driver registration complete\n");
+ platform_set_drvdata(pdev, dmadev);
+ pm_runtime_mark_last_busy(dmadev->ddev.dev);
+ pm_runtime_put_autosuspend(dmadev->ddev.dev);
+ atomic_inc(&channel_ref_count);
+ return 0;
+
+uninit:
+ hidma_debug_uninit(dmadev);
+ hidma_ll_uninit(dmadev->lldev);
+dmafree:
+ if (dmadev)
+ hidma_free(dmadev);
+bailout:
+ pm_runtime_put_sync(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
+ return rc;
+}
+
+static int hidma_remove(struct platform_device *pdev)
+{
+ struct hidma_dev *dmadev = platform_get_drvdata(pdev);
+
+ pm_runtime_get_sync(dmadev->ddev.dev);
+ dma_async_device_unregister(&dmadev->ddev);
+ hidma_debug_uninit(dmadev);
+ hidma_ll_uninit(dmadev->lldev);
+ hidma_free(dmadev);
+
+ dev_info(&pdev->dev, "HI-DMA engine removed\n");
+ pm_runtime_put_sync_suspend(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
+
+ return 0;
+}
+
+#if IS_ENABLED(CONFIG_ACPI)
+static const struct acpi_device_id hidma_acpi_ids[] = {
+ {"QCOM8061"},
+ {},
+};
+#endif
+
+static const struct of_device_id hidma_match[] = {
+ {.compatible = "qcom,hidma-1.0",},
+ {},
+};
+
+MODULE_DEVICE_TABLE(of, hidma_match);
+
+static struct platform_driver hidma_driver = {
+ .probe = hidma_probe,
+ .remove = hidma_remove,
+ .driver = {
+ .name = "hidma",
+ .of_match_table = hidma_match,
+ .acpi_match_table = ACPI_PTR(hidma_acpi_ids),
+ },
+};
+
+module_platform_driver(hidma_driver);
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/dma/qcom/hidma.h b/drivers/dma/qcom/hidma.h
new file mode 100644
index 0000000..b2e6764
--- /dev/null
+++ b/drivers/dma/qcom/hidma.h
@@ -0,0 +1,157 @@
+/*
+ * Qualcomm Technologies HIDMA data structures
+ *
+ * Copyright (c) 2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef QCOM_HIDMA_H
+#define QCOM_HIDMA_H
+
+#include <linux/kfifo.h>
+#include <linux/interrupt.h>
+#include <linux/dmaengine.h>
+
+#define TRE_SIZE 32 /* each TRE is 32 bytes */
+#define TRE_CFG_IDX 0
+#define TRE_LEN_IDX 1
+#define TRE_SRC_LOW_IDX 2
+#define TRE_SRC_HI_IDX 3
+#define TRE_DEST_LOW_IDX 4
+#define TRE_DEST_HI_IDX 5
+
+struct hidma_tx_status {
+ u8 err_info; /* error record in this transfer */
+ u8 err_code; /* completion code */
+};
+
+struct hidma_tre {
+ atomic_t allocated; /* if this channel is allocated */
+ bool queued; /* flag whether this is pending */
+ u16 status; /* status */
+ u32 chidx; /* index of the tre */
+ u32 dma_sig; /* signature of the tre */
+ const char *dev_name; /* name of the device */
+ void (*callback)(void *data); /* requester callback */
+ void *data; /* Data associated with this channel*/
+ struct hidma_lldev *lldev; /* lldma device pointer */
+ u32 tre_local[TRE_SIZE / sizeof(u32) + 1]; /* TRE local copy */
+ u32 tre_index; /* the offset where this was written*/
+ u32 int_flags; /* interrupt flags*/
+};
+
+struct hidma_lldev {
+ bool initialized; /* initialized flag */
+ u8 trch_state; /* trch_state of the device */
+ u8 evch_state; /* evch_state of the device */
+ u8 evridx; /* event channel to notify */
+ u32 nr_tres; /* max number of configs */
+ spinlock_t lock; /* reentrancy */
+ struct hidma_tre *trepool; /* trepool of user configs */
+ struct device *dev; /* device */
+ void __iomem *trca; /* Transfer Channel address */
+ void __iomem *evca; /* Event Channel address */
+ struct hidma_tre
+ **pending_tre_list; /* Pointers to pending TREs */
+ struct hidma_tx_status
+ *tx_status_list; /* Pointers to pending TREs status*/
+ s32 pending_tre_count; /* Number of TREs pending */
+
+ void *tre_ring; /* TRE ring */
+ dma_addr_t tre_ring_handle; /* TRE ring to be shared with HW */
+ u32 tre_ring_size; /* Byte size of the ring */
+ u32 tre_processed_off; /* last processed TRE */
+
+ void *evre_ring; /* EVRE ring */
+ dma_addr_t evre_ring_handle; /* EVRE ring to be shared with HW */
+ u32 evre_ring_size; /* Byte size of the ring */
+ u32 evre_processed_off; /* last processed EVRE */
+
+ u32 tre_write_offset; /* TRE write location */
+ struct tasklet_struct task; /* task delivering notifications */
+ DECLARE_KFIFO_PTR(handoff_fifo,
+ struct hidma_tre *); /* pending TREs FIFO */
+};
+
+struct hidma_desc {
+ struct dma_async_tx_descriptor desc;
+ /* link list node for this channel*/
+ struct list_head node;
+ u32 tre_ch;
+};
+
+struct hidma_chan {
+ bool paused;
+ bool allocated;
+ char dbg_name[16];
+ u32 dma_sig;
+
+ /*
+ * Descriptors for this channel live on the free/prepared/active/
+ * completed lists below. The DMA completion path uses them to
+ * locate the descriptor that initiated a transfer.
+ */
+ struct dentry *debugfs;
+ struct dentry *stats;
+ struct hidma_dev *dmadev;
+
+ struct dma_chan chan;
+ struct list_head free;
+ struct list_head prepared;
+ struct list_head active;
+ struct list_head completed;
+
+ /* Lock for this structure */
+ spinlock_t lock;
+};
+
+struct hidma_dev {
+ int evridx;
+ u32 nr_descriptors;
+
+ struct hidma_lldev *lldev;
+ void __iomem *dev_trca;
+ struct resource *trca_resource;
+ void __iomem *dev_evca;
+ struct resource *evca_resource;
+
+ /* used to protect the pending channel list*/
+ spinlock_t lock;
+ struct dma_device ddev;
+
+ struct dentry *debugfs;
+ struct dentry *stats;
+};
+
+int hidma_ll_request(struct hidma_lldev *llhndl, u32 dev_id,
+ const char *dev_name,
+ void (*callback)(void *data), void *data, u32 *tre_ch);
+
+void hidma_ll_free(struct hidma_lldev *llhndl, u32 tre_ch);
+enum dma_status hidma_ll_status(struct hidma_lldev *llhndl, u32 tre_ch);
+bool hidma_ll_isenabled(struct hidma_lldev *llhndl);
+void hidma_ll_queue_request(struct hidma_lldev *llhndl, u32 tre_ch);
+void hidma_ll_start(struct hidma_lldev *llhndl);
+int hidma_ll_pause(struct hidma_lldev *llhndl);
+int hidma_ll_resume(struct hidma_lldev *llhndl);
+void hidma_ll_set_transfer_params(struct hidma_lldev *llhndl, u32 tre_ch,
+ dma_addr_t src, dma_addr_t dest, u32 len, u32 flags);
+int hidma_ll_setup(struct hidma_lldev *lldev);
+struct hidma_lldev *hidma_ll_init(struct device *dev, u32 max_channels,
+ void __iomem *trca, void __iomem *evca,
+ u8 evridx);
+int hidma_ll_uninit(struct hidma_lldev *llhndl);
+irqreturn_t hidma_ll_inthandler(int irq, void *arg);
+void hidma_cleanup_pending_tre(struct hidma_lldev *llhndl, u8 err_info,
+ u8 err_code);
+int hidma_debug_init(struct hidma_dev *dmadev);
+void hidma_debug_uninit(struct hidma_dev *dmadev);
+#endif
diff --git a/drivers/dma/qcom/hidma_dbg.c b/drivers/dma/qcom/hidma_dbg.c
new file mode 100644
index 0000000..a58af18
--- /dev/null
+++ b/drivers/dma/qcom/hidma_dbg.c
@@ -0,0 +1,217 @@
+/*
+ * Qualcomm Technologies HIDMA debug file
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/debugfs.h>
+#include <linux/device.h>
+#include <linux/list.h>
+#include <linux/pm_runtime.h>
+
+#include "hidma.h"
+
+static void hidma_ll_chstats(struct seq_file *s, void *llhndl, u32 tre_ch)
+{
+ struct hidma_lldev *lldev = llhndl;
+ struct hidma_tre *tre;
+ u32 length;
+ dma_addr_t src_start;
+ dma_addr_t dest_start;
+ u32 *tre_local;
+
+ if (tre_ch >= lldev->nr_tres) {
+ dev_err(lldev->dev, "invalid TRE number in chstats:%d", tre_ch);
+ return;
+ }
+ tre = &lldev->trepool[tre_ch];
+ seq_printf(s, "------Channel %d -----\n", tre_ch);
+ seq_printf(s, "allocated=%d\n", atomic_read(&tre->allocated));
+ seq_printf(s, "queued = 0x%x\n", tre->queued);
+ seq_printf(s, "err_info = 0x%x\n",
+ lldev->tx_status_list[tre->chidx].err_info);
+ seq_printf(s, "err_code = 0x%x\n",
+ lldev->tx_status_list[tre->chidx].err_code);
+ seq_printf(s, "status = 0x%x\n", tre->status);
+ seq_printf(s, "chidx = 0x%x\n", tre->chidx);
+ seq_printf(s, "dma_sig = 0x%x\n", tre->dma_sig);
+ seq_printf(s, "dev_name=%s\n", tre->dev_name);
+ seq_printf(s, "callback=%p\n", tre->callback);
+ seq_printf(s, "data=%p\n", tre->data);
+ seq_printf(s, "tre_index = 0x%x\n", tre->tre_index);
+
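+ /* reassemble the 64-bit source/destination addresses from the TRE */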
+ tre_local = &tre->tre_local[0];
+ src_start = tre_local[TRE_SRC_LOW_IDX];
+ src_start = ((u64) (tre_local[TRE_SRC_HI_IDX]) << 32) + src_start;
+ dest_start = tre_local[TRE_DEST_LOW_IDX];
+ dest_start += ((u64) (tre_local[TRE_DEST_HI_IDX]) << 32);
+ length = tre_local[TRE_LEN_IDX];
+
+ seq_printf(s, "src=%pap\n", &src_start);
+ seq_printf(s, "dest=%pap\n", &dest_start);
+ seq_printf(s, "length = 0x%x\n", length);
+}
+
+static void hidma_ll_devstats(struct seq_file *s, void *llhndl)
+{
+ struct hidma_lldev *lldev = llhndl;
+
+ seq_puts(s, "------Device -----\n");
+ seq_printf(s, "lldev init = 0x%x\n", lldev->initialized);
+ seq_printf(s, "trch_state = 0x%x\n", lldev->trch_state);
+ seq_printf(s, "evch_state = 0x%x\n", lldev->evch_state);
+ seq_printf(s, "evridx = 0x%x\n", lldev->evridx);
+ seq_printf(s, "nr_tres = 0x%x\n", lldev->nr_tres);
+ seq_printf(s, "trca=%p\n", lldev->trca);
+ seq_printf(s, "tre_ring=%p\n", lldev->tre_ring);
+ seq_printf(s, "tre_ring_handle=%pap\n", &lldev->tre_ring_handle);
+ seq_printf(s, "tre_ring_size = 0x%x\n", lldev->tre_ring_size);
+ seq_printf(s, "tre_processed_off = 0x%x\n", lldev->tre_processed_off);
+ seq_printf(s, "pending_tre_count=%d\n", lldev->pending_tre_count);
+ seq_printf(s, "evca=%p\n", lldev->evca);
+ seq_printf(s, "evre_ring=%p\n", lldev->evre_ring);
+ seq_printf(s, "evre_ring_handle=%pap\n", &lldev->evre_ring_handle);
+ seq_printf(s, "evre_ring_size = 0x%x\n", lldev->evre_ring_size);
+ seq_printf(s, "evre_processed_off = 0x%x\n", lldev->evre_processed_off);
+ seq_printf(s, "tre_write_offset = 0x%x\n", lldev->tre_write_offset);
+}
+
+/*
+ * hidma_chan_stats: display HIDMA channel statistics
+ *
+ * Display the statistics for the current HIDMA virtual channel device.
+ */
+static int hidma_chan_stats(struct seq_file *s, void *unused)
+{
+ struct hidma_chan *mchan = s->private;
+ struct hidma_desc *mdesc;
+ struct hidma_dev *dmadev = mchan->dmadev;
+
+ pm_runtime_get_sync(dmadev->ddev.dev);
+ seq_printf(s, "paused=%u\n", mchan->paused);
+ seq_printf(s, "dma_sig=%u\n", mchan->dma_sig);
+ seq_puts(s, "prepared\n");
+ list_for_each_entry(mdesc, &mchan->prepared, node)
+ hidma_ll_chstats(s, mchan->dmadev->lldev, mdesc->tre_ch);
+
+ seq_puts(s, "active\n");
+ list_for_each_entry(mdesc, &mchan->active, node)
+ hidma_ll_chstats(s, mchan->dmadev->lldev, mdesc->tre_ch);
+
+ seq_puts(s, "completed\n");
+ list_for_each_entry(mdesc, &mchan->completed, node)
+ hidma_ll_chstats(s, mchan->dmadev->lldev, mdesc->tre_ch);
+
+ hidma_ll_devstats(s, mchan->dmadev->lldev);
+ pm_runtime_mark_last_busy(dmadev->ddev.dev);
+ pm_runtime_put_autosuspend(dmadev->ddev.dev);
+ return 0;
+}
+
+/*
+ * hidma_dma_info: display HIDMA device info
+ *
+ * Display the info for the current HIDMA device.
+ */
+static int hidma_dma_info(struct seq_file *s, void *unused)
+{
+ struct hidma_dev *dmadev = s->private;
+ resource_size_t sz;
+
+ seq_printf(s, "nr_descriptors=%d\n", dmadev->nr_descriptors);
+ seq_printf(s, "dev_trca=%p\n", &dmadev->dev_trca);
+ seq_printf(s, "dev_trca_phys=%pa\n", &dmadev->trca_resource->start);
+ sz = resource_size(dmadev->trca_resource);
+ seq_printf(s, "dev_trca_size=%pa\n", &sz);
+ seq_printf(s, "dev_evca=%p\n", &dmadev->dev_evca);
+ seq_printf(s, "dev_evca_phys=%pa\n", &dmadev->evca_resource->start);
+ sz = resource_size(dmadev->evca_resource);
+ seq_printf(s, "dev_evca_size=%pa\n", &sz);
+ return 0;
+}
+
+static int hidma_chan_stats_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, hidma_chan_stats, inode->i_private);
+}
+
+static int hidma_dma_info_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, hidma_dma_info, inode->i_private);
+}
+
+static const struct file_operations hidma_chan_fops = {
+ .open = hidma_chan_stats_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
+static const struct file_operations hidma_dma_fops = {
+ .open = hidma_dma_info_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
+void hidma_debug_uninit(struct hidma_dev *dmadev)
+{
+ debugfs_remove_recursive(dmadev->debugfs);
+ debugfs_remove_recursive(dmadev->stats);
+}
+
+int hidma_debug_init(struct hidma_dev *dmadev)
+{
+ int rc = 0;
+ int chidx = 0;
+ struct list_head *position = NULL;
+
+ dmadev->debugfs = debugfs_create_dir(dev_name(dmadev->ddev.dev), NULL);
+ if (!dmadev->debugfs) {
+ rc = -ENODEV;
+ return rc;
+ }
+
+ /* walk through the virtual channel list */
+ list_for_each(position, &dmadev->ddev.channels) {
+ struct hidma_chan *chan;
+
+ chan = list_entry(position, struct hidma_chan,
+ chan.device_node);
+ sprintf(chan->dbg_name, "chan%d", chidx);
+ chan->debugfs = debugfs_create_dir(chan->dbg_name,
+ dmadev->debugfs);
+ if (!chan->debugfs) {
+ rc = -ENOMEM;
+ goto cleanup;
+ }
+ chan->stats = debugfs_create_file("stats", S_IRUGO,
+ chan->debugfs, chan, &hidma_chan_fops);
+ if (!chan->stats) {
+ rc = -ENOMEM;
+ goto cleanup;
+ }
+ chidx++;
+ }
+
+ dmadev->stats = debugfs_create_file("stats", S_IRUGO,
+ dmadev->debugfs, dmadev, &hidma_dma_fops);
+ if (!dmadev->stats) {
+ rc = -ENOMEM;
+ goto cleanup;
+ }
+
+ return 0;
+cleanup:
+ hidma_debug_uninit(dmadev);
+ return rc;
+}
diff --git a/drivers/dma/qcom/hidma_ll.c b/drivers/dma/qcom/hidma_ll.c
new file mode 100644
index 0000000..59621f7
--- /dev/null
+++ b/drivers/dma/qcom/hidma_ll.c
@@ -0,0 +1,924 @@
+/*
+ * Qualcomm Technologies HIDMA DMA engine low level code
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/dmaengine.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/mm.h>
+#include <linux/highmem.h>
+#include <linux/dma-mapping.h>
+#include <linux/delay.h>
+#include <linux/atomic.h>
+#include <linux/iopoll.h>
+#include <linux/kfifo.h>
+
+#include "hidma.h"
+
+#define EVRE_SIZE 16 /* each EVRE is 16 bytes */
+
+#define TRCA_CTRLSTS_OFFSET 0x0
+#define TRCA_RING_LOW_OFFSET 0x8
+#define TRCA_RING_HIGH_OFFSET 0xC
+#define TRCA_RING_LEN_OFFSET 0x10
+#define TRCA_READ_PTR_OFFSET 0x18
+#define TRCA_WRITE_PTR_OFFSET 0x20
+#define TRCA_DOORBELL_OFFSET 0x400
+
+#define EVCA_CTRLSTS_OFFSET 0x0
+#define EVCA_INTCTRL_OFFSET 0x4
+#define EVCA_RING_LOW_OFFSET 0x8
+#define EVCA_RING_HIGH_OFFSET 0xC
+#define EVCA_RING_LEN_OFFSET 0x10
+#define EVCA_READ_PTR_OFFSET 0x18
+#define EVCA_WRITE_PTR_OFFSET 0x20
+#define EVCA_DOORBELL_OFFSET 0x400
+
+#define EVCA_IRQ_STAT_OFFSET 0x100
+#define EVCA_IRQ_CLR_OFFSET 0x108
+#define EVCA_IRQ_EN_OFFSET 0x110
+
+#define EVRE_CFG_IDX 0
+#define EVRE_LEN_IDX 1
+#define EVRE_DEST_LOW_IDX 2
+#define EVRE_DEST_HI_IDX 3
+
+#define EVRE_ERRINFO_BIT_POS 24
+#define EVRE_CODE_BIT_POS 28
+
+#define EVRE_ERRINFO_MASK 0xF
+#define EVRE_CODE_MASK 0xF
+
+#define CH_CONTROL_MASK 0xFF
+#define CH_STATE_MASK 0xFF
+#define CH_STATE_BIT_POS 0x8
+
+#define MAKE64(high, low) (((u64)(high) << 32) | (low))
+
+#define IRQ_EV_CH_EOB_IRQ_BIT_POS 0
+#define IRQ_EV_CH_WR_RESP_BIT_POS 1
+#define IRQ_TR_CH_TRE_RD_RSP_ER_BIT_POS 9
+#define IRQ_TR_CH_DATA_RD_ER_BIT_POS 10
+#define IRQ_TR_CH_DATA_WR_ER_BIT_POS 11
+#define IRQ_TR_CH_INVALID_TRE_BIT_POS 14
+
+#define ENABLE_IRQS (BIT(IRQ_EV_CH_EOB_IRQ_BIT_POS) | \
+ BIT(IRQ_EV_CH_WR_RESP_BIT_POS) | \
+ BIT(IRQ_TR_CH_TRE_RD_RSP_ER_BIT_POS) | \
+ BIT(IRQ_TR_CH_DATA_RD_ER_BIT_POS) | \
+ BIT(IRQ_TR_CH_DATA_WR_ER_BIT_POS) | \
+ BIT(IRQ_TR_CH_INVALID_TRE_BIT_POS))
+
+enum ch_command {
+ CH_DISABLE = 0,
+ CH_ENABLE = 1,
+ CH_SUSPEND = 2,
+ CH_RESET = 9,
+};
+
+enum ch_state {
+ CH_DISABLED = 0,
+ CH_ENABLED = 1,
+ CH_RUNNING = 2,
+ CH_SUSPENDED = 3,
+ CH_STOPPED = 4,
+ CH_ERROR = 5,
+ CH_IN_RESET = 9,
+};
+
+enum tre_type {
+ TRE_MEMCPY = 3,
+ TRE_MEMSET = 4,
+};
+
+enum evre_type {
+ EVRE_DMA_COMPLETE = 0x23,
+ EVRE_IMM_DATA = 0x24,
+};
+
+enum err_code {
+ EVRE_STATUS_COMPLETE = 1,
+ EVRE_STATUS_ERROR = 4,
+};
+
+void hidma_ll_free(struct hidma_lldev *lldev, u32 tre_ch)
+{
+ struct hidma_tre *tre;
+
+ if (tre_ch >= lldev->nr_tres) {
+ dev_err(lldev->dev, "invalid TRE number in free:%d", tre_ch);
+ return;
+ }
+
+ tre = &lldev->trepool[tre_ch];
+ if (atomic_read(&tre->allocated) != true) {
+ dev_err(lldev->dev, "trying to free an unused TRE:%d", tre_ch);
+ return;
+ }
+
+ atomic_set(&tre->allocated, 0);
+}
+
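+/*
+ * Reserve a free TRE from the pool, record the caller's callback and
+ * signature, and pre-build the TRE config word. The index of the
+ * reserved TRE is returned through tre_ch.
+ */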
+int hidma_ll_request(struct hidma_lldev *lldev, u32 dma_sig,
+ const char *dev_name,
+ void (*callback)(void *data), void *data, u32 *tre_ch)
+{
+ unsigned int i;
+ struct hidma_tre *tre;
+ u32 *tre_local;
+
+ if (!tre_ch || !lldev)
+ return -EINVAL;
+
+ /* need to have at least one empty spot in the queue */
+ for (i = 0; i < lldev->nr_tres - 1; i++) {
+ if (atomic_add_unless(&lldev->trepool[i].allocated, 1, 1))
+ break;
+ }
+
+ if (i == (lldev->nr_tres - 1))
+ return -ENOMEM;
+
+ tre = &lldev->trepool[i];
+ tre->dma_sig = dma_sig;
+ tre->dev_name = dev_name;
+ tre->callback = callback;
+ tre->data = data;
+ tre->chidx = i;
+ tre->status = 0;
+ tre->queued = 0;
+ lldev->tx_status_list[i].err_code = 0;
+ tre->lldev = lldev;
+ tre_local = &tre->tre_local[0];
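+ /*
+ * Build the TRE config word: transfer type in the low byte, the
+ * event ring index in bits 15:8 and the IEOB flag in bit 16.
+ */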
+ tre_local[TRE_CFG_IDX] = TRE_MEMCPY;
+ tre_local[TRE_CFG_IDX] |= (lldev->evridx & 0xFF) << 8;
+ tre_local[TRE_CFG_IDX] |= BIT(16); /* set IEOB */
+ *tre_ch = i;
+ if (callback)
+ callback(data);
+ return 0;
+}
+
+/*
+ * Multiple TREs may be queued and waiting in the
+ * pending queue.
+ */
+static void hidma_ll_tre_complete(unsigned long arg)
+{
+ struct hidma_lldev *lldev = (struct hidma_lldev *)arg;
+ struct hidma_tre *tre;
+
+ while (kfifo_out(&lldev->handoff_fifo, &tre, 1)) {
+ /* call the user if it has been read by the hardware */
+ if (tre->callback)
+ tre->callback(tre->data);
+ }
+}
+
+/*
+ * Called to handle the interrupt for the channel.
+ * Returns the number of TRE/EVRE pairs consumed on this run, or 0 if
+ * there was nothing to consume.
+ */
+static int hidma_handle_tre_completion(struct hidma_lldev *lldev)
+{
+ struct hidma_tre *tre;
+ u32 evre_write_off;
+ u32 evre_ring_size = lldev->evre_ring_size;
+ u32 tre_ring_size = lldev->tre_ring_size;
+ u32 num_completed = 0, tre_iterator, evre_iterator;
+ unsigned long flags;
+
+ evre_write_off = readl_relaxed(lldev->evca + EVCA_WRITE_PTR_OFFSET);
+ tre_iterator = lldev->tre_processed_off;
+ evre_iterator = lldev->evre_processed_off;
+
+ if ((evre_write_off > evre_ring_size) ||
+ ((evre_write_off % EVRE_SIZE) != 0)) {
+ dev_err(lldev->dev, "HW reports invalid EVRE write offset\n");
+ return 0;
+ }
+
+ /*
+ * By the time control reaches here the number of EVREs and TREs
+ * may not match. Only consume the ones that hardware told us.
+ */
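+ /*
+ * TREs complete in order, so each EVRE maps to the TRE at the
+ * matching position; both iterators advance in lockstep and wrap
+ * around their respective ring sizes.
+ */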
+ while ((evre_iterator != evre_write_off)) {
+ u32 *current_evre = lldev->evre_ring + evre_iterator;
+ u32 cfg;
+ u8 err_info;
+
+ spin_lock_irqsave(&lldev->lock, flags);
+ tre = lldev->pending_tre_list[tre_iterator / TRE_SIZE];
+ if (!tre) {
+ spin_unlock_irqrestore(&lldev->lock, flags);
+ dev_warn(lldev->dev,
+ "tre_index [%d] and tre out of sync\n",
+ tre_iterator / TRE_SIZE);
+ tre_iterator += TRE_SIZE;
+ if (tre_iterator >= tre_ring_size)
+ tre_iterator -= tre_ring_size;
+ evre_iterator += EVRE_SIZE;
+ if (evre_iterator >= evre_ring_size)
+ evre_iterator -= evre_ring_size;
+
+ continue;
+ }
+ lldev->pending_tre_list[tre->tre_index] = NULL;
+
+ /*
+ * Keep track of pending TREs that SW is expecting to receive
+ * from HW. We got one now. Decrement our counter.
+ */
+ lldev->pending_tre_count--;
+ if (lldev->pending_tre_count < 0) {
+ dev_warn(lldev->dev,
+ "tre count mismatch on completion");
+ lldev->pending_tre_count = 0;
+ }
+
+ spin_unlock_irqrestore(&lldev->lock, flags);
+
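+ /*
+ * The EVRE config word carries the completion result: error info
+ * in bits 27:24 and the completion code in bits 31:28.
+ */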
+ cfg = current_evre[EVRE_CFG_IDX];
+ err_info = (cfg >> EVRE_ERRINFO_BIT_POS);
+ err_info &= EVRE_ERRINFO_MASK;
+ lldev->tx_status_list[tre->chidx].err_info = err_info;
+ lldev->tx_status_list[tre->chidx].err_code =
+ (cfg >> EVRE_CODE_BIT_POS) & EVRE_CODE_MASK;
+ tre->queued = 0;
+
+ kfifo_put(&lldev->handoff_fifo, tre);
+ tasklet_schedule(&lldev->task);
+
+ tre_iterator += TRE_SIZE;
+ if (tre_iterator >= tre_ring_size)
+ tre_iterator -= tre_ring_size;
+ evre_iterator += EVRE_SIZE;
+ if (evre_iterator >= evre_ring_size)
+ evre_iterator -= evre_ring_size;
+
+ /*
+ * Read the new event descriptor written by the HW.
+ * As we are processing the delivered events, other events
+ * get queued to the SW for processing.
+ */
+ evre_write_off =
+ readl_relaxed(lldev->evca + EVCA_WRITE_PTR_OFFSET);
+ num_completed++;
+ }
+
+ if (num_completed) {
+ u32 evre_read_off = (lldev->evre_processed_off +
+ EVRE_SIZE * num_completed);
+ u32 tre_read_off = (lldev->tre_processed_off +
+ TRE_SIZE * num_completed);
+
+ evre_read_off = evre_read_off % evre_ring_size;
+ tre_read_off = tre_read_off % tre_ring_size;
+
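+ /*
+ * Tell the hardware how far the SW has consumed by writing the
+ * new read offset to the event channel doorbell.
+ */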
+ writel(evre_read_off, lldev->evca + EVCA_DOORBELL_OFFSET);
+
+ /* record the last processed tre offset */
+ lldev->tre_processed_off = tre_read_off;
+ lldev->evre_processed_off = evre_read_off;
+ }
+
+ return num_completed;
+}
+
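+/*
+ * Fail every pending TRE with the given error code, hand each one to the
+ * completion tasklet and advance the processed offset past all of them.
+ * This runs when an error interrupt forces the channel to be reset.
+ */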
+void hidma_cleanup_pending_tre(struct hidma_lldev *lldev, u8 err_info,
+ u8 err_code)
+{
+ u32 tre_iterator;
+ struct hidma_tre *tre;
+ u32 tre_ring_size = lldev->tre_ring_size;
+ int num_completed = 0;
+ u32 tre_read_off;
+ unsigned long flags;
+
+ tre_iterator = lldev->tre_processed_off;
+ while (lldev->pending_tre_count) {
+ int tre_index = tre_iterator / TRE_SIZE;
+
+ spin_lock_irqsave(&lldev->lock, flags);
+ tre = lldev->pending_tre_list[tre_index];
+ if (!tre) {
+ spin_unlock_irqrestore(&lldev->lock, flags);
+ tre_iterator += TRE_SIZE;
+ if (tre_iterator >= tre_ring_size)
+ tre_iterator -= tre_ring_size;
+ continue;
+ }
+ lldev->pending_tre_list[tre_index] = NULL;
+ lldev->pending_tre_count--;
+ if (lldev->pending_tre_count < 0) {
+ dev_warn(lldev->dev,
+ "tre count mismatch on completion");
+ lldev->pending_tre_count = 0;
+ }
+ spin_unlock_irqrestore(&lldev->lock, flags);
+
+ lldev->tx_status_list[tre->chidx].err_info = err_info;
+ lldev->tx_status_list[tre->chidx].err_code = err_code;
+ tre->queued = 0;
+
+ kfifo_put(&lldev->handoff_fifo, tre);
+ tasklet_schedule(&lldev->task);
+
+ tre_iterator += TRE_SIZE;
+ if (tre_iterator >= tre_ring_size)
+ tre_iterator -= tre_ring_size;
+
+ num_completed++;
+ }
+ tre_read_off = (lldev->tre_processed_off + TRE_SIZE * num_completed);
+
+ tre_read_off = tre_read_off % tre_ring_size;
+
+ /* record the last processed tre offset */
+ lldev->tre_processed_off = tre_read_off;
+}
+
+static int hidma_ll_reset(struct hidma_lldev *lldev)
+{
+ u32 val;
+ int ret;
+
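+ /* request a reset of the transfer channel */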
+ val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
+ val &= ~(CH_CONTROL_MASK << 16);
+ val |= CH_RESET << 16;
+ writel(val, lldev->trca + TRCA_CTRLSTS_OFFSET);
+
+ /*
+ * Wait for the DMA logic to quiesce after reset: poll the channel
+ * state every 1ms, for up to 10ms.
+ */
+ ret = readl_poll_timeout(lldev->trca + TRCA_CTRLSTS_OFFSET, val,
+ (((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_DISABLED),
+ 1000, 10000);
+ if (ret) {
+ dev_err(lldev->dev, "transfer channel did not reset\n");
+ return ret;
+ }
+
+ val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
+ val &= ~(CH_CONTROL_MASK << 16);
+ val |= CH_RESET << 16;
+ writel(val, lldev->evca + EVCA_CTRLSTS_OFFSET);
+
+ /*
+ * Wait for the DMA logic to quiesce after reset: poll the channel
+ * state every 1ms, for up to 10ms.
+ */
+ ret = readl_poll_timeout(lldev->evca + EVCA_CTRLSTS_OFFSET, val,
+ (((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_DISABLED),
+ 1000, 10000);
+ if (ret)
+ return ret;
+
+ lldev->trch_state = CH_DISABLED;
+ lldev->evch_state = CH_DISABLED;
+ return 0;
+}
+
+static void hidma_ll_enable_irq(struct hidma_lldev *lldev, u32 irq_bits)
+{
+ writel(irq_bits, lldev->evca + EVCA_IRQ_EN_OFFSET);
+}
+
+/*
+ * The interrupt handler for HIDMA will try to consume as many pending
+ * EVRE from the event queue as possible. Each EVRE has an associated
+ * TRE that holds the user interface parameters. EVRE reports the
+ * result of the transaction. Hardware guarantees ordering between EVREs
+ * and TREs. We use last processed offset to figure out which TRE is
+ * associated with which EVRE. If two TREs are consumed by HW, the EVREs
+ * are in order in the event ring.
+ *
+ * This handler will do a one pass for consuming EVREs. Other EVREs may
+ * be delivered while we are working. It will try to consume incoming
+ * EVREs one more time and return.
+ *
+ * For unprocessed EVREs, hardware will trigger another interrupt until
+ * all the interrupt bits are cleared.
+ *
+ * Hardware guarantees that by the time interrupt is observed, all data
+ * transactions in flight are delivered to their respective places and
+ * are visible to the CPU.
+ *
+ * On demand paging for IOMMU is only supported for PCIe via PRI
+ * (Page Request Interface) not for HIDMA. All other hardware instances
+ * including HIDMA work on pinned DMA addresses.
+ *
+ * HIDMA is not aware of IOMMU presence since it follows the DMA API. All
+ * IOMMU latency will be built into the data movement time. By the time
+ * interrupt happens, IOMMU lookups + data movement has already taken place.
+ *
+ * While the first read in a typical PCI endpoint ISR flushes all outstanding
+ * requests traditionally to the destination, this concept does not apply
+ * here for this HW.
+ */
+static void hidma_ll_int_handler_internal(struct hidma_lldev *lldev)
+{
+ u32 status;
+ u32 enable;
+ u32 cause;
+ int repeat = 2;
+ unsigned long timeout;
+
+ /*
+ * Fine tuned for this HW...
+ *
+ * This ISR has been designed for this particular hardware. Relaxed
+ * read and write accessors are used for performance reasons due to
+ * interrupt delivery guarantees. Do not copy this code blindly and
+ * expect that to work.
+ */
+ status = readl_relaxed(lldev->evca + EVCA_IRQ_STAT_OFFSET);
+ enable = readl_relaxed(lldev->evca + EVCA_IRQ_EN_OFFSET);
+ cause = status & enable;
+
+ if ((cause & (BIT(IRQ_TR_CH_INVALID_TRE_BIT_POS))) ||
+ (cause & BIT(IRQ_TR_CH_TRE_RD_RSP_ER_BIT_POS)) ||
+ (cause & BIT(IRQ_EV_CH_WR_RESP_BIT_POS)) ||
+ (cause & BIT(IRQ_TR_CH_DATA_RD_ER_BIT_POS)) ||
+ (cause & BIT(IRQ_TR_CH_DATA_WR_ER_BIT_POS))) {
+ u8 err_code = EVRE_STATUS_ERROR;
+ u8 err_info = 0xFF;
+
+ /* Clear out pending interrupts */
+ writel(cause, lldev->evca + EVCA_IRQ_CLR_OFFSET);
+
+ dev_err(lldev->dev, "error 0x%x, resetting...\n", cause);
+
+ hidma_cleanup_pending_tre(lldev, err_info, err_code);
+
+ /* reset the channel for recovery */
+ if (hidma_ll_setup(lldev)) {
+ dev_err(lldev->dev,
+ "channel reinitialize failed after error\n");
+ return;
+ }
+ hidma_ll_enable_irq(lldev, ENABLE_IRQS);
+ return;
+ }
+
+ /*
+ * Try to consume as many EVREs as possible.
+ * skip this loop if the interrupt is spurious.
+ */
+ while (cause && repeat) {
+ unsigned long start = jiffies;
+
+ /* This timeout should be sufficient for the core to finish */
+ timeout = start + msecs_to_jiffies(500);
+
+ while (lldev->pending_tre_count) {
+ hidma_handle_tre_completion(lldev);
+ if (time_is_before_jiffies(timeout)) {
+ dev_warn(lldev->dev,
+ "ISR timeout %lx-%lx from %lx [%d]\n",
+ jiffies, timeout, start,
+ lldev->pending_tre_count);
+ break;
+ }
+ }
+
+ /* We consumed TREs or there are pending TREs or EVREs. */
+ writel_relaxed(cause, lldev->evca + EVCA_IRQ_CLR_OFFSET);
+
+ /*
+ * Another interrupt might have arrived while we are
+ * processing this one. Read the new cause.
+ */
+ status = readl_relaxed(lldev->evca + EVCA_IRQ_STAT_OFFSET);
+ enable = readl_relaxed(lldev->evca + EVCA_IRQ_EN_OFFSET);
+ cause = status & enable;
+
+ repeat--;
+ }
+}
+
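+/*
+ * Enable the event channel first and then the transfer channel, waiting
+ * for each to report the ENABLED or RUNNING state.
+ */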
+static int hidma_ll_enable(struct hidma_lldev *lldev)
+{
+ u32 val;
+ int ret;
+
+ val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
+ val &= ~(CH_CONTROL_MASK << 16);
+ val |= CH_ENABLE << 16;
+ writel(val, lldev->evca + EVCA_CTRLSTS_OFFSET);
+
+ ret = readl_poll_timeout(lldev->evca + EVCA_CTRLSTS_OFFSET, val,
+ ((((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_ENABLED)
+ ||
+ (((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_RUNNING)),
+ 1000, 10000);
+ if (ret) {
+ dev_err(lldev->dev, "event channel did not get enabled\n");
+ return ret;
+ }
+
+ val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
+ val &= ~(CH_CONTROL_MASK << 16);
+ val |= CH_ENABLE << 16;
+ writel(val, lldev->trca + TRCA_CTRLSTS_OFFSET);
+
+ ret = readl_poll_timeout(lldev->trca + TRCA_CTRLSTS_OFFSET, val,
+ ((((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_ENABLED)
+ ||
+ (((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_RUNNING)),
+ 1000, 10000);
+ if (ret) {
+ dev_err(lldev->dev, "transfer channel did not get enabled\n");
+ return ret;
+ }
+
+ lldev->trch_state = CH_ENABLED;
+ lldev->evch_state = CH_ENABLED;
+
+ return 0;
+}
+
+int hidma_ll_resume(struct hidma_lldev *lldev)
+{
+ return hidma_ll_enable(lldev);
+}
+
+static void hidma_ll_hw_start(struct hidma_lldev *lldev)
+{
+ unsigned long irqflags;
+
+ spin_lock_irqsave(&lldev->lock, irqflags);
+ writel(lldev->tre_write_offset, lldev->trca + TRCA_DOORBELL_OFFSET);
+ spin_unlock_irqrestore(&lldev->lock, irqflags);
+}
+
+bool hidma_ll_isenabled(struct hidma_lldev *lldev)
+{
+ u32 val;
+
+ val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
+ lldev->trch_state = (val >> CH_STATE_BIT_POS) & CH_STATE_MASK;
+ val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
+ lldev->evch_state = (val >> CH_STATE_BIT_POS) & CH_STATE_MASK;
+
+ /* both channels have to be enabled before calling this function */
+ if (((lldev->trch_state == CH_ENABLED) ||
+ (lldev->trch_state == CH_RUNNING)) &&
+ ((lldev->evch_state == CH_ENABLED) ||
+ (lldev->evch_state == CH_RUNNING)))
+ return true;
+
+ return false;
+}
+
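+/*
+ * Copy a previously prepared TRE into the hardware ring and mark it
+ * pending. The transfer channel doorbell is rung later via
+ * hidma_ll_start().
+ */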
+void hidma_ll_queue_request(struct hidma_lldev *lldev, u32 tre_ch)
+{
+ struct hidma_tre *tre;
+ unsigned long flags;
+
+ tre = &lldev->trepool[tre_ch];
+
+ /* copy the TRE into its location in the TRE ring */
+ spin_lock_irqsave(&lldev->lock, flags);
+ tre->tre_index = lldev->tre_write_offset / TRE_SIZE;
+ lldev->pending_tre_list[tre->tre_index] = tre;
+ memcpy(lldev->tre_ring + lldev->tre_write_offset, &tre->tre_local[0],
+ TRE_SIZE);
+ lldev->tx_status_list[tre->chidx].err_code = 0;
+ lldev->tx_status_list[tre->chidx].err_info = 0;
+ tre->queued = 1;
+ lldev->pending_tre_count++;
+ lldev->tre_write_offset = (lldev->tre_write_offset + TRE_SIZE)
+ % lldev->tre_ring_size;
+ spin_unlock_irqrestore(&lldev->lock, flags);
+}
+
+void hidma_ll_start(struct hidma_lldev *lldev)
+{
+ hidma_ll_hw_start(lldev);
+}
+
+/*
+ * Note that even though the channel is stopped here, a transaction
+ * that is already in flight will still complete and invoke its
+ * callback. Pausing only prevents further requests from being issued.
+ */
+int hidma_ll_pause(struct hidma_lldev *lldev)
+{
+ u32 val;
+ int ret;
+
+ val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
+ lldev->evch_state = (val >> CH_STATE_BIT_POS) & CH_STATE_MASK;
+ val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
+ lldev->trch_state = (val >> CH_STATE_BIT_POS) & CH_STATE_MASK;
+
+ /* already suspended by this OS */
+ if ((lldev->trch_state == CH_SUSPENDED) ||
+ (lldev->evch_state == CH_SUSPENDED))
+ return 0;
+
+ /* already stopped by the manager */
+ if ((lldev->trch_state == CH_STOPPED) ||
+ (lldev->evch_state == CH_STOPPED))
+ return 0;
+
+ val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
+ val &= ~(CH_CONTROL_MASK << 16);
+ val |= CH_SUSPEND << 16;
+ writel(val, lldev->trca + TRCA_CTRLSTS_OFFSET);
+
+ /*
+ * Wait for the suspend to take effect: poll the channel state
+ * every 1ms, for up to 10ms.
+ */
+ ret = readl_poll_timeout(lldev->trca + TRCA_CTRLSTS_OFFSET, val,
+ (((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_SUSPENDED),
+ 1000, 10000);
+ if (ret)
+ return ret;
+
+ val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
+ val &= ~(CH_CONTROL_MASK << 16);
+ val |= CH_SUSPEND << 16;
+ writel(val, lldev->evca + EVCA_CTRLSTS_OFFSET);
+
+ /*
+ * Wait for the event channel suspend to take effect: poll the
+ * channel state every 1ms, for up to 10ms.
+ */
+ ret = readl_poll_timeout(lldev->evca + EVCA_CTRLSTS_OFFSET, val,
+ (((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_SUSPENDED),
+ 1000, 10000);
+ if (ret)
+ return ret;
+
+ lldev->trch_state = CH_SUSPENDED;
+ lldev->evch_state = CH_SUSPENDED;
+ return 0;
+}
+
+void hidma_ll_set_transfer_params(struct hidma_lldev *lldev, u32 tre_ch,
+ dma_addr_t src, dma_addr_t dest, u32 len,
+ u32 flags)
+{
+ struct hidma_tre *tre;
+ u32 *tre_local;
+
+ if (tre_ch >= lldev->nr_tres) {
+ dev_err(lldev->dev,
+ "invalid TRE number in transfer params:%d", tre_ch);
+ return;
+ }
+
+ tre = &lldev->trepool[tre_ch];
+ if (atomic_read(&tre->allocated) != true) {
+ dev_err(lldev->dev,
+ "trying to set params on an unused TRE:%d", tre_ch);
+ return;
+ }
+
+ tre_local = &tre->tre_local[0];
+ tre_local[TRE_LEN_IDX] = len;
+ tre_local[TRE_SRC_LOW_IDX] = lower_32_bits(src);
+ tre_local[TRE_SRC_HI_IDX] = upper_32_bits(src);
+ tre_local[TRE_DEST_LOW_IDX] = lower_32_bits(dest);
+ tre_local[TRE_DEST_HI_IDX] = upper_32_bits(dest);
+ tre->int_flags = flags;
+}
+
+/*
+ * Called during initialization and after an error condition
+ * to restore hardware state.
+ */
+int hidma_ll_setup(struct hidma_lldev *lldev)
+{
+ int rc;
+ u64 addr;
+ u32 val;
+ u32 nr_tres = lldev->nr_tres;
+
+ lldev->pending_tre_count = 0;
+ lldev->tre_processed_off = 0;
+ lldev->evre_processed_off = 0;
+ lldev->tre_write_offset = 0;
+
+ /* disable interrupts */
+ hidma_ll_enable_irq(lldev, 0);
+
+ /* clear all pending interrupts */
+ val = readl(lldev->evca + EVCA_IRQ_STAT_OFFSET);
+ writel(val, lldev->evca + EVCA_IRQ_CLR_OFFSET);
+
+ rc = hidma_ll_reset(lldev);
+ if (rc)
+ return rc;
+
+ /*
+ * Clear all pending interrupts again.
+ * Otherwise, we observe reset complete interrupts.
+ */
+ val = readl(lldev->evca + EVCA_IRQ_STAT_OFFSET);
+ writel(val, lldev->evca + EVCA_IRQ_CLR_OFFSET);
+
+ /* disable interrupts again after reset */
+ hidma_ll_enable_irq(lldev, 0);
+
+ addr = lldev->tre_ring_handle;
+ writel(lower_32_bits(addr), lldev->trca + TRCA_RING_LOW_OFFSET);
+ writel(upper_32_bits(addr), lldev->trca + TRCA_RING_HIGH_OFFSET);
+ writel(lldev->tre_ring_size, lldev->trca + TRCA_RING_LEN_OFFSET);
+
+ addr = lldev->evre_ring_handle;
+ writel(lower_32_bits(addr), lldev->evca + EVCA_RING_LOW_OFFSET);
+ writel(upper_32_bits(addr), lldev->evca + EVCA_RING_HIGH_OFFSET);
+ writel(EVRE_SIZE * nr_tres, lldev->evca + EVCA_RING_LEN_OFFSET);
+
+ /* support IRQ only for now */
+ val = readl(lldev->evca + EVCA_INTCTRL_OFFSET);
+ val &= ~0xF;
+ val |= 0x1;
+ writel(val, lldev->evca + EVCA_INTCTRL_OFFSET);
+
+ /* clear all pending interrupts and enable them */
+ writel(ENABLE_IRQS, lldev->evca + EVCA_IRQ_CLR_OFFSET);
+ hidma_ll_enable_irq(lldev, ENABLE_IRQS);
+
+ rc = hidma_ll_enable(lldev);
+ if (rc)
+ return rc;
+
+ return rc;
+}
+
+struct hidma_lldev *hidma_ll_init(struct device *dev, u32 nr_tres,
+ void __iomem *trca, void __iomem *evca,
+ u8 evridx)
+{
+ u32 required_bytes;
+ struct hidma_lldev *lldev;
+ int rc;
+
+ if (!trca || !evca || !dev || !nr_tres)
+ return NULL;
+
+ /* need at least four TREs */
+ if (nr_tres < 4)
+ return NULL;
+
+ /* need one extra TRE so the ring never becomes completely full */
+ nr_tres += 1;
+
+ lldev = devm_kzalloc(dev, sizeof(struct hidma_lldev), GFP_KERNEL);
+ if (!lldev)
+ return NULL;
+
+ lldev->evca = evca;
+ lldev->trca = trca;
+ lldev->dev = dev;
+ required_bytes = sizeof(struct hidma_tre) * nr_tres;
+ lldev->trepool = devm_kzalloc(lldev->dev, required_bytes, GFP_KERNEL);
+ if (!lldev->trepool)
+ return NULL;
+
+ required_bytes = sizeof(lldev->pending_tre_list[0]) * nr_tres;
+ lldev->pending_tre_list = devm_kzalloc(dev, required_bytes, GFP_KERNEL);
+ if (!lldev->pending_tre_list)
+ return NULL;
+
+ required_bytes = sizeof(lldev->tx_status_list[0]) * nr_tres;
+ lldev->tx_status_list = devm_kzalloc(dev, required_bytes, GFP_KERNEL);
+ if (!lldev->tx_status_list)
+ return NULL;
+
+ lldev->tre_ring = dmam_alloc_coherent(dev, (TRE_SIZE + 1) * nr_tres,
+ &lldev->tre_ring_handle, GFP_KERNEL);
+ if (!lldev->tre_ring)
+ return NULL;
+
+ memset(lldev->tre_ring, 0, (TRE_SIZE + 1) * nr_tres);
+ lldev->tre_ring_size = TRE_SIZE * nr_tres;
+ lldev->nr_tres = nr_tres;
+
+ /* the TRE ring has to be TRE_SIZE aligned */
+ if (!IS_ALIGNED(lldev->tre_ring_handle, TRE_SIZE)) {
+ u8 tre_ring_shift;
+
+ tre_ring_shift = lldev->tre_ring_handle % TRE_SIZE;
+ tre_ring_shift = TRE_SIZE - tre_ring_shift;
+ lldev->tre_ring_handle += tre_ring_shift;
+ lldev->tre_ring += tre_ring_shift;
+ }
+
+ lldev->evre_ring = dmam_alloc_coherent(dev, (EVRE_SIZE + 1) * nr_tres,
+ &lldev->evre_ring_handle, GFP_KERNEL);
+ if (!lldev->evre_ring)
+ return NULL;
+
+ memset(lldev->evre_ring, 0, (EVRE_SIZE + 1) * nr_tres);
+ lldev->evre_ring_size = EVRE_SIZE * nr_tres;
+
+ /* the EVRE ring has to be EVRE_SIZE aligned */
+ if (!IS_ALIGNED(lldev->evre_ring_handle, EVRE_SIZE)) {
+ u8 evre_ring_shift;
+
+ evre_ring_shift = lldev->evre_ring_handle % EVRE_SIZE;
+ evre_ring_shift = EVRE_SIZE - evre_ring_shift;
+ lldev->evre_ring_handle += evre_ring_shift;
+ lldev->evre_ring += evre_ring_shift;
+ }
+ lldev->nr_tres = nr_tres;
+ lldev->evridx = evridx;
+
+ rc = kfifo_alloc(&lldev->handoff_fifo,
+ nr_tres * sizeof(struct hidma_tre *), GFP_KERNEL);
+ if (rc)
+ return NULL;
+
+ rc = hidma_ll_setup(lldev);
+ if (rc)
+ return NULL;
+
+ spin_lock_init(&lldev->lock);
+ tasklet_init(&lldev->task, hidma_ll_tre_complete, (unsigned long)lldev);
+ lldev->initialized = 1;
+ hidma_ll_enable_irq(lldev, ENABLE_IRQS);
+ return lldev;
+}
+
+int hidma_ll_uninit(struct hidma_lldev *lldev)
+{
+ int rc = 0;
+ u32 val;
+
+ if (!lldev)
+ return -ENODEV;
+
+ if (lldev->initialized) {
+ u32 required_bytes;
+
+ lldev->initialized = 0;
+
+ required_bytes = sizeof(struct hidma_tre) * lldev->nr_tres;
+ tasklet_kill(&lldev->task);
+ memset(lldev->trepool, 0, required_bytes);
+ lldev->trepool = NULL;
+ lldev->pending_tre_count = 0;
+ lldev->tre_write_offset = 0;
+
+ rc = hidma_ll_reset(lldev);
+
+ /*
+ * Clear all pending interrupts again.
+ * Otherwise, we observe reset complete interrupts.
+ */
+ val = readl(lldev->evca + EVCA_IRQ_STAT_OFFSET);
+ writel(val, lldev->evca + EVCA_IRQ_CLR_OFFSET);
+ hidma_ll_enable_irq(lldev, 0);
+ }
+ return rc;
+}
+
+irqreturn_t hidma_ll_inthandler(int chirq, void *arg)
+{
+ struct hidma_lldev *lldev = arg;
+
+ hidma_ll_int_handler_internal(lldev);
+ return IRQ_HANDLED;
+}
+
+enum dma_status hidma_ll_status(struct hidma_lldev *lldev, u32 tre_ch)
+{
+ enum dma_status ret = DMA_ERROR;
+ unsigned long flags;
+ u8 err_code;
+
+ spin_lock_irqsave(&lldev->lock, flags);
+ err_code = lldev->tx_status_list[tre_ch].err_code;
+
+ if (err_code & EVRE_STATUS_COMPLETE)
+ ret = DMA_COMPLETE;
+ else if (err_code & EVRE_STATUS_ERROR)
+ ret = DMA_ERROR;
+ else
+ ret = DMA_IN_PROGRESS;
+ spin_unlock_irqrestore(&lldev->lock, flags);
+
+ return ret;
+}
--
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project

2015-11-23 18:06:32

by Andy Shevchenko

[permalink] [raw]
Subject: Re: [PATCH V7 2/3] dma: add Qualcomm Technologies HIDMA management driver

On Mon, Nov 23, 2015 at 4:28 AM, Sinan Kaya <[email protected]> wrote:
> The Qualcomm Technologies HIDMA device has been designed
> to support virtualization technology. The driver has been
> divided into two to follow the hardware design.
>
> 1. HIDMA Management driver
> 2. HIDMA Channel driver
>
> Each HIDMA HW consists of multiple channels. These channels
> share some set of common parameters. These parameters are
> initialized by the management driver during power up.
> Same management driver is used for monitoring the execution
> of the channels. Management driver can change the performance
> behavior dynamically such as bandwidth allocation and
> prioritization.
>
> The management driver is executed in hypervisor context and
> is the main management entity for all channels provided by
> the device.
>

Thanks for the update!

Looks cool! Though there are still a few spelling and style nitpicks
below (you may ignore them if you wish).

Reviewed-by: Andy Shevchenko <[email protected]>

> Signed-off-by: Sinan Kaya <[email protected]>
> ---
> .../ABI/testing/sysfs-platform-hidma-mgmt | 97 +++++++
> .../devicetree/bindings/dma/qcom_hidma_mgmt.txt | 61 ++++
> drivers/dma/qcom/Kconfig | 11 +
> drivers/dma/qcom/Makefile | 1 +
> drivers/dma/qcom/hidma_mgmt.c | 308 +++++++++++++++++++++
> drivers/dma/qcom/hidma_mgmt.h | 39 +++
> drivers/dma/qcom/hidma_mgmt_sys.c | 299 ++++++++++++++++++++
> 7 files changed, 816 insertions(+)
> create mode 100644 Documentation/ABI/testing/sysfs-platform-hidma-mgmt
> create mode 100644 Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt
> create mode 100644 drivers/dma/qcom/hidma_mgmt.c
> create mode 100644 drivers/dma/qcom/hidma_mgmt.h
> create mode 100644 drivers/dma/qcom/hidma_mgmt_sys.c
>
> diff --git a/Documentation/ABI/testing/sysfs-platform-hidma-mgmt b/Documentation/ABI/testing/sysfs-platform-hidma-mgmt
> new file mode 100644
> index 0000000..4fc6876
> --- /dev/null
> +++ b/Documentation/ABI/testing/sysfs-platform-hidma-mgmt
> @@ -0,0 +1,97 @@
> +What: /sys/devices/platform/hidma-mgmt*/chan*/priority
> + /sys/devices/platform/QCOM8060:*/chan*/priority
> +Date: Nov 2015
> +KernelVersion: 4.4
> +Contact: "Sinan Kaya <[email protected]>"
> +Description:
> + Contains either 0 or 1 and indicates if the DMA channel is a
> + low priority (0) or high priority (1) channel.
> +
> +What: /sys/devices/platform/hidma-mgmt*/chan*/weight
> + /sys/devices/platform/QCOM8060:*/chan*/weight
> +Date: Nov 2015
> +KernelVersion: 4.4
> +Contact: "Sinan Kaya <[email protected]>"
> +Description:
> + Contains 0..15 and indicates the weight of the channel among
> + equal priority channels during round robin scheduling.
> +
> +What: /sys/devices/platform/hidma-mgmt*/chreset_timeout_cycles
> + /sys/devices/platform/QCOM8060:*/chreset_timeout_cycles
> +Date: Nov 2015
> +KernelVersion: 4.4
> +Contact: "Sinan Kaya <[email protected]>"
> +Description:
> + Contains the platform specific cycle value to wait after a
> + reset command is issued. If the value is chosen too short,
> + then the HW will issue a reset failure interrupt. The value
> + is platform specific and should not be changed without
> + consultation.
> +
> +What: /sys/devices/platform/hidma-mgmt*/dma_channels
> + /sys/devices/platform/QCOM8060:*/dma_channels
> +Date: Nov 2015
> +KernelVersion: 4.4
> +Contact: "Sinan Kaya <[email protected]>"
> +Description:
> + Contains the number of dma channels supported by one instance
> + of HIDMA hardware. The value may change from chip to chip.
> +
> +What: /sys/devices/platform/hidma-mgmt*/hw_version_major
> + /sys/devices/platform/QCOM8060:*/hw_version_major
> +Date: Nov 2015
> +KernelVersion: 4.4
> +Contact: "Sinan Kaya <[email protected]>"
> +Description:
> + Version number major for the hardware.
> +
> +What: /sys/devices/platform/hidma-mgmt*/hw_version_minor
> + /sys/devices/platform/QCOM8060:*/hw_version_minor
> +Date: Nov 2015
> +KernelVersion: 4.4
> +Contact: "Sinan Kaya <[email protected]>"
> +Description:
> + Version number minor for the hardware.
> +
> +What: /sys/devices/platform/hidma-mgmt*/max_rd_xactions
> + /sys/devices/platform/QCOM8060:*/max_rd_xactions
> +Date: Nov 2015
> +KernelVersion: 4.4
> +Contact: "Sinan Kaya <[email protected]>"
> +Description:
> + Contains a value between 0 and 31. Maximum number of
> + read transactions that can be issued back to back.
> + Choosing a higher number gives better performance but
> + can also cause performance reduction to other peripherals
> + sharing the same bus.
> +
> +What: /sys/devices/platform/hidma-mgmt*/max_read_request
> + /sys/devices/platform/QCOM8060:*/max_read_request
> +Date: Nov 2015
> +KernelVersion: 4.4
> +Contact: "Sinan Kaya <[email protected]>"
> +Description:
> + Size of each read request. The value needs to be a power
> + of two and can be between 128 and 1024.
> +
> +What: /sys/devices/platform/hidma-mgmt*/max_wr_xactions
> + /sys/devices/platform/QCOM8060:*/max_wr_xactions
> +Date: Nov 2015
> +KernelVersion: 4.4
> +Contact: "Sinan Kaya <[email protected]>"
> +Description:
> + Contains a value between 0 and 31. Maximum number of
> + write transactions that can be issued back to back.
> + Choosing a higher number gives better performance but
> + can also cause performance reduction to other peripherals
> + sharing the same bus.
> +
> +
> +What: /sys/devices/platform/hidma-mgmt*/max_write_request
> + /sys/devices/platform/QCOM8060:*/max_write_request
> +Date: Nov 2015
> +KernelVersion: 4.4
> +Contact: "Sinan Kaya <[email protected]>"
> +Description:
> + Size of each write request. The value needs to be a power
> + of two and can be between 128 and 1024.
> diff --git a/Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt b/Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt
> new file mode 100644
> index 0000000..eb053b9
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt
> @@ -0,0 +1,61 @@
> +Qualcomm Technologies HIDMA Management interface
> +
> +The Qualcomm Technologies HIDMA device has been designed
> +to support virtualization technology. The driver has been
> +divided into two to follow the hardware design. The management
> +driver is executed in hypervisor context and is the main
> +management entity for all channels provided by the device.
> +
> +Each HIDMA HW consists of multiple channels. These channels
> +share some set of common parameters. These parameters are
> +initialized by the management driver during power up.
> +Same management driver is used for monitoring the execution
> +of the channels. Management driver can change the performance
> +behavior dynamically such as bandwidth allocation and
> +prioritization.
> +
> +All channel devices get probed in the hypervisor
> +context during power up. They show up as DMA engine
> +DMA channels. Then, before starting virtualization, each
> +channel device is unbound from the hypervisor by VFIO
> +and assigned to the guest machine for control.
> +
> +This management driver will be used by the system
> +admin to monitor/reset the execution state of the DMA
> +channels. This will be the management interface.
> +
> +
> +Required properties:
> +- compatible: "qcom,hidma-mgmt-1.0";
> +- reg: Address range for DMA device
> +- dma-channels: Number of channels supported by this DMA controller.
> +- max-write-burst-bytes: Maximum write burst in bytes. A memcpy request is
> + fragmented to multiples of this amount.
> +- max-read-burst-bytes: Maximum read burst in bytes. A memcpy request is
> + fragmented to multiples of this amount.
> +- max-write-transactions: Maximum write transactions to perform in a burst
> +- max-read-transactions: Maximum read transactions to perform in a burst
> +- channel-reset-timeout-cycles: Channel reset timeout in cycles for this SOC.
> +- channel-priority: Priority of the channel.
> + Each dma channel shares the same HW bandwidth with other dma channels.
> + If two requests reach the HW at the same time from a low priority and
> + a high priority channel, the high priority channel will claim the bus.
> + 0=low priority, 1=high priority
> +- channel-weight: Round robin weight of the channel
> + Since there are only two priority levels supported, scheduling among
> + the equal priority channels is done via weights.
> +
> +Example:
> +
> + hidma-mgmt@f9984000 {
> + compatible = "qcom,hidma-mgmt-1.0";
> + reg = <0xf9984000 0x15000>;
> + dma-channels = 6;
> + max-write-burst-bytes = 1024;
> + max-read-burst-bytes = 1024;
> + max-write-transactions = 31;
> + max-read-transactions = 31;
> + channel-reset-timeout-cycles = 0x500;

> + channel-priority = < 1 1 0 0 0 0>;
> + channel-weight = < 1 13 10 3 4 5>;

I don't know if it's okay, but I saw format of arrays like <X Y Z> (no
head space).

> + };
> diff --git a/drivers/dma/qcom/Kconfig b/drivers/dma/qcom/Kconfig
> index f17c272..ebe5b8c 100644
> --- a/drivers/dma/qcom/Kconfig
> +++ b/drivers/dma/qcom/Kconfig
> @@ -6,3 +6,14 @@ config QCOM_BAM_DMA
> ---help---
> Enable support for the QCOM BAM DMA controller. This controller
> provides DMA capabilities for a variety of on-chip devices.
> +
> +config QCOM_HIDMA_MGMT
> + tristate "Qualcomm Technologies HIDMA Management support"
> + select DMA_ENGINE
> + help
> + Enable support for the Qualcomm Technologies HIDMA Management.
> + Each DMA device requires one management interface driver
> + for basic initialization before QCOM_HIDMA channel driver can
> + start managing the channels. In a virtualized environment,
> + the guest OS would run QCOM_HIDMA channel driver and the
> + hypervisor would run the QCOM_HIDMA_MGMT management driver.
> diff --git a/drivers/dma/qcom/Makefile b/drivers/dma/qcom/Makefile
> index f612ae3..1a5a96d 100644
> --- a/drivers/dma/qcom/Makefile
> +++ b/drivers/dma/qcom/Makefile
> @@ -1 +1,2 @@
> obj-$(CONFIG_QCOM_BAM_DMA) += bam_dma.o
> +obj-$(CONFIG_QCOM_HIDMA_MGMT) += hidma_mgmt.o hidma_mgmt_sys.o
> diff --git a/drivers/dma/qcom/hidma_mgmt.c b/drivers/dma/qcom/hidma_mgmt.c
> new file mode 100644
> index 0000000..ea78a4d
> --- /dev/null
> +++ b/drivers/dma/qcom/hidma_mgmt.c
> @@ -0,0 +1,308 @@
> +/*
> + * Qualcomm Technologies HIDMA DMA engine Management interface
> + *
> + * Copyright (c) 2015, The Linux Foundation. All rights reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 and
> + * only version 2 as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + */
> +
> +#include <linux/dmaengine.h>
> +#include <linux/acpi.h>
> +#include <linux/of.h>
> +#include <linux/property.h>
> +#include <linux/interrupt.h>
> +#include <linux/platform_device.h>
> +#include <linux/module.h>
> +#include <linux/uaccess.h>
> +#include <linux/slab.h>
> +#include <linux/pm_runtime.h>
> +#include <linux/bitops.h>
> +
> +#include "hidma_mgmt.h"
> +
> +#define QOS_N_OFFSET 0x300
> +#define CFG_OFFSET 0x400
> +#define MAX_BUS_REQ_LEN_OFFSET 0x41C
> +#define MAX_XACTIONS_OFFSET 0x420
> +#define HW_VERSION_OFFSET 0x424
> +#define CHRESET_TIMEOUT_OFFSET 0x418
> +
> +#define MAX_WR_XACTIONS_MASK GENMASK(4, 0)
> +#define MAX_RD_XACTIONS_MASK GENMASK(4, 0)
> +#define WEIGHT_MASK GENMASK(6, 0)
> +#define MAX_BUS_REQ_LEN_MASK GENMASK(15, 0)
> +#define CHRESET_TIMEOUUT_MASK GENMASK(19, 0)

Spelling?

> +
> +#define MAX_WR_XACTIONS_BIT_POS 16
> +#define MAX_BUS_WR_REQ_BIT_POS 16
> +#define WRR_BIT_POS 8
> +#define PRIORITY_BIT_POS 15
> +
> +#define AUTOSUSPEND_TIMEOUT 2000
> +#define MAX_CHANNEL_WEIGHT 15
> +
> +int hidma_mgmt_setup(struct hidma_mgmt_dev *mgmtdev)
> +{
> + unsigned int i;
> + u32 val;
> +
> + if (!is_power_of_2(mgmtdev->max_write_request) ||
> + (mgmtdev->max_write_request < 128) ||
> + (mgmtdev->max_write_request > 1024)) {
> + dev_err(&mgmtdev->pdev->dev, "invalid write request %d\n",
> + mgmtdev->max_write_request);
> + return -EINVAL;
> + }
> +
> + if (!is_power_of_2(mgmtdev->max_read_request) ||
> + (mgmtdev->max_read_request < 128) ||
> + (mgmtdev->max_read_request > 1024)) {
> + dev_err(&mgmtdev->pdev->dev, "invalid read request %d\n",
> + mgmtdev->max_read_request);
> + return -EINVAL;
> + }
> +
> + if (mgmtdev->max_wr_xactions > MAX_WR_XACTIONS_MASK) {
> + dev_err(&mgmtdev->pdev->dev,
> + "max_wr_xactions cannot be bigger than %ld\n",
> + MAX_WR_XACTIONS_MASK);
> + return -EINVAL;
> + }
> +
> + if (mgmtdev->max_rd_xactions > MAX_RD_XACTIONS_MASK) {
> + dev_err(&mgmtdev->pdev->dev,
> + "max_rd_xactions cannot be bigger than %ld\n",
> + MAX_RD_XACTIONS_MASK);
> + return -EINVAL;
> + }
> +
> + for (i = 0; i < mgmtdev->dma_channels; i++) {
> + if (mgmtdev->priority[i] > 1) {
> + dev_err(&mgmtdev->pdev->dev,
> + "priority can be 0 or 1\n");
> + return -EINVAL;
> + }
> +
> + if (mgmtdev->weight[i] > MAX_CHANNEL_WEIGHT) {
> + dev_err(&mgmtdev->pdev->dev,
> + "max value of weight can be %d.\n",
> + MAX_CHANNEL_WEIGHT);
> + return -EINVAL;
> + }
> +
> + /* weight needs to be at least one */
> + if (mgmtdev->weight[i] == 0)
> + mgmtdev->weight[i] = 1;
> + }
> +
> + pm_runtime_get_sync(&mgmtdev->pdev->dev);
> + val = readl(mgmtdev->virtaddr + MAX_BUS_REQ_LEN_OFFSET);
> + val &= ~(MAX_BUS_REQ_LEN_MASK << MAX_BUS_WR_REQ_BIT_POS);
> + val |= mgmtdev->max_write_request << MAX_BUS_WR_REQ_BIT_POS;
> + val &= ~MAX_BUS_REQ_LEN_MASK;
> + val |= mgmtdev->max_read_request;
> + writel(val, mgmtdev->virtaddr + MAX_BUS_REQ_LEN_OFFSET);
> +
> + val = readl(mgmtdev->virtaddr + MAX_XACTIONS_OFFSET);
> + val &= ~(MAX_WR_XACTIONS_MASK << MAX_WR_XACTIONS_BIT_POS);
> + val |= mgmtdev->max_wr_xactions << MAX_WR_XACTIONS_BIT_POS;
> + val &= ~MAX_RD_XACTIONS_MASK;
> + val |= mgmtdev->max_rd_xactions;
> + writel(val, mgmtdev->virtaddr + MAX_XACTIONS_OFFSET);
> +
> + mgmtdev->hw_version = readl(mgmtdev->virtaddr + HW_VERSION_OFFSET);
> + mgmtdev->hw_version_major = (mgmtdev->hw_version >> 28) & 0xF;
> + mgmtdev->hw_version_minor = (mgmtdev->hw_version >> 16) & 0xF;
> +
> + for (i = 0; i < mgmtdev->dma_channels; i++) {
> + val = readl(mgmtdev->virtaddr + QOS_N_OFFSET + (4 * i));
> + val &= ~(1 << PRIORITY_BIT_POS);
> + val |= (mgmtdev->priority[i] & 0x1) << PRIORITY_BIT_POS;
> + val &= ~(WEIGHT_MASK << WRR_BIT_POS);
> + val |= (mgmtdev->weight[i] & WEIGHT_MASK) << WRR_BIT_POS;
> + writel(val, mgmtdev->virtaddr + QOS_N_OFFSET + (4 * i));
> + }
> +
> + val = readl(mgmtdev->virtaddr + CHRESET_TIMEOUT_OFFSET);
> + val &= ~CHRESET_TIMEOUUT_MASK;

Ditto. _TIMEOUT_?

> + val |= mgmtdev->chreset_timeout_cycles & CHRESET_TIMEOUUT_MASK;

Ditto.

> + writel(val, mgmtdev->virtaddr + CHRESET_TIMEOUT_OFFSET);
> +
> + pm_runtime_mark_last_busy(&mgmtdev->pdev->dev);
> + pm_runtime_put_autosuspend(&mgmtdev->pdev->dev);
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(hidma_mgmt_setup);
> +
> +static int hidma_mgmt_probe(struct platform_device *pdev)
> +{
> + struct hidma_mgmt_dev *mgmtdev;
> + struct resource *res;
> + void __iomem *virtaddr;
> + int irq;
> + int rc;
> + u32 val;
> +
> + pm_runtime_set_autosuspend_delay(&pdev->dev, AUTOSUSPEND_TIMEOUT);
> + pm_runtime_use_autosuspend(&pdev->dev);
> + pm_runtime_set_active(&pdev->dev);
> + pm_runtime_enable(&pdev->dev);
> + pm_runtime_get_sync(&pdev->dev);
> +
> + res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> + virtaddr = devm_ioremap_resource(&pdev->dev, res);
> + if (IS_ERR(virtaddr)) {
> + rc = -ENOMEM;
> + goto out;
> + }
> +
> + irq = platform_get_irq(pdev, 0);
> + if (irq < 0) {
> + dev_err(&pdev->dev, "irq resources not found\n");
> + rc = irq;
> + goto out;
> + }
> +
> + mgmtdev = devm_kzalloc(&pdev->dev, sizeof(*mgmtdev), GFP_KERNEL);
> + if (!mgmtdev) {
> + rc = -ENOMEM;
> + goto out;
> + }
> +
> + mgmtdev->pdev = pdev;
> + mgmtdev->addrsize = resource_size(res);
> + mgmtdev->virtaddr = virtaddr;
> +
> + rc = device_property_read_u32(&pdev->dev, "dma-channels",
> + &mgmtdev->dma_channels);
> + if (rc) {
> + dev_err(&pdev->dev, "number of channels missing\n");
> + goto out;
> + }
> +
> + rc = device_property_read_u32(&pdev->dev,
> + "channel-reset-timeout-cycles",
> + &mgmtdev->chreset_timeout_cycles);
> + if (rc) {
> + dev_err(&pdev->dev, "channel reset timeout missing\n");
> + goto out;
> + }
> +
> + rc = device_property_read_u32(&pdev->dev, "max-write-burst-bytes",
> + &mgmtdev->max_write_request);
> + if (rc) {
> + dev_err(&pdev->dev, "max-write-burst-bytes missing\n");
> + goto out;
> + }
> +
> + rc = device_property_read_u32(&pdev->dev, "max-read-burst-bytes",
> + &mgmtdev->max_read_request);
> + if (rc) {
> + dev_err(&pdev->dev, "max-read-burst-bytes missing\n");
> + goto out;
> + }
> +
> + rc = device_property_read_u32(&pdev->dev, "max-write-transactions",
> + &mgmtdev->max_wr_xactions);
> + if (rc) {
> + dev_err(&pdev->dev, "max-write-transactions missing\n");
> + goto out;
> + }
> +
> + rc = device_property_read_u32(&pdev->dev, "max-read-transactions",
> + &mgmtdev->max_rd_xactions);
> + if (rc) {
> + dev_err(&pdev->dev, "max-read-transactions missing\n");
> + goto out;
> + }
> +
> + mgmtdev->priority = devm_kcalloc(&pdev->dev,
> + mgmtdev->dma_channels, sizeof(*mgmtdev->priority), GFP_KERNEL);
> + if (!mgmtdev->priority) {
> + rc = -ENOMEM;
> + goto out;
> + }
> +
> + mgmtdev->weight = devm_kcalloc(&pdev->dev,
> + mgmtdev->dma_channels, sizeof(*mgmtdev->weight), GFP_KERNEL);
> + if (!mgmtdev->weight) {
> + rc = -ENOMEM;
> + goto out;
> + }
> +
> + rc = device_property_read_u32_array(&pdev->dev, "channel-priority",
> + mgmtdev->priority, mgmtdev->dma_channels);
> + if (rc) {
> + dev_err(&pdev->dev, "channel-priority missing\n");
> + goto out;
> + }
> +
> + rc = device_property_read_u32_array(&pdev->dev, "channel-weight",
> + mgmtdev->weight, mgmtdev->dma_channels);
> + if (rc) {
> + dev_err(&pdev->dev, "channel-weight missing\n");
> + goto out;
> + }
> +
> + rc = hidma_mgmt_setup(mgmtdev);
> + if (rc) {
> + dev_err(&pdev->dev, "setup failed\n");
> + goto out;
> + }
> +
> + /* start the HW */
> + val = readl(mgmtdev->virtaddr + CFG_OFFSET);
> + val |= 1;
> + writel(val, mgmtdev->virtaddr + CFG_OFFSET);
> +
> + rc = hidma_mgmt_init_sys(mgmtdev);
> + if (rc) {
> + dev_err(&pdev->dev, "sysfs setup failed\n");
> + goto out;
> + }
> +
> + dev_info(&pdev->dev,
> + "HW rev: %d.%d @ %pa with %d physical channels\n",
> + mgmtdev->hw_version_major, mgmtdev->hw_version_minor,
> + &res->start, mgmtdev->dma_channels);
> +
> + platform_set_drvdata(pdev, mgmtdev);
> + pm_runtime_mark_last_busy(&pdev->dev);
> + pm_runtime_put_autosuspend(&pdev->dev);
> + return 0;
> +out:
> + pm_runtime_put_sync_suspend(&pdev->dev);
> + pm_runtime_disable(&pdev->dev);
> + return rc;
> +}
> +
> +#if IS_ENABLED(CONFIG_ACPI)
> +static const struct acpi_device_id hidma_mgmt_acpi_ids[] = {
> + {"QCOM8060"},
> + {},
> +};
> +#endif
> +
> +static const struct of_device_id hidma_mgmt_match[] = {
> + { .compatible = "qcom,hidma-mgmt-1.0", },
> + {},
> +};
> +MODULE_DEVICE_TABLE(of, hidma_mgmt_match);
> +
> +static struct platform_driver hidma_mgmt_driver = {
> + .probe = hidma_mgmt_probe,
> + .driver = {
> + .name = "hidma-mgmt",
> + .of_match_table = hidma_mgmt_match,
> + .acpi_match_table = ACPI_PTR(hidma_mgmt_acpi_ids),
> + },
> +};
> +module_platform_driver(hidma_mgmt_driver);
> +MODULE_LICENSE("GPL v2");
> diff --git a/drivers/dma/qcom/hidma_mgmt.h b/drivers/dma/qcom/hidma_mgmt.h
> new file mode 100644
> index 0000000..f7daf33
> --- /dev/null
> +++ b/drivers/dma/qcom/hidma_mgmt.h
> @@ -0,0 +1,39 @@
> +/*
> + * Qualcomm Technologies HIDMA Management common header
> + *
> + * Copyright (c) 2015, The Linux Foundation. All rights reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 and
> + * only version 2 as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + */
> +
> +struct hidma_mgmt_dev {
> + u8 hw_version_major;
> + u8 hw_version_minor;
> +
> + u32 max_wr_xactions;
> + u32 max_rd_xactions;
> + u32 max_write_request;
> + u32 max_read_request;
> + u32 dma_channels;
> + u32 chreset_timeout_cycles;
> + u32 hw_version;
> + u32 *priority;
> + u32 *weight;
> +
> + /* Hardware device constants */
> + void __iomem *virtaddr;
> + resource_size_t addrsize;
> +
> + struct kobject **chroots;
> + struct platform_device *pdev;
> +};
> +
> +int hidma_mgmt_init_sys(struct hidma_mgmt_dev *dev);
> +int hidma_mgmt_setup(struct hidma_mgmt_dev *mgmtdev);
> diff --git a/drivers/dma/qcom/hidma_mgmt_sys.c b/drivers/dma/qcom/hidma_mgmt_sys.c
> new file mode 100644
> index 0000000..a4a6414
> --- /dev/null
> +++ b/drivers/dma/qcom/hidma_mgmt_sys.c
> @@ -0,0 +1,299 @@
> +/*
> + * Qualcomm Technologies HIDMA Management SYS interface
> + *
> + * Copyright (c) 2015, The Linux Foundation. All rights reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 and
> + * only version 2 as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + */
> +
> +#include <linux/sysfs.h>
> +#include <linux/platform_device.h>
> +
> +#include "hidma_mgmt.h"
> +
> +struct hidma_chan_attr {
> + struct hidma_mgmt_dev *mdev;
> + int index;
> + struct kobj_attribute attr;
> +};
> +
> +struct hidma_mgmt_fileinfo {
> + char *name;
> + int mode;
> + int (*get)(struct hidma_mgmt_dev *mdev);
> + int (*set)(struct hidma_mgmt_dev *mdev, u64 val);
> +};
> +
> +#define IMPLEMENT_GETSET(name) \
> +static int get_##name(struct hidma_mgmt_dev *mdev) \
> +{ \
> + return mdev->name; \
> +} \
> +static int set_##name(struct hidma_mgmt_dev *mdev, u64 val) \
> +{ \
> + u64 tmp; \
> + int rc; \
> + \
> + tmp = mdev->name; \
> + mdev->name = val; \
> + rc = hidma_mgmt_setup(mdev); \
> + if (rc) \
> + mdev->name = tmp; \
> + return rc; \
> +}
> +
> +#define DECLARE_ATTRIBUTE(name, mode) \
> + {#name, mode, get_##name, set_##name}
> +
> +IMPLEMENT_GETSET(hw_version_major)
> +IMPLEMENT_GETSET(hw_version_minor)
> +IMPLEMENT_GETSET(max_wr_xactions)
> +IMPLEMENT_GETSET(max_rd_xactions)
> +IMPLEMENT_GETSET(max_write_request)
> +IMPLEMENT_GETSET(max_read_request)
> +IMPLEMENT_GETSET(dma_channels)
> +IMPLEMENT_GETSET(chreset_timeout_cycles)
> +
> +static int set_priority(struct hidma_mgmt_dev *mdev, unsigned int i, u64 val)
> +{
> + u64 tmp;
> + int rc;
> +
> + if (i > mdev->dma_channels)
> + return -EINVAL;
> +
> + tmp = mdev->priority[i];
> + mdev->priority[i] = val;
> + rc = hidma_mgmt_setup(mdev);
> + if (rc)
> + mdev->priority[i] = tmp;
> + return rc;
> +}
> +
> +static int set_weight(struct hidma_mgmt_dev *mdev, unsigned int i, u64 val)
> +{
> + u64 tmp;
> + int rc;
> +
> + if (i > mdev->dma_channels)
> + return -EINVAL;
> +
> + tmp = mdev->weight[i];
> + mdev->weight[i] = val;
> + rc = hidma_mgmt_setup(mdev);
> + if (rc)
> + mdev->weight[i] = tmp;
> + return rc;
> +}
> +
> +static struct hidma_mgmt_fileinfo hidma_mgmt_files[] = {
> + DECLARE_ATTRIBUTE(hw_version_major, S_IRUGO),
> + DECLARE_ATTRIBUTE(hw_version_minor, S_IRUGO),
> + DECLARE_ATTRIBUTE(dma_channels, S_IRUGO),
> + DECLARE_ATTRIBUTE(chreset_timeout_cycles, S_IRUGO),
> + DECLARE_ATTRIBUTE(max_wr_xactions, (S_IRUGO|S_IWUGO)),
> + DECLARE_ATTRIBUTE(max_rd_xactions, (S_IRUGO|S_IWUGO)),
> + DECLARE_ATTRIBUTE(max_write_request, (S_IRUGO|S_IWUGO)),
> + DECLARE_ATTRIBUTE(max_read_request, (S_IRUGO|S_IWUGO)),
> +};
> +
> +static ssize_t show_values(struct device *dev, struct device_attribute *attr,
> + char *buf)
> +{
> + struct platform_device *pdev = to_platform_device(dev);
> + struct hidma_mgmt_dev *mdev = platform_get_drvdata(pdev);
> + unsigned int i;
> +
> + buf[0] = 0;
> +
> + for (i = 0; i < ARRAY_SIZE(hidma_mgmt_files); i++) {
> + if (strcmp(attr->attr.name, hidma_mgmt_files[i].name) == 0) {
> + sprintf(buf, "%d\n", hidma_mgmt_files[i].get(mdev));
> + goto done;

break; ?

> + }
> + }
> +done:
> + return strlen(buf);
> +}
> +
> +static ssize_t set_values(struct device *dev, struct device_attribute *attr,
> + const char *buf, size_t count)
> +{
> + struct platform_device *pdev = to_platform_device(dev);
> + struct hidma_mgmt_dev *mdev = platform_get_drvdata(pdev);
> + unsigned long tmp;
> + unsigned int i;
> + int rc;
> +
> + rc = kstrtoul(buf, 0, &tmp);
> + if (rc)
> + return rc;
> +
> + for (i = 0; i < ARRAY_SIZE(hidma_mgmt_files); i++) {
> + if (strcmp(attr->attr.name, hidma_mgmt_files[i].name) == 0) {
> + rc = hidma_mgmt_files[i].set(mdev, tmp);
> + if (rc)
> + return rc;
> +
> + goto done;

Ditto.

> + }
> + }
> +done:
> + return count;
> +}
> +
> +static ssize_t show_values_channel(struct kobject *kobj,
> + struct kobj_attribute *attr, char *buf)
> +{
> + struct hidma_chan_attr *chattr;
> + struct hidma_mgmt_dev *mdev;
> +
> + buf[0] = 0;
> + chattr = container_of(attr, struct hidma_chan_attr, attr);
> + mdev = chattr->mdev;
> + if (strcmp(attr->attr.name, "priority") == 0) {
> + sprintf(buf, "%d\n", mdev->priority[chattr->index]);
> + goto done;
> + }
> +
> + if (strcmp(attr->attr.name, "weight") == 0) {
> + sprintf(buf, "%d\n", mdev->weight[chattr->index]);
> + goto done;
> + }
> +
> +done:
> + return strlen(buf);
> +}
> +
> +static ssize_t set_values_channel(struct kobject *kobj,
> + struct kobj_attribute *attr, const char *buf, size_t count)
> +{
> + struct hidma_chan_attr *chattr;
> + struct hidma_mgmt_dev *mdev;
> + unsigned long tmp;
> + int rc;
> +
> + chattr = container_of(attr, struct hidma_chan_attr, attr);
> + mdev = chattr->mdev;
> +
> + rc = kstrtoul(buf, 0, &tmp);
> + if (rc)
> + return rc;
> +
> + if (strcmp(attr->attr.name, "priority") == 0) {
> + rc = set_priority(mdev, chattr->index, tmp);
> + if (rc)
> + return rc;
> + goto done;
> + }
> +
> + if (strcmp(attr->attr.name, "weight") == 0) {
> + rc = set_weight(mdev, chattr->index, tmp);
> + if (rc)
> + return rc;
> + }
> +done:
> + return count;
> +}
> +
> +static int create_sysfs_entry(struct hidma_mgmt_dev *dev, char *name, int mode)
> +{
> + struct device_attribute *attrs;
> + char *name_copy;
> +
> + attrs = devm_kmalloc(&dev->pdev->dev,
> + sizeof(struct device_attribute), GFP_KERNEL);
> + if (!attrs)
> + return -ENOMEM;
> +
> + name_copy = devm_kstrdup(&dev->pdev->dev, name, GFP_KERNEL);
> + if (!name_copy)
> + return -ENOMEM;
> +
> + attrs->attr.name = name_copy;
> + attrs->attr.mode = mode;
> + attrs->show = show_values;
> + attrs->store = set_values;
> + sysfs_attr_init(&attrs->attr);
> +
> + return device_create_file(&dev->pdev->dev, attrs);
> +}
> +
> +static int create_sysfs_entry_channel(struct hidma_mgmt_dev *mdev, char *name,
> + int mode, int index, struct kobject *parent)
> +{
> + struct hidma_chan_attr *chattr;
> + char *name_copy;
> +
> + chattr = devm_kmalloc(&mdev->pdev->dev, sizeof(*chattr), GFP_KERNEL);
> + if (!chattr)
> + return -ENOMEM;
> +
> + name_copy = devm_kstrdup(&mdev->pdev->dev, name, GFP_KERNEL);
> + if (!name_copy)
> + return -ENOMEM;
> +
> + chattr->mdev = mdev;
> + chattr->index = index;
> + chattr->attr.attr.name = name_copy;
> + chattr->attr.attr.mode = mode;
> + chattr->attr.show = show_values_channel;
> + chattr->attr.store = set_values_channel;
> + sysfs_attr_init(&chattr->attr.attr);
> +
> + return sysfs_create_file(parent, &chattr->attr.attr);
> +}
> +
> +int hidma_mgmt_init_sys(struct hidma_mgmt_dev *mdev)
> +{
> + unsigned int i;
> + int rc;
> + int required;
> +
> + required = sizeof(*mdev->chroots) * mdev->dma_channels;
> + mdev->chroots = devm_kmalloc(&mdev->pdev->dev, required, GFP_KERNEL);
> + if (!mdev->chroots)
> + return -ENOMEM;
> +
> + /* create each channel directory here */
> + for (i = 0; i < mdev->dma_channels; i++) {
> + char name[10];
> +
> + snprintf(name, sizeof(name), "chan%d", i);
> + mdev->chroots[i] = kobject_create_and_add(name,
> + &mdev->pdev->dev.kobj);
> + if (!mdev->chroots[i])
> + return -ENOMEM;
> + }
> +
> + /* populate common parameters */
> + for (i = 0; i < ARRAY_SIZE(hidma_mgmt_files); i++) {
> + rc = create_sysfs_entry(mdev, hidma_mgmt_files[i].name,
> + hidma_mgmt_files[i].mode);
> + if (rc)
> + return rc;
> + }
> +
> + /* populate parameters that are per channel */
> + for (i = 0; i < mdev->dma_channels; i++) {
> + rc = create_sysfs_entry_channel(mdev, "priority",
> + (S_IRUGO|S_IWUGO), i, mdev->chroots[i]);
> + if (rc)
> + return rc;
> +
> + rc = create_sysfs_entry_channel(mdev, "weight",
> + (S_IRUGO|S_IWUGO), i, mdev->chroots[i]);
> + if (rc)
> + return rc;
> + }
> +
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(hidma_mgmt_init_sys);
> --
> Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project
>



--
With Best Regards,
Andy Shevchenko

2015-11-23 18:49:44

by Sinan Kaya

[permalink] [raw]
Subject: Re: [PATCH V7 2/3] dma: add Qualcomm Technologies HIDMA management driver

On 11/23/2015 1:06 PM, Andy Shevchenko wrote:
> Thanks for an update!
>
> Looks cool! Though there are still a few spelling errors and style nitpicks
> (you may ignore them if you wish) below.
>
> Reviewed-by: Andy Shevchenko <[email protected]>

Thanks, I'll post an updated version next week.

--
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a
Linux Foundation Collaborative Project

2015-11-25 21:42:41

by Andy Gross

[permalink] [raw]
Subject: Re: [PATCH V7 1/3] dma: qcom_bam_dma: move to qcom directory

On Sun, Nov 22, 2015 at 09:28:23PM -0500, Sinan Kaya wrote:
> Creating a QCOM directory for all QCOM DMA source files.
>
> Signed-off-by: Sinan Kaya <[email protected]>
> ---

Kind of nice having a different directory.

Reviewed-by: Andy Gross <[email protected]>

2015-11-28 15:06:50

by Sinan Kaya

[permalink] [raw]
Subject: Re: [PATCH V7 1/3] dma: qcom_bam_dma: move to qcom directory

On 11/25/2015 4:42 PM, Andy Gross wrote:
> On Sun, Nov 22, 2015 at 09:28:23PM -0500, Sinan Kaya wrote:
>> Creating a QCOM directory for all QCOM DMA source files.
>>
>> Signed-off-by: Sinan Kaya <[email protected]>
>> ---
>
> Kind of nice having a different directory.
>
> Reviewed-by: Andy Gross <[email protected]>
>
thanks

--
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a
Linux Foundation Collaborative Project

2015-11-30 08:18:49

by Vinod Koul

[permalink] [raw]
Subject: Re: [PATCH V7 2/3] dma: add Qualcomm Technologies HIDMA management driver

On Sun, Nov 22, 2015 at 09:28:24PM -0500, Sinan Kaya wrote:

> +++ b/Documentation/ABI/testing/sysfs-platform-hidma-mgmt
> @@ -0,0 +1,97 @@
> +What: /sys/devices/platform/hidma-mgmt*/chan*/priority
> + /sys/devices/platform/QCOM8060:*/chan*/priority
> +Date: Nov 2015
> +KernelVersion: 4.4
> +Contact: "Sinan Kaya <[email protected]>"
> +Description:
> + Contains either 0 or 1 and indicates if the DMA channel is a
> + low priority (0) or high priority (1) channel.

What is purpose of adding sysfs entries here ?

> +
> +#define QOS_N_OFFSET 0x300
> +#define CFG_OFFSET 0x400
> +#define MAX_BUS_REQ_LEN_OFFSET 0x41C
> +#define MAX_XACTIONS_OFFSET 0x420
> +#define HW_VERSION_OFFSET 0x424
> +#define CHRESET_TIMEOUT_OFFSET 0x418
> +
> +#define MAX_WR_XACTIONS_MASK GENMASK(4, 0)
> +#define MAX_RD_XACTIONS_MASK GENMASK(4, 0)
> +#define WEIGHT_MASK GENMASK(6, 0)
> +#define MAX_BUS_REQ_LEN_MASK GENMASK(15, 0)
> +#define CHRESET_TIMEOUUT_MASK GENMASK(19, 0)
> +
> +#define MAX_WR_XACTIONS_BIT_POS 16
> +#define MAX_BUS_WR_REQ_BIT_POS 16
> +#define WRR_BIT_POS 8
> +#define PRIORITY_BIT_POS 15
> +
> +#define AUTOSUSPEND_TIMEOUT 2000
> +#define MAX_CHANNEL_WEIGHT 15

These names are quite generic and prone to collide with generic names,
please prefix them with your driver name

> +}
> +EXPORT_SYMBOL_GPL(hidma_mgmt_setup);

Why is this exported or rather who would be users of this?

> +static int hidma_mgmt_probe(struct platform_device *pdev)
> +{
> + struct hidma_mgmt_dev *mgmtdev;
> + struct resource *res;
> + void __iomem *virtaddr;
> + int irq;
> + int rc;
> + u32 val;
> +
> + pm_runtime_set_autosuspend_delay(&pdev->dev, AUTOSUSPEND_TIMEOUT);
> + pm_runtime_use_autosuspend(&pdev->dev);
> + pm_runtime_set_active(&pdev->dev);
> + pm_runtime_enable(&pdev->dev);

at this time pm core will treat device as fully enabled and pm methods can
be invoked, but you are not ready yet right. Typically these are done at the
end of the probe unless you have a reason...

> +static ssize_t show_values(struct device *dev, struct device_attribute *attr,
> + char *buf)

Please fix the coding style here and other places as well. Specifically
please read Chapter 2

--
~Vinod

2015-11-30 08:56:57

by Vinod Koul

[permalink] [raw]
Subject: Re: [PATCH V7 3/3] dma: add Qualcomm Technologies HIDMA channel driver

On Sun, Nov 22, 2015 at 09:28:25PM -0500, Sinan Kaya wrote:
> This patch adds support for hidma engine. The driver consists
> of two logical blocks. The DMA engine interface and the
> low-level interface. The hardware only supports memcpy/memset
> and this driver only support memcpy interface. HW and driver
> doesn't support slave interface.
>
> Signed-off-by: Sinan Kaya <[email protected]>
> ---
> .../devicetree/bindings/dma/qcom_hidma.txt | 18 +
> drivers/dma/qcom/Kconfig | 10 +
> drivers/dma/qcom/Makefile | 2 +
> drivers/dma/qcom/hidma.c | 706 ++++++++++++++++
> drivers/dma/qcom/hidma.h | 157 ++++
> drivers/dma/qcom/hidma_dbg.c | 217 +++++
> drivers/dma/qcom/hidma_ll.c | 924 +++++++++++++++++++++
> 7 files changed, 2034 insertions(+)

This was one of the reasons for the slow review of this. I started a few times
but the large patches kept getting pushed back.

Please help reviewers by splitting your code, which, looking at this, could
have been done

> +#include <linux/dmaengine.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/err.h>
> +#include <linux/init.h>
> +#include <linux/interrupt.h>
> +#include <linux/list.h>
> +#include <linux/module.h>
> +#include <linux/platform_device.h>
> +#include <linux/slab.h>
> +#include <linux/spinlock.h>
> +#include <linux/of_dma.h>
> +#include <linux/property.h>
> +#include <linux/delay.h>
> +#include <linux/highmem.h>
> +#include <linux/io.h>
> +#include <linux/sched.h>
> +#include <linux/wait.h>
> +#include <linux/acpi.h>
> +#include <linux/irq.h>
> +#include <linux/atomic.h>
> +#include <linux/pm_runtime.h>

Do you need so many headers...?

> +MODULE_PARM_DESC(event_channel_idx,
> + "event channel index array for the notifications");

Can you explain this parameter in detail pls

> +static void hidma_callback(void *data)
> +{
> + struct hidma_desc *mdesc = data;
> + struct hidma_chan *mchan = to_hidma_chan(mdesc->desc.chan);
> + struct dma_device *ddev = mchan->chan.device;
> + struct hidma_dev *dmadev = to_hidma_dev(ddev);
> + unsigned long irqflags;
> + bool queued = false;
> +
> + spin_lock_irqsave(&mchan->lock, irqflags);
> + if (mdesc->node.next) {
> + /* Delete from the active list, add to completed list */
> + list_move_tail(&mdesc->node, &mchan->completed);
> + queued = true;
> + }
> + spin_unlock_irqrestore(&mchan->lock, irqflags);
> +
> + hidma_process_completed(dmadev);
> +
> + if (queued) {
> + pm_runtime_mark_last_busy(dmadev->ddev.dev);
> + pm_runtime_put_autosuspend(dmadev->ddev.dev);
> + }
> +}

Callback typically refers to the client callback upon dma completion; can
you explain what you are trying to do here?

> +static int hidma_chan_init(struct hidma_dev *dmadev, u32 dma_sig)
> +{
> + struct hidma_chan *mchan;
> + struct dma_device *ddev;
> +
> + mchan = devm_kzalloc(dmadev->ddev.dev, sizeof(*mchan), GFP_KERNEL);
> + if (!mchan)
> + return -ENOMEM;
> +
> + ddev = &dmadev->ddev;
> + mchan->dma_sig = dma_sig;
> + mchan->dmadev = dmadev;
> + mchan->chan.device = ddev;
> + dma_cookie_init(&mchan->chan);
> +
> + INIT_LIST_HEAD(&mchan->free);
> + INIT_LIST_HEAD(&mchan->prepared);
> + INIT_LIST_HEAD(&mchan->active);
> + INIT_LIST_HEAD(&mchan->completed);
> +
> + spin_lock_init(&mchan->lock);
> + list_add_tail(&mchan->chan.device_node, &ddev->channels);
> + dmadev->ddev.chancnt++;

Have you looked at vchan implementation and moving list handling to this
layer ?

> + return 0;
> +}
> +
> +static void hidma_issue_pending(struct dma_chan *dmach)
> +{
> + struct hidma_chan *mchan = to_hidma_chan(dmach);
> + struct hidma_dev *dmadev = mchan->dmadev;
> +
> + /* PM will be released in hidma_callback function. */
> + pm_runtime_get_sync(dmadev->ddev.dev);

Oh no, this is not allowed. pm_runtime_get_sync() can sleep and
issue_pending can be invoked from atomic context.

> +static enum dma_status hidma_tx_status(struct dma_chan *dmach,
> + dma_cookie_t cookie, struct dma_tx_state *txstate)
> +{
> + struct hidma_chan *mchan = to_hidma_chan(dmach);
> + enum dma_status ret;
> +
> + if (mchan->paused)
> + ret = DMA_PAUSED;

This is not quite right. The status query is for a descriptor and NOT for a
channel. Here, if the descriptor queried was the one running then it would be
paused; for the rest of the pending txns it would be DMA_IN_PROGRESS.

> +static dma_cookie_t hidma_tx_submit(struct dma_async_tx_descriptor *txd)
> +{
> + struct hidma_chan *mchan = to_hidma_chan(txd->chan);
> + struct hidma_dev *dmadev = mchan->dmadev;
> + struct hidma_desc *mdesc;
> + unsigned long irqflags;
> + dma_cookie_t cookie;
> +
> + if (!hidma_ll_isenabled(dmadev->lldev))
> + return -ENODEV;

is this not too late for this check, you should fail this on prepare
method


> +static int hidma_alloc_chan_resources(struct dma_chan *dmach)
> +{
> + struct hidma_chan *mchan = to_hidma_chan(dmach);
> + struct hidma_dev *dmadev = mchan->dmadev;
> + struct hidma_desc *mdesc, *tmp;
> + unsigned long irqflags;
> + LIST_HEAD(descs);
> + unsigned int i;
> + int rc = 0;
> +
> + if (mchan->allocated)
> + return 0;
> +
> + /* Alloc descriptors for this channel */
> + for (i = 0; i < dmadev->nr_descriptors; i++) {
> + mdesc = kzalloc(sizeof(struct hidma_desc), GFP_KERNEL);

GFP_NOWAIT pls, this can be called from atomic context

> + if (!mdesc) {
> + rc = -ENOMEM;
> + break;
> + }
> + dma_async_tx_descriptor_init(&mdesc->desc, dmach);
> + mdesc->desc.flags = DMA_CTRL_ACK;

what is the purpose of setting this flag here

> +static void hidma_free_chan_resources(struct dma_chan *dmach)
> +{
> + struct hidma_chan *mchan = to_hidma_chan(dmach);
> + struct hidma_dev *mdma = mchan->dmadev;
> + struct hidma_desc *mdesc, *tmp;
> + unsigned long irqflags;
> + LIST_HEAD(descs);
> +
> + if (!list_empty(&mchan->prepared) || !list_empty(&mchan->active) ||
> + !list_empty(&mchan->completed)) {
> + /*
> + * We have unfinished requests waiting.
> + * Terminate the request from the hardware.

that sounds like a bug.. free should be called when txns are completed/aborted

> + */
> + hidma_cleanup_pending_tre(mdma->lldev, ERR_INFO_SW,
> + ERR_CODE_UNEXPECTED_TERMINATE);
> +
> + /* Give enough time for completions to be called. */
> + msleep(100);

should we not poll/read after delay, also we are still in atomic context

> + }
> +
> + spin_lock_irqsave(&mchan->lock, irqflags);
> + /* Channel must be idle */

sorry I do not like that assumption

> + /* return all user requests */
> + list_for_each_entry_safe(mdesc, tmp, &list, node) {
> + struct dma_async_tx_descriptor *txd = &mdesc->desc;
> + dma_async_tx_callback callback = mdesc->desc.callback;
> + void *param = mdesc->desc.callback_param;
> + enum dma_status status;
> +
> + dma_descriptor_unmap(txd);
> +
> + status = hidma_ll_status(dmadev->lldev, mdesc->tre_ch);
> + /*
> + * The API requires that no submissions are done from a
> + * callback, so we don't need to drop the lock here
> + */

That is a correct assumption for async_tx, which is deprecated now.
In fact the converse is true for the rest!

> +static int hidma_pause(struct dma_chan *chan)
> +{
> + struct hidma_chan *mchan;
> + struct hidma_dev *dmadev;
> +
> + mchan = to_hidma_chan(chan);
> + dmadev = to_hidma_dev(mchan->chan.device);
> + if (!mchan->paused) {
> + pm_runtime_get_sync(dmadev->ddev.dev);
> + if (hidma_ll_pause(dmadev->lldev))
> + dev_warn(dmadev->ddev.dev, "channel did not stop\n");
> + mchan->paused = true;
> + pm_runtime_mark_last_busy(dmadev->ddev.dev);
> + pm_runtime_put_autosuspend(dmadev->ddev.dev);
> + }
> + return 0;
> +}

This is interesting, we associate pause/resume for slave dma ops and not for
memcpy ops, what is the reason for adding pause/resume here?

> +static int hidma_remove(struct platform_device *pdev)
> +{
> + struct hidma_dev *dmadev = platform_get_drvdata(pdev);
> +
> + pm_runtime_get_sync(dmadev->ddev.dev);
> + dma_async_device_unregister(&dmadev->ddev);
> + hidma_debug_uninit(dmadev);
> + hidma_ll_uninit(dmadev->lldev);
> + hidma_free(dmadev);
> +
> + dev_info(&pdev->dev, "HI-DMA engine removed\n");
> + pm_runtime_put_sync_suspend(&pdev->dev);
> + pm_runtime_disable(&pdev->dev);

and your irq is still active and can invoke your irq handler!

> +
> + return 0;
> +}
> +
> +#if IS_ENABLED(CONFIG_ACPI)
> +static const struct acpi_device_id hidma_acpi_ids[] = {
> + {"QCOM8061"},
Qcom and ACPI, that is v interesting combination !


--
~Vinod

2015-11-30 14:42:26

by Sinan Kaya

[permalink] [raw]
Subject: Re: [PATCH V7 2/3] dma: add Qualcomm Technologies HIDMA management driver

On 11/30/2015 3:21 AM, Vinod Koul wrote:
> On Sun, Nov 22, 2015 at 09:28:24PM -0500, Sinan Kaya wrote:
>
>> +++ b/Documentation/ABI/testing/sysfs-platform-hidma-mgmt
>> @@ -0,0 +1,97 @@
>> +What: /sys/devices/platform/hidma-mgmt*/chan*/priority
>> + /sys/devices/platform/QCOM8060:*/chan*/priority
>> +Date: Nov 2015
>> +KernelVersion: 4.4
>> +Contact: "Sinan Kaya <[email protected]>"
>> +Description:
>> + Contains either 0 or 1 and indicates if the DMA channel is a
>> + low priority (0) or high priority (1) channel.
>
> What is purpose of adding sysfs entries here ?
>

The goal is to run hidma channels in the guest machines and have the
management driver adjust the runtime characteristics of the channels,
such as round-robin weight, priority, max read request, etc. All of
these are runtime configurable.

There will be a userspace application that will configure this on behalf
of the system administrator via sysfs; for example, writing 1 to the
chan0/priority file under the management device's sysfs node marks
channel 0 as a high priority channel. The system administrator will
decide how to allocate hardware resources to the guest machines.


>> +
>> +#define QOS_N_OFFSET 0x300
>> +#define CFG_OFFSET 0x400
>> +#define MAX_BUS_REQ_LEN_OFFSET 0x41C
>> +#define MAX_XACTIONS_OFFSET 0x420
>> +#define HW_VERSION_OFFSET 0x424
>> +#define CHRESET_TIMEOUT_OFFSET 0x418
>> +
>> +#define MAX_WR_XACTIONS_MASK GENMASK(4, 0)
>> +#define MAX_RD_XACTIONS_MASK GENMASK(4, 0)
>> +#define WEIGHT_MASK GENMASK(6, 0)
>> +#define MAX_BUS_REQ_LEN_MASK GENMASK(15, 0)
>> +#define CHRESET_TIMEOUUT_MASK GENMASK(19, 0)
>> +
>> +#define MAX_WR_XACTIONS_BIT_POS 16
>> +#define MAX_BUS_WR_REQ_BIT_POS 16
>> +#define WRR_BIT_POS 8
>> +#define PRIORITY_BIT_POS 15
>> +
>> +#define AUTOSUSPEND_TIMEOUT 2000
>> +#define MAX_CHANNEL_WEIGHT 15
>
> These names are quite generic and prone to collide with generic names,
> please prefix them with your driver name
>

OK

>> +}
>> +EXPORT_SYMBOL_GPL(hidma_mgmt_setup);
>
> Why is this exported or rather who would be users of this?

This driver consists of two files, hidma_mgmt.c and hidma_mgmt_sys.c.
I'm calling hidma_mgmt_setup from hidma_mgmt_sys.c when a parameter such
as priority or weight is changed, in order to reconfigure the hardware. I
got linker errors without this export since the function lives in
hidma_mgmt.c.

>
>> +static int hidma_mgmt_probe(struct platform_device *pdev)
>> +{
>> + struct hidma_mgmt_dev *mgmtdev;
>> + struct resource *res;
>> + void __iomem *virtaddr;
>> + int irq;
>> + int rc;
>> + u32 val;
>> +
>> + pm_runtime_set_autosuspend_delay(&pdev->dev, AUTOSUSPEND_TIMEOUT);
>> + pm_runtime_use_autosuspend(&pdev->dev);
>> + pm_runtime_set_active(&pdev->dev);
>> + pm_runtime_enable(&pdev->dev);
>
> at this time pm core will treat device as fully enabled and pm methods can
> be invoked, but you are not ready yet right. Typically these are done at the
> end of the probe unless you have a reason...

I need it here because the clocks are declared as ACPI power resources.
The kernel is turning off all power resources during initialization. In
order for this code to touch the hardware, I need to call enable so that
clocks are enabled once again.

>
>> +static ssize_t show_values(struct device *dev, struct device_attribute *attr,
>> + char *buf)
>
> Please fix the coding style here and other places as well.

what's the problem here?

> Specifically
> please read Chapter 2
>

Why is checkpatch not complaining about any of the coding style issues?
I'm checking my code with checkpatch before submitting. Is there any
other tool that would catch this?

--
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a
Linux Foundation Collaborative Project

2015-11-30 20:06:45

by Sinan Kaya

[permalink] [raw]
Subject: Re: [PATCH V7 3/3] dma: add Qualcomm Technologies HIDMA channel driver

On 11/30/2015 3:59 AM, Vinod Koul wrote:
> On Sun, Nov 22, 2015 at 09:28:25PM -0500, Sinan Kaya wrote:
>> This patch adds support for hidma engine. The driver consists
>> of two logical blocks. The DMA engine interface and the
>> low-level interface. The hardware only supports memcpy/memset
>> and this driver only support memcpy interface. HW and driver
>> doesn't support slave interface.
>>
>> Signed-off-by: Sinan Kaya <[email protected]>
>> ---
>> .../devicetree/bindings/dma/qcom_hidma.txt | 18 +
>> drivers/dma/qcom/Kconfig | 10 +
>> drivers/dma/qcom/Makefile | 2 +
>> drivers/dma/qcom/hidma.c | 706 ++++++++++++++++
>> drivers/dma/qcom/hidma.h | 157 ++++
>> drivers/dma/qcom/hidma_dbg.c | 217 +++++
>> drivers/dma/qcom/hidma_ll.c | 924 +++++++++++++++++++++
>> 7 files changed, 2034 insertions(+)
>
> This was one of the reasons for slow review of this. I started few times but
> large patches made this getting pushed back
>
> Pls help reviewers by splitting your code which looking at this could have
> been done

I have split the debugfs support from this patch to its own patch. Any
other idea on how else you'd break the code? I can take one more step
and separate the lower layer from the OS layer by using stub functions
on the initial commit.

>
>> +#include <linux/dmaengine.h>
>> +#include <linux/dma-mapping.h>
>> +#include <linux/err.h>
>> +#include <linux/init.h>
>> +#include <linux/interrupt.h>
>> +#include <linux/list.h>
>> +#include <linux/module.h>
>> +#include <linux/platform_device.h>
>> +#include <linux/slab.h>
>> +#include <linux/spinlock.h>
>> +#include <linux/of_dma.h>
>> +#include <linux/property.h>
>> +#include <linux/delay.h>
>> +#include <linux/highmem.h>
>> +#include <linux/io.h>
>> +#include <linux/sched.h>
>> +#include <linux/wait.h>
>> +#include <linux/acpi.h>
>> +#include <linux/irq.h>
>> +#include <linux/atomic.h>
>> +#include <linux/pm_runtime.h>
>
> Do you need so many headers...?

probably not, let me see what I can do about them.

>
>> +MODULE_PARM_DESC(event_channel_idx,
>> + "event channel index array for the notifications");
>
> Can you explain this parameter in detail pls

OK. I added the following comment to the code.

/*
 * Each DMA channel is associated with an event channel for interrupt
 * delivery. The event channel index usually comes from the firmware through
 * ACPI/DT. When a HIDMA channel is executed in the guest machine context
 * (QEMU), the device tree gets auto-generated based on the memory and IRQ
 * resources this driver uses on the host machine. Any device specific
 * parameter such as event-channel gets ignored by QEMU.
 * We are using this command line parameter to pass the event channel index
 * to the guest machine.
 */

>
>> +static void hidma_callback(void *data)
>> +{
>> + struct hidma_desc *mdesc = data;
>> + struct hidma_chan *mchan = to_hidma_chan(mdesc->desc.chan);
>> + struct dma_device *ddev = mchan->chan.device;
>> + struct hidma_dev *dmadev = to_hidma_dev(ddev);
>> + unsigned long irqflags;
>> + bool queued = false;
>> +
>> + spin_lock_irqsave(&mchan->lock, irqflags);
>> + if (mdesc->node.next) {
>> + /* Delete from the active list, add to completed list */
>> + list_move_tail(&mdesc->node, &mchan->completed);
>> + queued = true;
>> + }
>> + spin_unlock_irqrestore(&mchan->lock, irqflags);
>> +
>> + hidma_process_completed(dmadev);
>> +
>> + if (queued) {
>> + pm_runtime_mark_last_busy(dmadev->ddev.dev);
>> + pm_runtime_put_autosuspend(dmadev->ddev.dev);
>> + }
>> +}
>
> Callback is typically referred to client callback upon dma completion, can
> you explain what you are trying to do here

When a DMA transfer completes, the hidma_callback function in the
hidma.c file gets called from the lower layer (hidma_ll.c) in order to
move this request from the active queue to the completed queue.

hidma_process_completed eventually calls the "client callback" from there.

The PM stuff is for guaranteeing that the clocks are not turned off while
the HW is still working.

I grab the PM lock in issue_pending and release it in hidma_callback.

>
>> +static int hidma_chan_init(struct hidma_dev *dmadev, u32 dma_sig)
>> +{
>> + struct hidma_chan *mchan;
>> + struct dma_device *ddev;
>> +
>> + mchan = devm_kzalloc(dmadev->ddev.dev, sizeof(*mchan), GFP_KERNEL);
>> + if (!mchan)
>> + return -ENOMEM;
>> +
>> + ddev = &dmadev->ddev;
>> + mchan->dma_sig = dma_sig;
>> + mchan->dmadev = dmadev;
>> + mchan->chan.device = ddev;
>> + dma_cookie_init(&mchan->chan);
>> +
>> + INIT_LIST_HEAD(&mchan->free);
>> + INIT_LIST_HEAD(&mchan->prepared);
>> + INIT_LIST_HEAD(&mchan->active);
>> + INIT_LIST_HEAD(&mchan->completed);
>> +
>> + spin_lock_init(&mchan->lock);
>> + list_add_tail(&mchan->chan.device_node, &ddev->channels);
>> + dmadev->ddev.chancnt++;
>
> Have you looked at vchan implementation and moving list handling to this
> layer ?

I'll add vchan support later on another revision. The original code had
vchan support in it. It became a terminology mess. So, I removed it for
the moment.

Using vchan, I can take multiple clients and serve them in a single
channel later.

>
>> + return 0;
>> +}
>> +
>> +static void hidma_issue_pending(struct dma_chan *dmach)
>> +{
>> + struct hidma_chan *mchan = to_hidma_chan(dmach);
>> + struct hidma_dev *dmadev = mchan->dmadev;
>> +
>> + /* PM will be released in hidma_callback function. */
>> + pm_runtime_get_sync(dmadev->ddev.dev);
>
> Oh no, this is not allowed. pm_runtime_get_sync() can sleep and
> issue_pending can be invoked from atomic context.

OK. I didn't know that. I'll try pm_runtime_get first. If it fails, I'll
post a task to do the real job with pm_runtime_get_sync.

The clocks may be off by the time the request is made. I need to grab
the PM lock before accessing the HW registers.
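
Roughly what I have in mind is something like the sketch below. This is not
the posted code: the issue_work member, the hidma_issue_work() function and
the hidma_ll_start() helper are placeholders I'm using for illustration.

static void hidma_issue_work(struct work_struct *work)
{
	struct hidma_dev *dmadev = container_of(work, struct hidma_dev,
						issue_work);

	/* Process context: the synchronous get may sleep until the clocks
	 * (ACPI power resources) are back on, then kick the hardware.
	 */
	pm_runtime_get_sync(dmadev->ddev.dev);
	hidma_ll_start(dmadev->lldev);
}

static void hidma_issue_pending(struct dma_chan *dmach)
{
	struct hidma_chan *mchan = to_hidma_chan(dmach);
	struct hidma_dev *dmadev = mchan->dmadev;
	int status;

	/* May be called from atomic context, so take the PM reference
	 * without sleeping. It is dropped again in hidma_callback.
	 */
	status = pm_runtime_get(dmadev->ddev.dev);
	if (status == 1) {
		/* Device was already active: clocks are on, start the HW. */
		hidma_ll_start(dmadev->lldev);
	} else {
		/* Resume was only queued (0) or failed (< 0): drop this
		 * reference and let the work item take it synchronously.
		 */
		pm_runtime_put_noidle(dmadev->ddev.dev);
		schedule_work(&dmadev->issue_work);
	}
}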

>
>> +static enum dma_status hidma_tx_status(struct dma_chan *dmach,
>> + dma_cookie_t cookie, struct dma_tx_state *txstate)
>> +{
>> + struct hidma_chan *mchan = to_hidma_chan(dmach);
>> + enum dma_status ret;
>> +
>> + if (mchan->paused)
>> + ret = DMA_PAUSED;
>
> This is not quite right. The status query is for a descriptor and NOT for
> channel. Here if the descriptor queried was running then it would be paused
> for the rest pending txn, it would be DMA_IN_PROGRESS.

The channel can be paused by the hypervisor. If it is paused, then no
other transaction will go through, including the pending requests. That's
why I'm checking the state first.
>
>> +static dma_cookie_t hidma_tx_submit(struct dma_async_tx_descriptor *txd)
>> +{
>> + struct hidma_chan *mchan = to_hidma_chan(txd->chan);
>> + struct hidma_dev *dmadev = mchan->dmadev;
>> + struct hidma_desc *mdesc;
>> + unsigned long irqflags;
>> + dma_cookie_t cookie;
>> +
>> + if (!hidma_ll_isenabled(dmadev->lldev))
>> + return -ENODEV;
>
> is this not too late for this check, you should fail this on prepare
> method

The channels can be paused by the hypervisor at runtime after the guest
OS boot. The client might have allocated the channels during guest
machine boot. If a channel is paused and a client makes a request, the
client will never get the callback. By placing this check, I'm looking at
the runtime status of the channel and rejecting the transmit request
right away.

>
>
>> +static int hidma_alloc_chan_resources(struct dma_chan *dmach)
>> +{
>> + struct hidma_chan *mchan = to_hidma_chan(dmach);
>> + struct hidma_dev *dmadev = mchan->dmadev;
>> + struct hidma_desc *mdesc, *tmp;
>> + unsigned long irqflags;
>> + LIST_HEAD(descs);
>> + unsigned int i;
>> + int rc = 0;
>> +
>> + if (mchan->allocated)
>> + return 0;
>> +
>> + /* Alloc descriptors for this channel */
>> + for (i = 0; i < dmadev->nr_descriptors; i++) {
>> + mdesc = kzalloc(sizeof(struct hidma_desc), GFP_KERNEL);
>
> GFP_NOWAIT pls, this can be called from atomic context

I'll change it, but why would anyone allocate a channel in an interrupt
handler?

>
>> + if (!mdesc) {
>> + rc = -ENOMEM;
>> + break;
>> + }
>> + dma_async_tx_descriptor_init(&mdesc->desc, dmach);
>> + mdesc->desc.flags = DMA_CTRL_ACK;
>
> what is the purpose of setting this flag here

It means that the code only supports interrupts. Maybe you can correct
me here; I couldn't find very useful info about the DMA engine flags.
My understanding is that DMA_CTRL_ACK means the user wants an interrupt
based callback.

If DMA_CTRL_ACK is not specified, then the user wants to poll for the
results. The current code only supports interrupt based delivery for
callbacks. Polling support is another to-do on my list for small sized
transfers.

>
>> +static void hidma_free_chan_resources(struct dma_chan *dmach)
>> +{
>> + struct hidma_chan *mchan = to_hidma_chan(dmach);
>> + struct hidma_dev *mdma = mchan->dmadev;
>> + struct hidma_desc *mdesc, *tmp;
>> + unsigned long irqflags;
>> + LIST_HEAD(descs);
>> +
>> + if (!list_empty(&mchan->prepared) || !list_empty(&mchan->active) ||
>> + !list_empty(&mchan->completed)) {
>> + /*
>> + * We have unfinished requests waiting.
>> + * Terminate the request from the hardware.
>
> that sounds as bug.. free should be called when txn are completed/aborted

Let me see what other drivers are doing...

>
>> + */
>> + hidma_cleanup_pending_tre(mdma->lldev, ERR_INFO_SW,
>> + ERR_CODE_UNEXPECTED_TERMINATE);
>> +
>> + /* Give enough time for completions to be called. */
>> + msleep(100);
>
> should we not poll/read after delay, also we are still in atomic context
>
>> + }
>> +
>> + spin_lock_irqsave(&mchan->lock, irqflags);
>> + /* Channel must be idle */
>
> sorry I do not like that assumption
>
>> + /* return all user requests */
>> + list_for_each_entry_safe(mdesc, tmp, &list, node) {
>> + struct dma_async_tx_descriptor *txd = &mdesc->desc;
>> + dma_async_tx_callback callback = mdesc->desc.callback;
>> + void *param = mdesc->desc.callback_param;
>> + enum dma_status status;
>> +
>> + dma_descriptor_unmap(txd);
>> +
>> + status = hidma_ll_status(dmadev->lldev, mdesc->tre_ch);
>> + /*
>> + * The API requires that no submissions are done from a
>> + * callback, so we don't need to drop the lock here
>> + */
>
> That is correct assumption for async_tx and which is deprecated now.
> In fact the converse is true for rest!

I copied this function from another driver. The cookie/unmap business
seemed scary to me. Can you point me to a good/recent implementation if
this is deprecated?

>
>> +static int hidma_pause(struct dma_chan *chan)
>> +{
>> + struct hidma_chan *mchan;
>> + struct hidma_dev *dmadev;
>> +
>> + mchan = to_hidma_chan(chan);
>> + dmadev = to_hidma_dev(mchan->chan.device);
>> + if (!mchan->paused) {
>> + pm_runtime_get_sync(dmadev->ddev.dev);
>> + if (hidma_ll_pause(dmadev->lldev))
>> + dev_warn(dmadev->ddev.dev, "channel did not stop\n");
>> + mchan->paused = true;
>> + pm_runtime_mark_last_busy(dmadev->ddev.dev);
>> + pm_runtime_put_autosuspend(dmadev->ddev.dev);
>> + }
>> + return 0;
>> +}
>
> This is interesting, we associate pause/resume for slave dma ops and not for
> memcpy ops, what is the reason for adding pause/resume here?

Why is it limited to the slave device only? I can queue a bunch of
requests. I can pause and resume their execution with the current
implementation.

>
>> +static int hidma_remove(struct platform_device *pdev)
>> +{
>> + struct hidma_dev *dmadev = platform_get_drvdata(pdev);
>> +
>> + pm_runtime_get_sync(dmadev->ddev.dev);
>> + dma_async_device_unregister(&dmadev->ddev);
>> + hidma_debug_uninit(dmadev);
>> + hidma_ll_uninit(dmadev->lldev);
>> + hidma_free(dmadev);
>> +
>> + dev_info(&pdev->dev, "HI-DMA engine removed\n");
>> + pm_runtime_put_sync_suspend(&pdev->dev);
>> + pm_runtime_disable(&pdev->dev);
>
> and your irq is still active and can invoke your irq handler!
>

why? I shut down the IRQ in hidma_ll_uninit and request the interrupt
with devm_request_irq. As soon as this driver is removed, the interrupt
should be released by the managed code.

>> +
>> + return 0;
>> +}
>> +
>> +#if IS_ENABLED(CONFIG_ACPI)
>> +static const struct acpi_device_id hidma_acpi_ids[] = {
>> + {"QCOM8061"},
> Qcom and ACPI, that is v interesting combination !
>
>

Thanks for the detailed review.


--
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a
Linux Foundation Collaborative Project

2015-12-01 03:14:51

by Vinod Koul

[permalink] [raw]
Subject: Re: [PATCH V7 2/3] dma: add Qualcomm Technologies HIDMA management driver

On Mon, Nov 30, 2015 at 09:42:01AM -0500, Sinan Kaya wrote:

> >> +static int hidma_mgmt_probe(struct platform_device *pdev)
> >> +{
> >> + struct hidma_mgmt_dev *mgmtdev;
> >> + struct resource *res;
> >> + void __iomem *virtaddr;
> >> + int irq;
> >> + int rc;
> >> + u32 val;
> >> +
> >> + pm_runtime_set_autosuspend_delay(&pdev->dev, AUTOSUSPEND_TIMEOUT);
> >> + pm_runtime_use_autosuspend(&pdev->dev);
> >> + pm_runtime_set_active(&pdev->dev);
> >> + pm_runtime_enable(&pdev->dev);
> >
> > at this time pm core will treat device as fully enabled and pm methods can
> > be invoked, but you are not ready yet right. Typically these are done at the
> > end of the probe unless you have a reason...
>
> I need it here because the clocks are declared as ACPI power resources.
> The kernel is turning off all power resources during initialization. In
> order for this code to touch the hardware, I need to call enable so that
> clocks are enabled once again.

The question is: are your driver routines ready to be invoked by the pm
core?

>
> >
> >> +static ssize_t show_values(struct device *dev, struct device_attribute *attr,
> >> + char *buf)
> >
> > Please fix the coding style here and other places as well.
>
> what's the problem here?
>
> > Specifically
> > please read Chapter 2
> >
>
> Why is checkpatch not complaining about any of the coding style issues?
> I'm checking my code with checkpatch before submitting. Is there any
> other tool that would catch this?

So did you read the Chapter 2.. Quoting here

"Statements longer than 80 columns will be broken into sensible chunks,
unless
exceeding 80 columns significantly increases readability and does not hide
information. Descendants are always substantially shorter than the parent
and
are placed substantially to the right. The same applies to function headers
with a long argument list. However, never break user-visible strings such as
printk messages, because that breaks the ability to grep for them."

Your broken-out lines are not placed substantially to the right..
I do not think checkpatch is checking this..

--
~Vinod

2015-12-01 11:32:16

by Vinod Koul

[permalink] [raw]
Subject: Re: [PATCH V7 3/3] dma: add Qualcomm Technologies HIDMA channel driver

On Mon, Nov 30, 2015 at 03:06:36PM -0500, Sinan Kaya wrote:
> I have split the debugfs support from this patch to its own patch. Any
> other idea on how else you'd break the code? I can take one more step
> and separate the lower layer from the OS layer by using stub functions
> on the initial commit.

Yes the ll.c can be a separate patch and no need to add placeholders as
makefile can be last

> >
> >> +static enum dma_status hidma_tx_status(struct dma_chan *dmach,
> >> + dma_cookie_t cookie, struct dma_tx_state *txstate)
> >> +{
> >> + struct hidma_chan *mchan = to_hidma_chan(dmach);
> >> + enum dma_status ret;
> >> +
> >> + if (mchan->paused)
> >> + ret = DMA_PAUSED;
> >
> > This is not quite right. The status query is for a descriptor and NOT for
> > channel. Here if the descriptor queried was running then it would be paused
> > for the rest pending txn, it would be DMA_IN_PROGRESS.
>
> The channel can be paused by the hypervisor. If it is paused, then no
> other transaction will go through including the pending requests. That's
> why, I'm checking the state first.

You are missing the point. The channel can be paused, yes, but a descriptor
in the queue is not paused. The descriptor that is running is paused, yes.
There is a subtle difference between these.

> >> +static dma_cookie_t hidma_tx_submit(struct dma_async_tx_descriptor *txd)
> >> +{
> >> + struct hidma_chan *mchan = to_hidma_chan(txd->chan);
> >> + struct hidma_dev *dmadev = mchan->dmadev;
> >> + struct hidma_desc *mdesc;
> >> + unsigned long irqflags;
> >> + dma_cookie_t cookie;
> >> +
> >> + if (!hidma_ll_isenabled(dmadev->lldev))
> >> + return -ENODEV;
> >
> > is this not too late for this check, you should fail this on prepare
> > method
>
> The channels can be paused by the hypervisor at runtime after the guest
> OS boot. The client might have allocated the channels during guest
> machine boot. If a channel is paused and client makes a request, client
> will never get the callback. By placing this check, I'm looking at the
> runtime status of the channel and rejecting the transmit request right
> ahead.

In this case you already have the request given to you; here the request is
being submitted to the pending queue. If the hypervisor has paused you, the
entire txn queue is paused. But from API semantics I would argue that this
should be accepted and suffer the same fate as the already submitted txns.

>
> >
> >
> >> +static int hidma_alloc_chan_resources(struct dma_chan *dmach)
> >> +{
> >> + struct hidma_chan *mchan = to_hidma_chan(dmach);
> >> + struct hidma_dev *dmadev = mchan->dmadev;
> >> + struct hidma_desc *mdesc, *tmp;
> >> + unsigned long irqflags;
> >> + LIST_HEAD(descs);
> >> + unsigned int i;
> >> + int rc = 0;
> >> +
> >> + if (mchan->allocated)
> >> + return 0;
> >> +
> >> + /* Alloc descriptors for this channel */
> >> + for (i = 0; i < dmadev->nr_descriptors; i++) {
> >> + mdesc = kzalloc(sizeof(struct hidma_desc), GFP_KERNEL);
> >
> > GFP_NOWAIT pls, this can be called from atomic context
>
> I'll change it, but why would anyone allocate a channel in an interrupt
> handler?

remember this is dmaengine, where people traditionally wanted to offload
work from the CPU and were not allowed to sleep.

I think I am okay with this one..

>
> >
> >> + if (!mdesc) {
> >> + rc = -ENOMEM;
> >> + break;
> >> + }
> >> + dma_async_tx_descriptor_init(&mdesc->desc, dmach);
> >> + mdesc->desc.flags = DMA_CTRL_ACK;
> >
> > what is the purpose of setting this flag here
>
> It means that the code only supports interrupt. Maybe, you can correct
> me here. I couldn't see a very useful info about the DMA engine flags.
> My understanding is that DMA_CTRL_ACK means user wants an interrupt
> based callback.

Right, the user wants it; you are not the user though

>
> If DMA_CTRL_ACK is not specified, then user wants to poll for the
> results. The current code only supports the interrupt based delivery for
> callbacks. Polling support is another to-do in my list for small sized
> transfers.

See the documentation for flags and usage. The point here is that the user
should set this. I know some old drivers do this but that is not right.

>
> >
> >> +static void hidma_free_chan_resources(struct dma_chan *dmach)
> >> +{
> >> + struct hidma_chan *mchan = to_hidma_chan(dmach);
> >> + struct hidma_dev *mdma = mchan->dmadev;
> >> + struct hidma_desc *mdesc, *tmp;
> >> + unsigned long irqflags;
> >> + LIST_HEAD(descs);
> >> +
> >> + if (!list_empty(&mchan->prepared) || !list_empty(&mchan->active) ||
> >> + !list_empty(&mchan->completed)) {
> >> + /*
> >> + * We have unfinished requests waiting.
> >> + * Terminate the request from the hardware.
> >
> > that sounds as bug.. free should be called when txn are completed/aborted
>
> Let me see what other drivers are doing...
>
> >
> >> + */
> >> + hidma_cleanup_pending_tre(mdma->lldev, ERR_INFO_SW,
> >> + ERR_CODE_UNEXPECTED_TERMINATE);
> >> +
> >> + /* Give enough time for completions to be called. */
> >> + msleep(100);
> >
> > should we not poll/read after delay, also we are still in atomic context
> >
> >> + }
> >> +
> >> + spin_lock_irqsave(&mchan->lock, irqflags);
> >> + /* Channel must be idle */
> >
> > sorry I do not like that assumption
> >
> >> + /* return all user requests */
> >> + list_for_each_entry_safe(mdesc, tmp, &list, node) {
> >> + struct dma_async_tx_descriptor *txd = &mdesc->desc;
> >> + dma_async_tx_callback callback = mdesc->desc.callback;
> >> + void *param = mdesc->desc.callback_param;
> >> + enum dma_status status;
> >> +
> >> + dma_descriptor_unmap(txd);
> >> +
> >> + status = hidma_ll_status(dmadev->lldev, mdesc->tre_ch);
> >> + /*
> >> + * The API requires that no submissions are done from a
> >> + * callback, so we don't need to drop the lock here
> >> + */
> >
> > That is correct assumption for async_tx and which is deprecated now.
> > In fact the converse is true for rest!
>
> I copied this function from another driver. The cookie/unmap business
> seemed scary to me. Can you point me to a good/recent implementation if
> this is deprecated?


>
> >
> >> +static int hidma_pause(struct dma_chan *chan)
> >> +{
> >> + struct hidma_chan *mchan;
> >> + struct hidma_dev *dmadev;
> >> +
> >> + mchan = to_hidma_chan(chan);
> >> + dmadev = to_hidma_dev(mchan->chan.device);
> >> + if (!mchan->paused) {
> >> + pm_runtime_get_sync(dmadev->ddev.dev);
> >> + if (hidma_ll_pause(dmadev->lldev))
> >> + dev_warn(dmadev->ddev.dev, "channel did not stop\n");
> >> + mchan->paused = true;
> >> + pm_runtime_mark_last_busy(dmadev->ddev.dev);
> >> + pm_runtime_put_autosuspend(dmadev->ddev.dev);
> >> + }
> >> + return 0;
> >> +}
> >
> > This is interesting, we associate pause/resume for slave dma ops and not for
> > memcpy ops, what is the reason for adding pause/resume here?
>
> Why is it limited to the slave device only? I can queue a bunch of
> requests. I can pause and resume their execution with the current
> implementation.

Traditionally no one paused memcpy... People want the copy to be done
as fast as possible :)

>
> >
> >> +static int hidma_remove(struct platform_device *pdev)
> >> +{
> >> + struct hidma_dev *dmadev = platform_get_drvdata(pdev);
> >> +
> >> + pm_runtime_get_sync(dmadev->ddev.dev);
> >> + dma_async_device_unregister(&dmadev->ddev);
> >> + hidma_debug_uninit(dmadev);
> >> + hidma_ll_uninit(dmadev->lldev);
> >> + hidma_free(dmadev);
> >> +
> >> + dev_info(&pdev->dev, "HI-DMA engine removed\n");
> >> + pm_runtime_put_sync_suspend(&pdev->dev);
> >> + pm_runtime_disable(&pdev->dev);
> >
> > and your irq is still active and can invoke your irq handler!
> >
>
> why? I shutdown the IRQ in hidma_ll_uninit and request the interrupt
> with devm_request_irq. As soon as this driver is removed, the interrupt
> should be released by the managed code.

but there is a delay between that and the free getting called. Since you
don't free up explicitly, you can still have irqs and tasklets scheduled.

Recommendation is to free up the irq explicitly, or ensure none are
scheduled and the irq is disabled.

--
~Vinod

2015-12-01 21:16:19

by Sinan Kaya

[permalink] [raw]
Subject: Re: [PATCH V7 3/3] dma: add Qualcomm Technologies HIDMA channel driver

On 12/1/2015 6:34 AM, Vinod Koul wrote:
> On Mon, Nov 30, 2015 at 03:06:36PM -0500, Sinan Kaya wrote:
>> I have split the debugfs support from this patch to its own patch. Any
>> other idea on how else you'd break the code? I can take one more step
>> and separate the lower layer from the OS layer by using stub functions
>> on the initial commit.
>
> Yes the ll.c can be a separate patch and no need to add placeholders as
> makefile can be last
>

ok. I'll create a new patch file with the Makefile and hidma_ll.c only.

>>>
>>>> +static enum dma_status hidma_tx_status(struct dma_chan *dmach,
>>>> + dma_cookie_t cookie, struct dma_tx_state *txstate)
>>>> +{
>>>> + struct hidma_chan *mchan = to_hidma_chan(dmach);
>>>> + enum dma_status ret;
>>>> +
>>>> + if (mchan->paused)
>>>> + ret = DMA_PAUSED;
>>>
>>> This is not quite right. The status query is for a descriptor and NOT for
>>> channel. Here if the descriptor queried was running then it would be paused
>>> for the rest pending txn, it would be DMA_IN_PROGRESS.
>>
>> The channel can be paused by the hypervisor. If it is paused, then no
>> other transaction will go through including the pending requests. That's
>> why, I'm checking the state first.
>
> You are missing the point. Channel can be paused, yes but the descriptor
> is in queue and is not paused. The descriptor running is paused, yes.
> There is subtle difference between these

I'll follow your recommendation. PAUSE for the currently active
descriptor and DMA_IN_PROGRESS for the rest.
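
Something along these lines, I think (a rough sketch only; the "running"
pointer tracking the descriptor currently on the hardware is not in the
version posted here, I would have to add it):

static enum dma_status hidma_tx_status(struct dma_chan *dmach,
				       dma_cookie_t cookie,
				       struct dma_tx_state *txstate)
{
	struct hidma_chan *mchan = to_hidma_chan(dmach);
	enum dma_status ret;

	ret = dma_cookie_status(dmach, cookie, txstate);
	if (ret == DMA_COMPLETE)
		return ret;

	/* Only the descriptor the hardware was actually working on is
	 * reported as paused; everything still queued behind it stays
	 * DMA_IN_PROGRESS.
	 */
	if (mchan->paused && ret == DMA_IN_PROGRESS) {
		unsigned long flags;
		dma_cookie_t runcookie = -EINVAL;

		spin_lock_irqsave(&mchan->lock, flags);
		if (mchan->running)
			runcookie = mchan->running->desc.cookie;
		spin_unlock_irqrestore(&mchan->lock, flags);

		if (runcookie == cookie)
			ret = DMA_PAUSED;
	}

	return ret;
}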

>
>>>> +static dma_cookie_t hidma_tx_submit(struct dma_async_tx_descriptor *txd)
>>>> +{
>>>> + struct hidma_chan *mchan = to_hidma_chan(txd->chan);
>>>> + struct hidma_dev *dmadev = mchan->dmadev;
>>>> + struct hidma_desc *mdesc;
>>>> + unsigned long irqflags;
>>>> + dma_cookie_t cookie;
>>>> +
>>>> + if (!hidma_ll_isenabled(dmadev->lldev))
>>>> + return -ENODEV;
>>>
>>> is this not too late for this check, you should fail this on prepare
>>> method
>>
>> The channels can be paused by the hypervisor at runtime after the guest
>> OS boot. The client might have allocated the channels during guest
>> machine boot. If a channel is paused and client makes a request, client
>> will never get the callback. By placing this check, I'm looking at the
>> runtime status of the channel and rejecting the transmit request right
>> ahead.
>
> In this case you have request already given to you, here the request is
> submitted to the pending queue,

No, it is not. Putting it into the pending queue is done after this check,
not before.

I'm rejecting the queue request directly here as the HIDMA channel is no
longer functional. I'd rather do active error handling than passive error
handling. Debugging passive faults is much more difficult.

> if hypervisor has paused you, so the entire
> txn queue is paused. But from API semantics I would argue that this should be
> accepted and suffer the same fate as already submitted txns
>



>>
>>>
>>>
>>>> +static int hidma_alloc_chan_resources(struct dma_chan *dmach)
>>>> +{
>>>> + struct hidma_chan *mchan = to_hidma_chan(dmach);
>>>> + struct hidma_dev *dmadev = mchan->dmadev;
>>>> + struct hidma_desc *mdesc, *tmp;
>>>> + unsigned long irqflags;
>>>> + LIST_HEAD(descs);
>>>> + unsigned int i;
>>>> + int rc = 0;
>>>> +
>>>> + if (mchan->allocated)
>>>> + return 0;
>>>> +
>>>> + /* Alloc descriptors for this channel */
>>>> + for (i = 0; i < dmadev->nr_descriptors; i++) {
>>>> + mdesc = kzalloc(sizeof(struct hidma_desc), GFP_KERNEL);
>>>
>>> GFP_NOWAIT pls, this can be called from atomic context
>>
>> I'll change it, but why would anyone allocate a channel in an interrupt
>> handler?
>
> remember this is dmaengine, where people traditionally wanted to offload
> from cpu and were not allowed to sleep.
>
> I think am okay with this one..
>
>>
>>>
>>>> + if (!mdesc) {
>>>> + rc = -ENOMEM;
>>>> + break;
>>>> + }
>>>> + dma_async_tx_descriptor_init(&mdesc->desc, dmach);
>>>> + mdesc->desc.flags = DMA_CTRL_ACK;
>>>
>>> what is the purpose of setting this flag here
>>
>> It means that the code only supports interrupt. Maybe, you can correct
>> me here. I couldn't see a very useful info about the DMA engine flags.
>> My understanding is that DMA_CTRL_ACK means user wants an interrupt
>> based callback.
>
> Right user wants, you are not user though
>

Fair enough.

>>
>> If DMA_CTRL_ACK is not specified, then user wants to poll for the
>> results. The current code only supports the interrupt based delivery for
>> callbacks. Polling support is another to-do in my list for small sized
>> transfers.
>
> See the documentation for flags and usage. The point here is that user
> should set this. I know some old driver do this but that is not right
>

I'll remove it.

>>
>>>
>>>
>>>> +static int hidma_remove(struct platform_device *pdev)
>>>> +{
>>>> + struct hidma_dev *dmadev = platform_get_drvdata(pdev);
>>>> +
>>>> + pm_runtime_get_sync(dmadev->ddev.dev);
>>>> + dma_async_device_unregister(&dmadev->ddev);
>>>> + hidma_debug_uninit(dmadev);
>>>> + hidma_ll_uninit(dmadev->lldev);
>>>> + hidma_free(dmadev);
>>>> +
>>>> + dev_info(&pdev->dev, "HI-DMA engine removed\n");
>>>> + pm_runtime_put_sync_suspend(&pdev->dev);
>>>> + pm_runtime_disable(&pdev->dev);
>>>
>>> and your irq is still active and can invoke your irq handler!
>>>
>>
>> why? I shutdown the IRQ in hidma_ll_uninit and request the interrupt
>> with devm_request_irq. As soon as this driver is removed, the interrupt
>> should be released by the managed code.
>
> but there is a delay between that and free getting called. Since you don't
> free up explicitly you can still have irq and tasklets scheduled.
>
> Recommendation is that free up irq or ensure none are scheduled and
> disabled
>
I looked at other drivers. They seem to be disabling the IRQ right after
unregistering the DMA device from the DMA engine. I'll follow the same
approach by adding an explicit devm_free_irq call to the hidma_remove
function, roughly as sketched below.
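
Something like this (just a sketch; it assumes the irq number and the dev_id
that were used with devm_request_irq are kept in the hidma_dev structure):

static int hidma_remove(struct platform_device *pdev)
{
	struct hidma_dev *dmadev = platform_get_drvdata(pdev);

	pm_runtime_get_sync(dmadev->ddev.dev);
	dma_async_device_unregister(&dmadev->ddev);

	/* Release the interrupt explicitly so the handler can no longer
	 * run once we start tearing the rest down.
	 */
	devm_free_irq(dmadev->ddev.dev, dmadev->irq, dmadev->lldev);

	hidma_debug_uninit(dmadev);
	hidma_ll_uninit(dmadev->lldev);
	hidma_free(dmadev);

	dev_info(&pdev->dev, "HI-DMA engine removed\n");
	pm_runtime_put_sync_suspend(&pdev->dev);
	pm_runtime_disable(&pdev->dev);

	return 0;
}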


--
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a
Linux Foundation Collaborative Project

2015-12-02 04:57:42

by Sinan Kaya

[permalink] [raw]
Subject: Re: [PATCH V7 2/3] dma: add Qualcomm Technologies HIDMA management driver

On 11/30/2015 10:17 PM, Vinod Koul wrote:
> On Mon, Nov 30, 2015 at 09:42:01AM -0500, Sinan Kaya wrote:
>
>>>> +static int hidma_mgmt_probe(struct platform_device *pdev)
>>>> +{
>>>> + struct hidma_mgmt_dev *mgmtdev;
>>>> + struct resource *res;
>>>> + void __iomem *virtaddr;
>>>> + int irq;
>>>> + int rc;
>>>> + u32 val;
>>>> +
>>>> + pm_runtime_set_autosuspend_delay(&pdev->dev, AUTOSUSPEND_TIMEOUT);
>>>> + pm_runtime_use_autosuspend(&pdev->dev);
>>>> + pm_runtime_set_active(&pdev->dev);
>>>> + pm_runtime_enable(&pdev->dev);
>>>
>>> at this time pm core will treat device as fully enabled and pm methods can
>>> be invoked, but you are not ready yet right. Typically these are done at the
>>> end of the probe unless you have a reason...
>>
>> I need it here because the clocks are declared as ACPI power resources.
>> The kernel is turning off all power resources during initialization. In
>> order for this code to touch the hardware, I need to call enable so that
>> clocks are enabled once again.
>
> The question is are you ready in your driver routines to be invoked by pm
> core?
>

I don't have any support for the suspend and resume PM APIs. The only PM
interface I support is runtime PM. PM can turn the clocks on/off based
on the reference counts it maintains through the get/put APIs. Since PM
turns the clocks off during power up, before my driver loads, I do need
to take a runtime PM reference to re-enable them during HW initialization.
Then I let PM turn the clocks off again after the AUTOSUSPEND_TIMEOUT
when I'm done.
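The probe-time pattern described above is roughly the following sketch; the
register setup itself is elided, and AUTOSUSPEND_TIMEOUT is the driver's own
constant.

pm_runtime_set_autosuspend_delay(&pdev->dev, AUTOSUSPEND_TIMEOUT);
pm_runtime_use_autosuspend(&pdev->dev);
pm_runtime_set_active(&pdev->dev);
pm_runtime_enable(&pdev->dev);

/* Take a runtime PM reference so the ACPI power resources (clocks)
 * are switched back on before any register access below.
 */
pm_runtime_get_sync(&pdev->dev);

/* ... map registers, initialize the hardware, create sysfs entries ... */

/* Drop the reference; the clocks are turned off again once the
 * autosuspend timeout expires.
 */
pm_runtime_mark_last_busy(&pdev->dev);
pm_runtime_put_autosuspend(&pdev->dev);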

Is there any other interaction with the PM that I'm not aware of?

>>
>>>
>>>> +static ssize_t show_values(struct device *dev, struct device_attribute *attr,
>>>> + char *buf)
>>>
>>> Please fix the coding style here and other places as well.
>>
>> what's the problem here?
>>
>>> Specifically
>>> please read Chapter 2
>>>
>>
>> Why is checkpatch not complaining about any of the coding style issues?
>> I'm checking my code with checkpatch before submitting. Is there any
>> other tool that would catch this?
>
> So did you read the Chapter 2.. Quoting here
>

I did read Chapter 2. Maybe it is my lack of native English, but I don't
get from this sentence that function parameters need to be aligned to the
opening parenthesis.

> Descendants are always substantially shorter than the parent
> and are placed substantially to the right. The same applies to function headers
> with a long argument list.
>

I ran Lindent and manually cleaned up the junk it introduced. The result
is this:

static ssize_t show_values(struct device *dev, struct device_attribute
*attr,
char *buf)


> Your line breaks are not placed substantially to the right..
> I do not think checkpatch is checking this..
>
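For comparison, a continuation aligned to the opening parenthesis, which is
what is being asked for here, would look roughly like this (illustration
only, body omitted):

static ssize_t show_values(struct device *dev, struct device_attribute *attr,
                           char *buf)
{
	/* The point is only the continuation line above, which is
	 * indented to line up with the opening parenthesis.
	 */
	return 0;
}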


--
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a
Linux Foundation Collaborative Project

2015-12-02 19:04:12

by Sinan Kaya

[permalink] [raw]
Subject: Re: [PATCH V7 3/3] dma: add Qualcomm Technologies HIDMA channel driver

On 12/1/2015 4:16 PM, Sinan Kaya wrote:
>>>> >>>
>>>>> >>>> +static enum dma_status hidma_tx_status(struct dma_chan *dmach,
>>>>> >>>> + dma_cookie_t cookie, struct dma_tx_state *txstate)
>>>>> >>>> +{
>>>>> >>>> + struct hidma_chan *mchan = to_hidma_chan(dmach);
>>>>> >>>> + enum dma_status ret;
>>>>> >>>> +
>>>>> >>>> + if (mchan->paused)
>>>>> >>>> + ret = DMA_PAUSED;
>>>> >>>
>>>> >>> This is not quite right. The status query is for a descriptor and NOT for
>>>> >>> channel. Here if the descriptor queried was running then it would be paused
>>>> >>> for the rest pending txn, it would be DMA_IN_PROGRESS.
>>> >>
>>> >> The channel can be paused by the hypervisor. If it is paused, then no
>>> >> other transaction will go through including the pending requests. That's
>>> >> why, I'm checking the state first.
>> >
>> > You are missing the point. Channel can be paused, yes but the descriptor
>> > is in queue and is not paused. The descriptor running is paused, yes.
>> > There is subtle difference between these
> I'll follow your recommendation. PAUSE for the currently active
> descriptor and DMA_IN_PROGRESS for the rest.
>

I'm now confused.

I looked at several DMA driver implementations.

1. They call the dma_cookie_status function to see if the job is done.
2. If done, they return right away.
3. Otherwise, dma_cookie_status returns DMA_IN_PROGRESS.
4. Next, the code checks whether the channel is paused while the return
value is DMA_IN_PROGRESS; if so, it changes the return code to DMA_PAUSED.

Whereas, I was returning paused first before even checking if the
descriptor is done. Are you OK with the sequence 1..4 above?
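As a sketch, the 1..4 sequence above maps onto hidma_tx_status roughly like
this; mchan->paused is the channel-level flag from the earlier snippet, and
the exact final code may differ.

static enum dma_status hidma_tx_status(struct dma_chan *dmach,
				       dma_cookie_t cookie,
				       struct dma_tx_state *txstate)
{
	struct hidma_chan *mchan = to_hidma_chan(dmach);
	enum dma_status ret;

	/* Steps 1..3: ask the cookie framework first; a completed
	 * descriptor is reported as done regardless of channel state.
	 */
	ret = dma_cookie_status(dmach, cookie, txstate);
	if (ret == DMA_COMPLETE)
		return ret;

	/* Step 4: a descriptor that is still in flight on a paused
	 * channel is reported as paused.
	 */
	if (mchan->paused && ret == DMA_IN_PROGRESS)
		ret = DMA_PAUSED;

	return ret;
}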

--
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a
Linux Foundation Collaborative Project

2015-12-05 07:55:27

by Vinod Koul

[permalink] [raw]
Subject: Re: [PATCH V7 2/3] dma: add Qualcomm Technologies HIDMA management driver

On Tue, Dec 01, 2015 at 11:57:34PM -0500, Sinan Kaya wrote:
> On 11/30/2015 10:17 PM, Vinod Koul wrote:
> > On Mon, Nov 30, 2015 at 09:42:01AM -0500, Sinan Kaya wrote:
> >
> >>>> +static int hidma_mgmt_probe(struct platform_device *pdev)
> >>>> +{
> >>>> + struct hidma_mgmt_dev *mgmtdev;
> >>>> + struct resource *res;
> >>>> + void __iomem *virtaddr;
> >>>> + int irq;
> >>>> + int rc;
> >>>> + u32 val;
> >>>> +
> >>>> + pm_runtime_set_autosuspend_delay(&pdev->dev, AUTOSUSPEND_TIMEOUT);
> >>>> + pm_runtime_use_autosuspend(&pdev->dev);
> >>>> + pm_runtime_set_active(&pdev->dev);
> >>>> + pm_runtime_enable(&pdev->dev);
> >>>
> >>> at this time pm core will treat device as fully enabled and pm methods can
> >>> be invoked, but you are not ready yet right. Typically these are done at the
> >>> end of the probe unless you have a reason...
> >>
> >> I need it here because the clocks are declared as ACPI power resources.
> >> The kernel is turning off all power resources during initialization. In
> >> order for this code to touch the hardware, I need to call enable so that
> >> clocks are enabled once again.
> >
> > The question is are you ready in your driver routines to be invoked by pm
> > core?
> >
>
> I don't have any support for the suspend and resume PM APIs. The only PM
> interface I support is runtime PM. PM can turn the clocks on/off based
> on the reference counts it maintains through the get/put APIs. Since PM
> turns the clocks off during power up, before my driver loads, I do need
> to take a runtime PM reference to re-enable them during HW initialization.
> Then I let PM turn the clocks off again after the AUTOSUSPEND_TIMEOUT
> when I'm done.
>
> Is there any other interaction with the PM that I'm not aware of?

No, this is fine. The runtime_resume will be invoked and it will request
resources; are those set up before you enable the device?

--
~Vinod

2015-12-05 07:57:31

by Vinod Koul

[permalink] [raw]
Subject: Re: [PATCH V7 3/3] dma: add Qualcomm Technologies HIDMA channel driver

On Wed, Dec 02, 2015 at 02:04:05PM -0500, Sinan Kaya wrote:
> >> > You are missing the point. Channel can be paused, yes but the descriptor
> >> > is in queue and is not paused. The descriptor running is paused, yes.
> >> > There is subtle difference between these
> > I'll follow your recommendation. PAUSE for the currently active
> > descriptor and DMA_IN_PROGRESS for the rest.
> >
>
> I'm now confused.
>
> I looked at several DMA driver implementations.
>
> 1. They call dma_cookie_status function to see if the job is done.
> 2. If done, they return right ahead.
> 3. Otherwise, dma_cookie_status returns DMA_IN_PROGRESS.
> 4. Next the code checks if the channel is paused and return value is
> DMA_IN_PROGRESS. The code changes return code to DMA_PAUSED.
>
> Whereas, I was returning paused first before even checking if the
> descriptor is done. Are you OK with the sequence 1..4 above?

Yes am okay with this with slight change in 4.

You should set to PAUSED only for current descriptor and not for the ones in
queue

--
~Vinod

2015-12-08 14:33:55

by Sinan Kaya

[permalink] [raw]
Subject: Re: [PATCH V7 3/3] dma: add Qualcomm Technologies HIDMA channel driver

On 12/5/2015 3:00 AM, Vinod Koul wrote:
> On Wed, Dec 02, 2015 at 02:04:05PM -0500, Sinan Kaya wrote:
>>>>> You are missing the point. Channel can be paused, yes but the descriptor
>>>>> is in queue and is not paused. The descriptor running is paused, yes.
>>>>> There is subtle difference between these
>>> I'll follow your recommendation. PAUSE for the currently active
>>> descriptor and DMA_IN_PROGRESS for the rest.
>>>
>>
>> I'm now confused.
>>
>> I looked at several DMA driver implementations.
>>
>> 1. They call dma_cookie_status function to see if the job is done.
>> 2. If done, they return right ahead.
>> 3. Otherwise, dma_cookie_status returns DMA_IN_PROGRESS.
>> 4. Next the code checks if the channel is paused and return value is
>> DMA_IN_PROGRESS. The code changes return code to DMA_PAUSED.
>>
>> Whereas, I was returning paused first before even checking if the
>> descriptor is done. Are you OK with the sequence 1..4 above?
>
> Yes am okay with this with slight change in 4.
>
> You should set to PAUSED only for current descriptor and not for the ones in
> queue
>
OK, I'll work on it.

--
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a
Linux Foundation Collaborative Project

2015-12-08 14:36:09

by Sinan Kaya

[permalink] [raw]
Subject: Re: [PATCH V7 2/3] dma: add Qualcomm Technologies HIDMA management driver

On 12/5/2015 2:58 AM, Vinod Koul wrote:
> On Tue, Dec 01, 2015 at 11:57:34PM -0500, Sinan Kaya wrote:
>> On 11/30/2015 10:17 PM, Vinod Koul wrote:
>>> On Mon, Nov 30, 2015 at 09:42:01AM -0500, Sinan Kaya wrote:
>>>
>>>>>> +static int hidma_mgmt_probe(struct platform_device *pdev)
>>>>>> +{
>>>>>> + struct hidma_mgmt_dev *mgmtdev;
>>>>>> + struct resource *res;
>>>>>> + void __iomem *virtaddr;
>>>>>> + int irq;
>>>>>> + int rc;
>>>>>> + u32 val;
>>>>>> +
>>>>>> + pm_runtime_set_autosuspend_delay(&pdev->dev, AUTOSUSPEND_TIMEOUT);
>>>>>> + pm_runtime_use_autosuspend(&pdev->dev);
>>>>>> + pm_runtime_set_active(&pdev->dev);
>>>>>> + pm_runtime_enable(&pdev->dev);
>>>>>
>>>>> at this time pm core will treat device as fully enabled and pm methods can
>>>>> be invoked, but you are not ready yet right. Typically these are done at the
>>>>> end of the probe unless you have a reason...
>>>>
>>>> I need it here because the clocks are declared as ACPI power resources.
>>>> The kernel is turning off all power resources during initialization. In
>>>> order for this code to touch the hardware, I need to call enable so that
>>>> clocks are enabled once again.
>>>
>>> The question is are you ready in your driver routines to be invoked by pm
>>> core?
>>>
>>
>> I don't have any support for the suspend and resume PM APIs. The only PM
>> interface I support is runtime PM. PM can turn the clocks on/off based
>> on the reference counts it maintains through the get/put APIs. Since PM
>> turns the clocks off during power up, before my driver loads, I do need
>> to take a runtime PM reference to re-enable them during HW initialization.
>> Then I let PM turn the clocks off again after the AUTOSUSPEND_TIMEOUT
>> when I'm done.
>>
>> Is there any other interaction with the PM that I'm not aware of?
>
> No, this is fine. The runtime_resume will be invoked and it will request
> resources; are those set up before you enable the device?
>

Yes, the only resources that this device needs for power management are
the ACPI power resources. The device does not support suspend/resume via
the traditional _PS0 and _PS3 calls. ACPI power resources are initialized
during power up, while ACPI is being enumerated.

The probing of the HIDMA driver happens much later.


--
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a
Linux Foundation Collaborative Project

2015-12-10 20:10:53

by Sinan Kaya

[permalink] [raw]
Subject: Re: [PATCH V7 3/3] dma: add Qualcomm Technologies HIDMA channel driver

On 12/5/2015 3:00 AM, Vinod Koul wrote:
> On Wed, Dec 02, 2015 at 02:04:05PM -0500, Sinan Kaya wrote:
>>>>> You are missing the point. Channel can be paused, yes but the descriptor
>>>>> is in queue and is not paused. The descriptor running is paused, yes.
>>>>> There is subtle difference between these
>>> I'll follow your recommendation. PAUSE for the currently active
>>> descriptor and DMA_IN_PROGRESS for the rest.
>>>
>>
>> I'm now confused.
>>
>> I looked at several DMA driver implementations.
>>
>> 1. They call dma_cookie_status function to see if the job is done.
>> 2. If done, they return right ahead.
>> 3. Otherwise, dma_cookie_status returns DMA_IN_PROGRESS.
>> 4. Next the code checks if the channel is paused and return value is
>> DMA_IN_PROGRESS. The code changes return code to DMA_PAUSED.
>>
>> Whereas, I was returning paused first before even checking if the
>> descriptor is done. Are you OK with the sequence 1..4 above?
>
> Yes am okay with this with slight change in 4.
>
> You should set to PAUSED only for current descriptor and not for the ones in
> queue
>

OK. I'll post a new version with this. Are there any other comments that
need to be addressed?

--
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a
Linux Foundation Collaborative Project

2015-12-11 09:31:57

by Vinod Koul

[permalink] [raw]
Subject: Re: [PATCH V7 3/3] dma: add Qualcomm Technologies HIDMA channel driver

On Thu, Dec 10, 2015 at 03:10:48PM -0500, Sinan Kaya wrote:
> On 12/5/2015 3:00 AM, Vinod Koul wrote:
> > On Wed, Dec 02, 2015 at 02:04:05PM -0500, Sinan Kaya wrote:
> >>>>> You are missing the point. Channel can be paused, yes but the descriptor
> >>>>> is in queue and is not paused. The descriptor running is paused, yes.
> >>>>> There is subtle difference between these
> >>> I'll follow your recommendation. PAUSE for the currently active
> >>> descriptor and DMA_IN_PROGRESS for the rest.
> >>>
> >>
> >> I'm now confused.
> >>
> >> I looked at several DMA driver implementations.
> >>
> >> 1. They call dma_cookie_status function to see if the job is done.
> >> 2. If done, they return right ahead.
> >> 3. Otherwise, dma_cookie_status returns DMA_IN_PROGRESS.
> >> 4. Next the code checks if the channel is paused and return value is
> >> DMA_IN_PROGRESS. The code changes return code to DMA_PAUSED.
> >>
> >> Whereas, I was returning paused first before even checking if the
> >> descriptor is done. Are you OK with the sequence 1..4 above?
> >
> > Yes am okay with this with slight change in 4.
> >
> > You should set to PAUSED only for current descriptor and not for the ones in
> > queue
> >
>
> OK. I'll post a new version with this. Are there any other comments that
> need to be addressed?

Looks okay to me

--
~Vinod

2015-12-17 23:52:56

by Sinan Kaya

[permalink] [raw]
Subject: Re: [PATCH V7 3/3] dma: add Qualcomm Technologies HIDMA channel driver

On 12/11/2015 4:35 AM, Vinod Koul wrote:
> On Thu, Dec 10, 2015 at 03:10:48PM -0500, Sinan Kaya wrote:
>> On 12/5/2015 3:00 AM, Vinod Koul wrote:
>>> On Wed, Dec 02, 2015 at 02:04:05PM -0500, Sinan Kaya wrote:
>>>>>>> You are missing the point. Channel can be paused, yes but the descriptor
>>>>>>> is in queue and is not paused. The descriptor running is paused, yes.
>>>>>>> There is subtle difference between these
>>>>> I'll follow your recommendation. PAUSE for the currently active
>>>>> descriptor and DMA_IN_PROGRESS for the rest.
>>>>>
>>>>
>>>> I'm now confused.
>>>>
>>>> I looked at several DMA driver implementations.
>>>>
>>>> 1. They call dma_cookie_status function to see if the job is done.
>>>> 2. If done, they return right ahead.
>>>> 3. Otherwise, dma_cookie_status returns DMA_IN_PROGRESS.
>>>> 4. Next the code checks if the channel is paused and return value is
>>>> DMA_IN_PROGRESS. The code changes return code to DMA_PAUSED.
>>>>
>>>> Whereas, I was returning paused first before even checking if the
>>>> descriptor is done. Are you OK with the sequence 1..4 above?
>>>
>>> Yes am okay with this with slight change in 4.
>>>
>>> You should set to PAUSED only for current descriptor and not for the ones in
>>> queue
>>>
>>
>> OK. I'll post a new version with this. Is there any other comment that
>> needed to be addressed?
>
> Looks okay to me
>

Vinod, Mark;
I just posted a new series (V10), https://lkml.org/lkml/2015/12/17/447, which
addresses Mark Rutland's concerns and yours.

Can you please review the series and queue it for inclusion if you are OK?
Sinan

--
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project