2015-05-15 09:43:58

by Yong Wu (吴勇)

Subject: [RFC v2 PATCH 0/6] MT8173 IOMMU SUPPORT

This patchset adds support for the m4u (Multimedia Memory Management Unit).
Currently it only supports the m4u with a 2-level page table on mt8173.

It is based on Robin Murphy's "arm64: IOMMU-backed DMA mapping" series [1].

Below is the hardware block diagram of the Mediatek IOMMU:

EMI (External Memory Interface)
|
m4u (Multimedia Memory Management Unit)
|
smi (Smart Multimedia Interface)
|
+---------------+-------
| |
| |
vdec larb disp larb ... SoCs have different local arbiter(larb).
| |
| |
+----+----+ +-----+-----+
| | | | | | ...
| | | | | | ...
| | | | | | ...
MC PP VLD OVL0 RDMA0 WDMA0 ... There are different ports in each larb.

Normally we assign a local arbiter (larb) to each piece of multimedia hardware,
such as display, video decode, video encode and camera, and each larb has a
number of ports. For example, the video decode larb has ports like MC, PP, UFO,
VLD, AVC_MV, PRED_RD and PRED_WR, each corresponding to a function of the video
hardware.

As the diagram shows, all the multimedia modules connect to the m4u via the SMI.
The SMI is responsible for enabling/disabling the IOMMU and for controlling the
clocks of each local arbiter. To enable the IOMMU for video decode, the ports of
the video larb must be configured; and whenever the video hardware runs, whether
the IOMMU is enabled or not, the clock of its larb must be enabled. So we add a
dedicated driver for the SMI.
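
To make the intended usage concrete, here is a rough, hypothetical sketch (not
part of this series) of how a multimedia driver would be expected to bracket its
hardware work with the SMI helpers added in patch 4. Only mtk_smi_larb_get() and
mtk_smi_larb_put() come from include/linux/mtk-smi.h below; the vdec_* names and
the error handling are placeholders.

#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/mtk-smi.h>

static int vdec_run_job(struct device *vdec_dev)
{
	struct device_node *larb_node;
	struct platform_device *larb_pdev;
	int ret;

	/* The client node references its larb, e.g. larb = <&larb1>; */
	larb_node = of_parse_phandle(vdec_dev->of_node, "larb", 0);
	if (!larb_node)
		return -EINVAL;
	larb_pdev = of_find_device_by_node(larb_node);
	of_node_put(larb_node);
	if (!larb_pdev)
		return -EPROBE_DEFER;

	/* Enable the larb (and smi-common) clocks before the h/w works */
	ret = mtk_smi_larb_get(&larb_pdev->dev);
	if (ret)
		return ret;

	/* ... kick the decode hardware and wait for it here ... */

	/* Disable the clocks after the h/w is done */
	mtk_smi_larb_put(&larb_pdev->dev);
	return 0;
}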

[1] http://lists.linuxfoundation.org/pipermail/iommu/2015-February/012236.html

v2:
-add arm short descriptor support.
-separate smi common from smi and change the clock-names according
 to the smi HW.
-delete the hardcoded port names for mt8173 and replace them with
 larb-portes-nr in the dtsi.
-fix some coding style issues.

v1:
http://lists.infradead.org/pipermail/linux-mediatek/2015-March/000058.html

Yong Wu (6):
dt-bindings: iommu: Add binding for mediatek IOMMU
dt-bindings: mediatek: Add smi dts binding
iommu: add ARM short descriptor page table allocator.
iommu/mediatek: Add mt8173 IOMMU driver
soc: mediatek: Add SMI driver
dts: mt8173: Add iommu/smi nodes for mt8173

.../devicetree/bindings/iommu/mediatek,iommu.txt | 51 ++
.../bindings/soc/mediatek/mediatek,smi-larb.txt | 24 +
.../bindings/soc/mediatek/mediatek,smi.txt | 22 +
arch/arm64/boot/dts/mediatek/mt8173.dtsi | 79 +++
drivers/iommu/Kconfig | 18 +
drivers/iommu/Makefile | 2 +
drivers/iommu/io-pgtable-arm-short.c | 490 +++++++++++++++
drivers/iommu/io-pgtable.c | 4 +
drivers/iommu/io-pgtable.h | 6 +
drivers/iommu/mtk_iommu.c | 657 +++++++++++++++++++++
drivers/soc/mediatek/Kconfig | 6 +
drivers/soc/mediatek/Makefile | 1 +
drivers/soc/mediatek/mt8173-smi.c | 298 ++++++++++
include/dt-bindings/iommu/mt8173-iommu-port.h | 112 ++++
include/linux/mtk-smi.h | 39 ++
15 files changed, 1809 insertions(+)
create mode 100644 Documentation/devicetree/bindings/iommu/mediatek,iommu.txt
create mode 100644 Documentation/devicetree/bindings/soc/mediatek/mediatek,smi-larb.txt
create mode 100644 Documentation/devicetree/bindings/soc/mediatek/mediatek,smi.txt
create mode 100644 drivers/iommu/io-pgtable-arm-short.c
create mode 100644 drivers/iommu/mtk_iommu.c
create mode 100644 drivers/soc/mediatek/mt8173-smi.c
create mode 100644 include/dt-bindings/iommu/mt8173-iommu-port.h
create mode 100644 include/linux/mtk-smi.h

--
1.8.1.1.dirty


2015-05-15 09:44:06

by Yong Wu (吴勇)

Subject: [PATCH v2 1/6] dt-bindings: iommu: Add binding for mediatek IOMMU

This patch adds the mediatek iommu dts binding document.

Signed-off-by: Yong Wu <[email protected]>
---
.../devicetree/bindings/iommu/mediatek,iommu.txt | 51 ++++++++++
include/dt-bindings/iommu/mt8173-iommu-port.h | 112 +++++++++++++++++++++
2 files changed, 163 insertions(+)
create mode 100644 Documentation/devicetree/bindings/iommu/mediatek,iommu.txt
create mode 100644 include/dt-bindings/iommu/mt8173-iommu-port.h

diff --git a/Documentation/devicetree/bindings/iommu/mediatek,iommu.txt b/Documentation/devicetree/bindings/iommu/mediatek,iommu.txt
new file mode 100644
index 0000000..f2cc7c0
--- /dev/null
+++ b/Documentation/devicetree/bindings/iommu/mediatek,iommu.txt
@@ -0,0 +1,51 @@
+/******************************************************/
+/* Mediatek IOMMU Hardware Block Diagram */
+/******************************************************/
+ EMI (External Memory Interface)
+ |
+ m4u (Multimedia Memory Management Unit)
+ |
+ smi (Smart Multimedia Interface)
+ |
+ +---------------+-------
+ | |
+ | |
+ vdec larb disp larb ... SoCs have different local arbiter(larb).
+ | |
+ | |
+ +----+----+ +-----+-----+
+ | | | | | | ...
+ | | | | | | ...
+ | | | | | | ...
+ MC PP VLD OVL0 RDMA0 WDMA0 ... There are different ports in each larb.
+
+Required properties:
+- compatible : must be "mediatek,mt8173-m4u".
+- reg : m4u register base and size.
+- interrupts : the interrupt of m4u.
+- clocks : must contain one entry for each clock-names.
+- clock-names : must be "bclk", It is the block clock of m4u.
+- larb-portes-nr : must contain the number of ports for each larb (local
+ arbiter). The numbers are defined in dt-bindings/iommu/mt8173-iommu-port.h.
+- larb : must contain the local arbiters of the current platform. Refer to
+ bindings/soc/mediatek/mediatek,smi.txt. They must be sorted according to
+ the local arbiter index, like larb0, larb1, larb2...
+- #iommu-cells : must be 1. The argument specifies the client port ID as
+ defined in dt-bindings/iommu/mt8173-iommu-port.h.
+
+Example:
+ iommu: mmsys_iommu@10205000 {
+ compatible = "mediatek,mt8173-m4u";
+ reg = <0 0x10205000 0 0x1000>;
+ interrupts = <GIC_SPI 139 IRQ_TYPE_LEVEL_LOW>;
+ clocks = <&infrasys INFRA_M4U>;
+ clock-names = "bclk";
+ larb-portes-nr = <M4U_LARB0_PORT_NR
+ M4U_LARB1_PORT_NR
+ M4U_LARB2_PORT_NR
+ M4U_LARB3_PORT_NR
+ M4U_LARB4_PORT_NR
+ M4U_LARB5_PORT_NR>;
+ larb = <&larb0 &larb1 &larb2 &larb3 &larb4 &larb5>;
+ #iommu-cells = <1>;
+ };
\ No newline at end of file
diff --git a/include/dt-bindings/iommu/mt8173-iommu-port.h b/include/dt-bindings/iommu/mt8173-iommu-port.h
new file mode 100644
index 0000000..09bac4f
--- /dev/null
+++ b/include/dt-bindings/iommu/mt8173-iommu-port.h
@@ -0,0 +1,112 @@
+/*
+ * Copyright (c) 2014-2015 MediaTek Inc.
+ * Author: Yong Wu <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#ifndef __DTS_IOMMU_PORT_MT8173_H
+#define __DTS_IOMMU_PORT_MT8173_H
+
+#define M4U_LARB0_PORT_NR 8
+#define M4U_LARB1_PORT_NR 10
+#define M4U_LARB2_PORT_NR 21
+#define M4U_LARB3_PORT_NR 15
+#define M4U_LARB4_PORT_NR 6
+#define M4U_LARB5_PORT_NR 9
+
+#define M4U_LARB0_PORT(n) (n)
+#define M4U_LARB1_PORT(n) ((n) + M4U_LARB0_PORT_NR + M4U_LARB0_PORT(0))
+#define M4U_LARB2_PORT(n) ((n) + M4U_LARB1_PORT_NR + M4U_LARB1_PORT(0))
+#define M4U_LARB3_PORT(n) ((n) + M4U_LARB2_PORT_NR + M4U_LARB2_PORT(0))
+#define M4U_LARB4_PORT(n) ((n) + M4U_LARB3_PORT_NR + M4U_LARB3_PORT(0))
+#define M4U_LARB5_PORT(n) ((n) + M4U_LARB4_PORT_NR + M4U_LARB4_PORT(0))
+
+/* larb0 */
+#define M4U_PORT_DISP_OVL0 M4U_LARB0_PORT(0)
+#define M4U_PORT_DISP_RDMA0 M4U_LARB0_PORT(1)
+#define M4U_PORT_DISP_WDMA0 M4U_LARB0_PORT(2)
+#define M4U_PORT_DISP_OD_R M4U_LARB0_PORT(3)
+#define M4U_PORT_DISP_OD_W M4U_LARB0_PORT(4)
+#define M4U_PORT_MDP_RDMA0 M4U_LARB0_PORT(5)
+#define M4U_PORT_MDP_WDMA M4U_LARB0_PORT(6)
+#define M4U_PORT_MDP_WROT0 M4U_LARB0_PORT(7)
+
+/* larb1 */
+#define M4U_PORT_HW_VDEC_MC_EXT M4U_LARB1_PORT(0)
+#define M4U_PORT_HW_VDEC_PP_EXT M4U_LARB1_PORT(1)
+#define M4U_PORT_HW_VDEC_UFO_EXT M4U_LARB1_PORT(2)
+#define M4U_PORT_HW_VDEC_VLD_EXT M4U_LARB1_PORT(3)
+#define M4U_PORT_HW_VDEC_VLD2_EXT M4U_LARB1_PORT(4)
+#define M4U_PORT_HW_VDEC_AVC_MV_EXT M4U_LARB1_PORT(5)
+#define M4U_PORT_HW_VDEC_PRED_RD_EXT M4U_LARB1_PORT(6)
+#define M4U_PORT_HW_VDEC_PRED_WR_EXT M4U_LARB1_PORT(7)
+#define M4U_PORT_HW_VDEC_PPWRAP_EXT M4U_LARB1_PORT(8)
+#define M4U_PORT_HW_VDEC_TILE M4U_LARB1_PORT(9)
+
+/* larb2 */
+#define M4U_PORT_IMGO M4U_LARB2_PORT(0)
+#define M4U_PORT_RRZO M4U_LARB2_PORT(1)
+#define M4U_PORT_AAO M4U_LARB2_PORT(2)
+#define M4U_PORT_LCSO M4U_LARB2_PORT(3)
+#define M4U_PORT_ESFKO M4U_LARB2_PORT(4)
+#define M4U_PORT_IMGO_D M4U_LARB2_PORT(5)
+#define M4U_PORT_LSCI M4U_LARB2_PORT(6)
+#define M4U_PORT_LSCI_D M4U_LARB2_PORT(7)
+#define M4U_PORT_BPCI M4U_LARB2_PORT(8)
+#define M4U_PORT_BPCI_D M4U_LARB2_PORT(9)
+#define M4U_PORT_UFDI M4U_LARB2_PORT(10)
+#define M4U_PORT_IMGI M4U_LARB2_PORT(11)
+#define M4U_PORT_IMG2O M4U_LARB2_PORT(12)
+#define M4U_PORT_IMG3O M4U_LARB2_PORT(13)
+#define M4U_PORT_VIPI M4U_LARB2_PORT(14)
+#define M4U_PORT_VIP2I M4U_LARB2_PORT(15)
+#define M4U_PORT_VIP3I M4U_LARB2_PORT(16)
+#define M4U_PORT_LCEI M4U_LARB2_PORT(17)
+#define M4U_PORT_RB M4U_LARB2_PORT(18)
+#define M4U_PORT_RP M4U_LARB2_PORT(19)
+#define M4U_PORT_WR M4U_LARB2_PORT(20)
+
+/* larb3 */
+#define M4U_PORT_VENC_RCPU M4U_LARB3_PORT(0)
+#define M4U_PORT_VENC_REC M4U_LARB3_PORT(1)
+#define M4U_PORT_VENC_BSDMA M4U_LARB3_PORT(2)
+#define M4U_PORT_VENC_SV_COMV M4U_LARB3_PORT(3)
+#define M4U_PORT_VENC_RD_COMV M4U_LARB3_PORT(4)
+#define M4U_PORT_JPGENC_RDMA M4U_LARB3_PORT(5)
+#define M4U_PORT_JPGENC_BSDMA M4U_LARB3_PORT(6)
+#define M4U_PORT_JPGDEC_WDMA M4U_LARB3_PORT(7)
+#define M4U_PORT_JPGDEC_BSDMA M4U_LARB3_PORT(8)
+#define M4U_PORT_VENC_CUR_LUMA M4U_LARB3_PORT(9)
+#define M4U_PORT_VENC_CUR_CHROMA M4U_LARB3_PORT(10)
+#define M4U_PORT_VENC_REF_LUMA M4U_LARB3_PORT(11)
+#define M4U_PORT_VENC_REF_CHROMA M4U_LARB3_PORT(12)
+#define M4U_PORT_VENC_NBM_RDMA M4U_LARB3_PORT(13)
+#define M4U_PORT_VENC_NBM_WDMA M4U_LARB3_PORT(14)
+
+/* larb4 */
+#define M4U_PORT_DISP_OVL1 M4U_LARB4_PORT(0)
+#define M4U_PORT_DISP_RDMA1 M4U_LARB4_PORT(1)
+#define M4U_PORT_DISP_RDMA2 M4U_LARB4_PORT(2)
+#define M4U_PORT_DISP_WDMA1 M4U_LARB4_PORT(3)
+#define M4U_PORT_MDP_RDMA1 M4U_LARB4_PORT(4)
+#define M4U_PORT_MDP_WROT1 M4U_LARB4_PORT(5)
+
+/* larb5 */
+#define M4U_PORT_VENC_RCPU_SET2 M4U_LARB5_PORT(0)
+#define M4U_PORT_VENC_REC_FRM_SET2 M4U_LARB5_PORT(1)
+#define M4U_PORT_VENC_REF_LUMA_SET2 M4U_LARB5_PORT(2)
+#define M4U_PORT_VENC_REC_CHROMA_SET2 M4U_LARB5_PORT(3)
+#define M4U_PORT_VENC_BSDMA_SET2 M4U_LARB5_PORT(4)
+#define M4U_PORT_VENC_CUR_LUMA_SET2 M4U_LARB5_PORT(5)
+#define M4U_PORT_VENC_CUR_CHROMA_SET2 M4U_LARB5_PORT(6)
+#define M4U_PORT_VENC_RD_COMA_SET2 M4U_LARB5_PORT(7)
+#define M4U_PORT_VENC_SV_COMA_SET2 M4U_LARB5_PORT(8)
+
+#endif
--
1.8.1.1.dirty
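
A quick note on how the port defines above compose, since the driver in patch 5
relies on the same arithmetic: each M4U_LARBn_PORT(x) macro adds the port counts
of all preceding larbs to the per-larb index, so the defines form one flat,
global port-ID space:

	M4U_LARB0_PORT(n) = n            /* larb0 starts at 0           */
	M4U_LARB1_PORT(n) = n + 8        /* larb0 has 8 ports           */
	M4U_LARB2_PORT(n) = n + 8 + 10   /* larb1 has 10 ports, so + 18 */
	...

For example, M4U_PORT_HW_VDEC_MC_EXT = M4U_LARB1_PORT(0) = 8 and
M4U_PORT_IMGO = M4U_LARB2_PORT(0) = 18; the last define,
M4U_PORT_VENC_SV_COMA_SET2, is M4U_LARB5_PORT(8) = 68, giving 69 ports in total.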

2015-05-15 09:44:12

by Yong Wu (吴勇)

Subject: [PATCH v2 2/6] dt-bindings: mediatek: Add smi dts binding

This patch adds the SMI binding document.

Signed-off-by: Yong Wu <[email protected]>
---
.../bindings/soc/mediatek/mediatek,smi-larb.txt | 24 ++++++++++++++++++++++
.../bindings/soc/mediatek/mediatek,smi.txt | 22 ++++++++++++++++++++
2 files changed, 46 insertions(+)
create mode 100644 Documentation/devicetree/bindings/soc/mediatek/mediatek,smi-larb.txt
create mode 100644 Documentation/devicetree/bindings/soc/mediatek/mediatek,smi.txt

diff --git a/Documentation/devicetree/bindings/soc/mediatek/mediatek,smi-larb.txt b/Documentation/devicetree/bindings/soc/mediatek/mediatek,smi-larb.txt
new file mode 100644
index 0000000..c06c5b6
--- /dev/null
+++ b/Documentation/devicetree/bindings/soc/mediatek/mediatek,smi-larb.txt
@@ -0,0 +1,24 @@
+SMI(Smart Multimedia Interface) Local Arbiter
+
+For the hardware block diagram, please check bindings/iommu/mediatek,iommu.txt.
+
+Required properties:
+- compatible : must be "mediatek,mt8173-smi-larb"
+- reg : the register and size of each local arbiter.
+- smi : must be "&smi_common". Refer to bindings/soc/mediatek,smi.txt.
+- clocks : must contain one entry for each entry in clock-names.
+ There are 2 clocks:
+ APB clock : Advanced Peripheral Bus clock. It is the clock for setting
+ the registers.
+ SMI clock : It is the clock for transferring data and commands.
+- clock-names : must be "apb" and "smi".
+
+Example:
+ larb0:larb@14021000 {
+ compatible = "mediatek,mt8173-smi-larb";
+ reg = <0 0x14021000 0 0x1000>;
+ smi = <&smi_common>;
+ clocks = <&mmsys MM_SMI_LARB0>,
+ <&mmsys MM_SMI_LARB0>;
+ clock-names = "apb", "smi";
+ };
diff --git a/Documentation/devicetree/bindings/soc/mediatek/mediatek,smi.txt b/Documentation/devicetree/bindings/soc/mediatek/mediatek,smi.txt
new file mode 100644
index 0000000..c2389b4
--- /dev/null
+++ b/Documentation/devicetree/bindings/soc/mediatek/mediatek,smi.txt
@@ -0,0 +1,22 @@
+SMI(Smart Multimedia Interface)
+
+For the hardware block diagram, please check bindings/iommu/mediatek,iommu.txt.
+
+Required properties:
+- compatible : must be "mediatek,mt8173-smi"
+- reg : the register and size of the smi.
+- clocks : must contain one entry for each entry in clock-names.
+ There are 2 clocks:
+ APB clock : Advanced Peripheral Bus clock. It is the clock for setting
+ the registers.
+ SMI clock : It is the clock for transferring data and commands.
+- clock-names : must be "apb" and "smi".
+
+Example:
+ smi_common:smi@14022000 {
+ compatible = "mediatek,mt8173-smi";
+ reg = <0 0x14022000 0 0x1000>;
+ clocks = <&mmsys MM_SMI_COMMON>,
+ <&mmsys MM_SMI_COMMON>;
+ clock-names = "apb", "smi";
+ };
--
1.8.1.1.dirty

2015-05-15 09:45:40

by Yong Wu (吴勇)

Subject: [PATCH v2 3/6] iommu: add ARM short descriptor page table allocator.

This patch adds support for the ARM short descriptor format. It is a 2-level
page table and the allocator supports 4K/64K/1M/16M page sizes.

Signed-off-by: Yong Wu <[email protected]>
---
drivers/iommu/Kconfig | 7 +
drivers/iommu/Makefile | 1 +
drivers/iommu/io-pgtable-arm-short.c | 490 +++++++++++++++++++++++++++++++++++
drivers/iommu/io-pgtable.c | 4 +
drivers/iommu/io-pgtable.h | 6 +
5 files changed, 508 insertions(+)
create mode 100644 drivers/iommu/io-pgtable-arm-short.c

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 1ae4e54..3d2eac6 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -39,6 +39,13 @@ config IOMMU_IO_PGTABLE_LPAE_SELFTEST

If unsure, say N here.

+config IOMMU_IO_PGTABLE_SHORT
+ bool "ARMv7/v8 Short Descriptor Format"
+ select IOMMU_IO_PGTABLE
+ help
+ Enable support for the ARM short descriptor page table format.
+ It has a 2-level page table and the allocator supports 4K/64K/1M/16M.
+
endmenu

config IOMMU_IOVA
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index 080ffab..815b3c8 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -3,6 +3,7 @@ obj-$(CONFIG_IOMMU_API) += iommu-traces.o
obj-$(CONFIG_IOMMU_API) += iommu-sysfs.o
obj-$(CONFIG_IOMMU_IO_PGTABLE) += io-pgtable.o
obj-$(CONFIG_IOMMU_IO_PGTABLE_LPAE) += io-pgtable-arm.o
+obj-$(CONFIG_IOMMU_IO_PGTABLE_SHORT) += io-pgtable-arm-short.o
obj-$(CONFIG_IOMMU_IOVA) += iova.o
obj-$(CONFIG_OF_IOMMU) += of_iommu.o
obj-$(CONFIG_MSM_IOMMU) += msm_iommu.o msm_iommu_dev.o
diff --git a/drivers/iommu/io-pgtable-arm-short.c b/drivers/iommu/io-pgtable-arm-short.c
new file mode 100644
index 0000000..cc286ce5
--- /dev/null
+++ b/drivers/iommu/io-pgtable-arm-short.c
@@ -0,0 +1,490 @@
+/*
+ * Copyright (c) 2014-2015 MediaTek Inc.
+ * Author: Yong Wu <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#define pr_fmt(fmt) "arm-short-desc io-pgtable: "fmt
+
+#include <linux/err.h>
+#include <linux/mm.h>
+#include <linux/iommu.h>
+#include <linux/errno.h>
+
+#include "io-pgtable.h"
+
+typedef u32 arm_short_iopte;
+
+struct arm_short_io_pgtable {
+ struct io_pgtable iop;
+ struct kmem_cache *ptekmem;
+ size_t pgd_size;
+ void *pgd;
+};
+
+#define io_pgtable_short_to_data(x) \
+ container_of((x), struct arm_short_io_pgtable, iop)
+
+#define io_pgtable_ops_to_pgtable(x) \
+ container_of((x), struct io_pgtable, ops)
+
+#define io_pgtable_short_ops_to_data(x) \
+ io_pgtable_short_to_data(io_pgtable_ops_to_pgtable(x))
+
+#define ARM_SHORT_MAX_ADDR_BITS 32
+
+#define ARM_SHORT_PGDIR_SHIFT 20
+#define ARM_SHORT_PAGE_SHIFT 12
+#define ARM_SHORT_PTRS_PER_PTE 256
+#define ARM_SHORT_BYTES_PER_PTE 1024
+
+/* 1 level pagetable */
+#define ARM_SHORT_F_PGD_TYPE_PAGE (0x1)
+#define ARM_SHORT_F_PGD_TYPE_PAGE_MSK (0x3)
+#define ARM_SHORT_F_PGD_TYPE_SECTION (0x2)
+#define ARM_SHORT_F_PGD_TYPE_SUPERSECTION (0x2 | (1 << 18))
+#define ARM_SHORT_F_PGD_TYPE_SECTION_MSK (0x3 | (1 << 18))
+#define ARM_SHORT_F_PGD_TYPE_IS_PAGE(pgd) (((pgd) & 0x3) == 1)
+#define ARM_SHORT_F_PGD_TYPE_IS_SECTION(pgd) \
+ (((pgd) & ARM_SHORT_F_PGD_TYPE_SECTION_MSK) \
+ == ARM_SHORT_F_PGD_TYPE_SECTION)
+#define ARM_SHORT_F_PGD_TYPE_IS_SUPERSECTION(pgd) \
+ (((pgd) & ARM_SHORT_F_PGD_TYPE_SECTION_MSK) \
+ == ARM_SHORT_F_PGD_TYPE_SUPERSECTION)
+
+#define ARM_SHORT_F_PGD_B_BIT BIT(2)
+#define ARM_SHORT_F_PGD_C_BIT BIT(3)
+#define ARM_SHORT_F_PGD_IMPLE_BIT BIT(9)
+#define ARM_SHORT_F_PGD_S_BIT BIT(16)
+#define ARM_SHORT_F_PGD_NG_BIT BIT(17)
+#define ARM_SHORT_F_PGD_NS_BIT_PAGE BIT(3)
+#define ARM_SHORT_F_PGD_NS_BIT_SECTION BIT(19)
+
+#define ARM_SHORT_F_PGD_PA_PAGETABLE_MSK 0xfffffc00
+#define ARM_SHORT_F_PGD_PA_SECTION_MSK 0xfff00000
+#define ARM_SHORT_F_PGD_PA_SUPERSECTION_MSK 0xff000000
+
+/* 2 level pagetable */
+#define ARM_SHORT_F_PTE_TYPE_GET(val) ((val) & 0x3)
+#define ARM_SHORT_F_PTE_TYPE_LARGE BIT(0)
+#define ARM_SHORT_F_PTE_TYPE_SMALL BIT(1)
+#define ARM_SHORT_F_PTE_B_BIT BIT(2)
+#define ARM_SHORT_F_PTE_C_BIT BIT(3)
+#define ARM_SHORT_F_PTE_IMPLE_BIT BIT(9)
+#define ARM_SHORT_F_PTE_S_BIT BIT(10)
+#define ARM_SHORT_F_PTE_PA_LARGE_MSK 0xffff0000
+#define ARM_SHORT_F_PTE_PA_SMALL_MSK 0xfffff000
+
+#define ARM_SHORT_PGD_IDX(a) ((a) >> ARM_SHORT_PGDIR_SHIFT)
+#define ARM_SHORT_PTE_IDX(a) \
+ (((a) >> ARM_SHORT_PAGE_SHIFT) & 0xff)
+#define ARM_SHORT_GET_PTE_VA(pgd) \
+ (phys_to_virt((unsigned long)pgd & ARM_SHORT_F_PGD_PA_PAGETABLE_MSK))
+
+static arm_short_iopte *
+arm_short_get_pte_in_pgd(arm_short_iopte curpgd, unsigned int iova)
+{
+ arm_short_iopte *pte;
+
+ pte = ARM_SHORT_GET_PTE_VA(curpgd);
+ pte += ARM_SHORT_PTE_IDX(iova);
+ return pte;
+}
+
+static arm_short_iopte *
+arm_short_supersection_start(arm_short_iopte *pgd)
+{
+ return (arm_short_iopte *)(round_down((unsigned long)pgd, (16 * 4)));
+}
+
+static int _arm_short_check_free_pte(struct arm_short_io_pgtable *data,
+ arm_short_iopte *pgd)
+{
+ arm_short_iopte *pte;
+ int i;
+
+ pte = ARM_SHORT_GET_PTE_VA(*pgd);
+
+ for (i = 0; i < ARM_SHORT_PTRS_PER_PTE; i++) {
+ if (pte[i] != 0)
+ return 1;
+ }
+
+ /* Free PTE */
+ kmem_cache_free(data->ptekmem, pte);
+ *pgd = 0;
+
+ return 0;
+}
+
+static phys_addr_t arm_short_iova_to_phys(struct io_pgtable_ops *ops,
+ unsigned long iova)
+{
+ struct arm_short_io_pgtable *data = io_pgtable_short_ops_to_data(ops);
+ arm_short_iopte *pte, *pgd = data->pgd;
+ phys_addr_t pa = 0;
+
+ pgd += ARM_SHORT_PGD_IDX(iova);
+
+ if (ARM_SHORT_F_PGD_TYPE_IS_PAGE(*pgd)) {
+ u8 pte_type;
+
+ pte = arm_short_get_pte_in_pgd(*pgd, iova);
+ pte_type = ARM_SHORT_F_PTE_TYPE_GET(*pte);
+
+ if (pte_type == ARM_SHORT_F_PTE_TYPE_LARGE) {
+ pa = (*pte) & ARM_SHORT_F_PTE_PA_LARGE_MSK;
+ pa |= iova & (~ARM_SHORT_F_PTE_PA_LARGE_MSK);
+ } else if (pte_type == ARM_SHORT_F_PTE_TYPE_SMALL) {
+ pa = (*pte) & ARM_SHORT_F_PTE_PA_SMALL_MSK;
+ pa |= iova & (~ARM_SHORT_F_PTE_PA_SMALL_MSK);
+ }
+ } else {
+ if (ARM_SHORT_F_PGD_TYPE_IS_SECTION(*pgd)) {
+ pa = (*pgd) & ARM_SHORT_F_PGD_PA_SECTION_MSK;
+ pa |= iova & (~ARM_SHORT_F_PGD_PA_SECTION_MSK);
+ } else if (ARM_SHORT_F_PGD_TYPE_IS_SUPERSECTION(*pgd)) {
+ pa = (*pgd) & ARM_SHORT_F_PGD_PA_SUPERSECTION_MSK;
+ pa |= iova & (~ARM_SHORT_F_PGD_PA_SUPERSECTION_MSK);
+ }
+ }
+
+ return pa;
+}
+
+static int arm_short_unmap(struct io_pgtable_ops *ops, unsigned long iova,
+ size_t size)
+{
+ struct arm_short_io_pgtable *data = io_pgtable_short_ops_to_data(ops);
+ arm_short_iopte *pgd;
+ unsigned long iova_start = iova;
+ unsigned long long end_plus_1 = iova + size;
+ const struct iommu_gather_ops *tlb = data->iop.cfg.tlb;
+ void *cookie = data->iop.cookie;
+ int ret;
+
+ do {
+ pgd = (arm_short_iopte *)data->pgd + ARM_SHORT_PGD_IDX(iova);
+
+ if (ARM_SHORT_F_PGD_TYPE_IS_PAGE(*pgd)) {
+ arm_short_iopte *pte;
+ unsigned int pte_offset;
+ unsigned int num_to_clean;
+
+ pte_offset = ARM_SHORT_PTE_IDX(iova);
+ num_to_clean =
+ min((unsigned int)((end_plus_1 - iova) / PAGE_SIZE),
+ (ARM_SHORT_PTRS_PER_PTE - pte_offset));
+
+ pte = arm_short_get_pte_in_pgd(*pgd, iova);
+
+ memset(pte, 0, num_to_clean * sizeof(arm_short_iopte));
+
+ ret = _arm_short_check_free_pte(data, pgd);
+ if (ret == 1)/* pte is not freed, need to flush pte */
+ tlb->flush_pgtable(
+ pte,
+ num_to_clean * sizeof(arm_short_iopte),
+ cookie);
+ else
+ tlb->flush_pgtable(pgd, sizeof(arm_short_iopte),
+ cookie);
+
+ iova += num_to_clean << PAGE_SHIFT;
+ } else if (ARM_SHORT_F_PGD_TYPE_IS_SECTION(*pgd)) {
+ *pgd = 0;
+
+ tlb->flush_pgtable(pgd, sizeof(arm_short_iopte),
+ cookie);
+ iova += SZ_1M;
+ } else if (ARM_SHORT_F_PGD_TYPE_IS_SUPERSECTION(*pgd)) {
+ arm_short_iopte *start;
+
+ start = arm_short_supersection_start(pgd);
+ if (unlikely(start != pgd))
+ pr_warn("%s:suppersection start isn't aligned.iova=0x%lx,pgd=0x%x\n",
+ __func__, iova, *pgd);
+
+ memset(start, 0, 16 * sizeof(arm_short_iopte));
+
+ tlb->flush_pgtable(start, 16 * sizeof(arm_short_iopte),
+ cookie);
+
+ iova = (iova + SZ_16M) & (~(SZ_16M - 1));
+ } else {
+ break;
+ }
+ } while (iova < end_plus_1 && iova);
+
+ tlb->tlb_add_flush(iova_start, size, true, cookie);
+
+ return 0;
+}
+
+static arm_short_iopte __arm_short_pte_port(unsigned int prot, bool large)
+{
+ arm_short_iopte pteprot;
+
+ pteprot = ARM_SHORT_F_PTE_S_BIT;
+
+ pteprot |= large ? ARM_SHORT_F_PTE_TYPE_LARGE :
+ ARM_SHORT_F_PTE_TYPE_SMALL;
+
+ if (prot & IOMMU_CACHE)
+ pteprot |= ARM_SHORT_F_PTE_B_BIT | ARM_SHORT_F_PTE_C_BIT;
+
+ return pteprot;
+}
+
+static arm_short_iopte __arm_short_pgd_port(int prot, bool super)
+{
+ arm_short_iopte pgdprot;
+
+ pgdprot = ARM_SHORT_F_PGD_S_BIT;
+ pgdprot |= super ? ARM_SHORT_F_PGD_TYPE_SUPERSECTION :
+ ARM_SHORT_F_PGD_TYPE_SECTION;
+ if (prot & IOMMU_CACHE)
+ pgdprot |= ARM_SHORT_F_PGD_C_BIT | ARM_SHORT_F_PGD_B_BIT;
+
+ return pgdprot;
+}
+
+static int _arm_short_map_page(struct arm_short_io_pgtable *data,
+ unsigned int iova, phys_addr_t pa,
+ unsigned int prot, bool largepage)
+{
+ arm_short_iopte *pgd = data->pgd;
+ arm_short_iopte *pte;
+ arm_short_iopte pgdprot, pteprot;
+ arm_short_iopte mask = largepage ? ARM_SHORT_F_PTE_PA_LARGE_MSK :
+ ARM_SHORT_F_PTE_PA_SMALL_MSK;
+ int i, ptenum = largepage ? 16 : 1;
+ bool ptenew = false;
+ void *pte_new_va;
+ void *cookie = data->iop.cookie;
+
+ if ((iova | pa) & (~mask)) {
+ pr_err("IOVA|PA Not Aligned(iova=0x%x pa=0x%pa type=%s)\n",
+ iova, &pa, largepage ? "large page" : "small page");
+ return -EINVAL;
+ }
+
+ pgdprot = ARM_SHORT_F_PGD_TYPE_PAGE;
+ if (data->iop.cfg.quirks & IO_PGTABLE_QUIRK_ARM_NS)
+ pgdprot |= ARM_SHORT_F_PGD_NS_BIT_PAGE;
+
+ pgd += ARM_SHORT_PGD_IDX(iova);
+
+ if (!(*pgd)) {
+ pte_new_va = kmem_cache_zalloc(data->ptekmem, GFP_KERNEL);
+ if (unlikely(!pte_new_va)) {
+ pr_err("Failed to alloc pte\n");
+ return -ENOMEM;
+ }
+
+ /* Check pte alignment -- must 1K align */
+ if (unlikely((unsigned long)pte_new_va &
+ (ARM_SHORT_BYTES_PER_PTE - 1))) {
+ pr_err("The new pte is not aligned! (va=0x%p)\n",
+ pte_new_va);
+ kmem_cache_free(data->ptekmem, (void *)pte_new_va);
+ return -ENOMEM;
+ }
+ ptenew = true;
+ *pgd = virt_to_phys(pte_new_va) | pgdprot;
+ kmemleak_ignore(pte_new_va);
+ data->iop.cfg.tlb->flush_pgtable(pgd, sizeof(arm_short_iopte),
+ cookie);
+ } else {
+ /* Someone else may have allocated for this pgd */
+ if (((*pgd) & (~ARM_SHORT_F_PGD_PA_PAGETABLE_MSK)) != pgdprot) {
+ pr_err("The prot of old pgd is not Right!iova=0x%x pgd=0x%x pgprot=0x%x\n",
+ iova, (*pgd), pgdprot);
+ return -EEXIST;
+ }
+ }
+
+ pteprot = (arm_short_iopte)pa;
+ pteprot |= __arm_short_pte_port(prot, largepage);
+
+ pte = arm_short_get_pte_in_pgd(*pgd, iova);
+
+ pr_debug("iova:0x%x,pte:0x%p(0x%x),prot:0x%x-%s\n",
+ iova, pte, ARM_SHORT_PTE_IDX(iova), pteprot,
+ largepage ? "large page" : "small page");
+
+ for (i = 0; i < ptenum; i++) {
+ if (pte[i]) {
+ pr_err("The To-Map pte exists!(iova=0x%x pte=0x%x i=%d)\n",
+ iova, pte[i], i);
+ goto err_out;
+ }
+ pte[i] = pteprot;
+ }
+
+ data->iop.cfg.tlb->flush_pgtable(pte, ptenum * sizeof(arm_short_iopte),
+ cookie);
+ return 0;
+
+ err_out:
+ for (i--; i >= 0; i--)
+ pte[i] = 0;
+ if (ptenew)
+ kmem_cache_free(data->ptekmem, pte_new_va);
+ return -EEXIST;
+}
+
+static int _arm_short_map_section(struct arm_short_io_pgtable *data,
+ unsigned int iova, phys_addr_t pa,
+ int prot, bool supersection)
+{
+ arm_short_iopte pgprot;
+ arm_short_iopte mask = supersection ?
+ ARM_SHORT_F_PGD_PA_SUPERSECTION_MSK :
+ ARM_SHORT_F_PGD_PA_SECTION_MSK;
+ arm_short_iopte *pgd = data->pgd;
+ int i;
+ unsigned int pgdnum = supersection ? 16 : 1;
+
+ if ((iova | pa) & (~mask)) {
+ pr_err("IOVA|PA Not Aligned(iova=0x%x pa=0x%pa type=%s)\n",
+ iova, &pa, supersection ? "supersection" : "section");
+ return -EINVAL;
+ }
+
+ pgprot = (arm_short_iopte)pa;
+
+ if (data->iop.cfg.quirks & IO_PGTABLE_QUIRK_ARM_NS)
+ pgprot |= ARM_SHORT_F_PGD_NS_BIT_SECTION;
+
+ pgprot |= __arm_short_pgd_port(prot, supersection);
+
+ pgd += ARM_SHORT_PGD_IDX(iova);
+
+ pr_debug("iova:0x%x,pgd:0x%p(0x%p+0x%x),value:0x%x-%s\n",
+ iova, pgd, data->pgd, ARM_SHORT_PGD_IDX(iova),
+ pgprot, supersection ? "supersection" : "section");
+
+ for (i = 0; i < pgdnum; i++) {
+ if (unlikely(*pgd)) {
+ pr_err("The To-Map pdg exists!(iova=0x%x pgd=0x%x i=%d)\n",
+ iova, pgd[i], i);
+ goto err_out;
+ }
+ pgd[i] = pgprot;
+ }
+ data->iop.cfg.tlb->flush_pgtable(pgd,
+ pgdnum * sizeof(arm_short_iopte),
+ data->iop.cookie);
+ return 0;
+
+ err_out:
+ for (i--; i >= 0; i--)
+ pgd[i] = 0;
+ return -EEXIST;
+}
+
+static int arm_short_map(struct io_pgtable_ops *ops, unsigned long iova,
+ phys_addr_t paddr, size_t size, int prot)
+{
+ struct arm_short_io_pgtable *data = io_pgtable_short_ops_to_data(ops);
+ const struct iommu_gather_ops *tlb = data->iop.cfg.tlb;
+ int ret;
+
+ if (!(prot & (IOMMU_READ | IOMMU_WRITE)))
+ return -EINVAL;
+
+ if (size == SZ_4K) {/* most case */
+ ret = _arm_short_map_page(data, iova, paddr, prot, false);
+ } else if (size == SZ_64K) {
+ ret = _arm_short_map_page(data, iova, paddr, prot, true);
+ } else if (size == SZ_1M) {
+ ret = _arm_short_map_section(data, iova, paddr, prot, false);
+ } else if (size == SZ_16M) {
+ ret = _arm_short_map_section(data, iova, paddr, prot, true);
+ } else {
+ ret = -EINVAL;
+ }
+ tlb->tlb_add_flush(iova, size, true, data->iop.cookie);
+ return ret;
+}
+
+static struct io_pgtable *
+arm_short_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
+{
+ struct arm_short_io_pgtable *data;
+
+ if (cfg->ias != 32)
+ return NULL;
+
+ if (cfg->oas > ARM_SHORT_MAX_ADDR_BITS)
+ return NULL;
+
+ cfg->pgsize_bitmap &= SZ_4K | SZ_64K | SZ_1M | SZ_16M;
+
+ data = kzalloc(sizeof(*data), GFP_KERNEL);
+ if (!data)
+ return NULL;
+
+ data->pgd_size = SZ_16K;
+
+ data->pgd = alloc_pages_exact(data->pgd_size, GFP_KERNEL | __GFP_ZERO);
+ if (!data->pgd)
+ goto out_free_data;
+
+ cfg->tlb->flush_pgtable(data->pgd, data->pgd_size, cookie);
+
+ /* kmem for pte */
+ data->ptekmem = kmem_cache_create("short-descriptor-pte",
+ ARM_SHORT_BYTES_PER_PTE,
+ ARM_SHORT_BYTES_PER_PTE,
+ 0, NULL);
+
+ if (IS_ERR_OR_NULL(data->ptekmem)) {
+ pr_err("Failed to Create cached mem for PTE %ld\n",
+ PTR_ERR(data->ptekmem));
+ goto out_free_pte;
+ }
+
+ /* TTBRs */
+ cfg->arm_short_cfg.ttbr[0] = virt_to_phys(data->pgd);
+ cfg->arm_short_cfg.ttbr[1] = 0;
+
+ cfg->arm_short_cfg.tcr = 0;
+
+ data->iop.ops = (struct io_pgtable_ops) {
+ .map = arm_short_map,
+ .unmap = arm_short_unmap,
+ .iova_to_phys = arm_short_iova_to_phys,
+ };
+
+ return &data->iop;
+
+out_free_pte:
+ free_pages_exact(data->pgd, data->pgd_size);
+out_free_data:
+ kfree(data);
+ return NULL;
+}
+
+static void arm_short_free_pgtable(struct io_pgtable *iop)
+{
+ struct arm_short_io_pgtable *data = io_pgtable_short_to_data(iop);
+
+ kmem_cache_destroy(data->ptekmem);
+ free_pages_exact(data->pgd, data->pgd_size);
+ kfree(data);
+}
+
+struct io_pgtable_init_fns io_pgtable_arm_short_init_fns = {
+ .alloc = arm_short_alloc_pgtable,
+ .free = arm_short_free_pgtable,
+};
+
diff --git a/drivers/iommu/io-pgtable.c b/drivers/iommu/io-pgtable.c
index 6436fe2..14a9b3a 100644
--- a/drivers/iommu/io-pgtable.c
+++ b/drivers/iommu/io-pgtable.c
@@ -28,6 +28,7 @@ extern struct io_pgtable_init_fns io_pgtable_arm_32_lpae_s1_init_fns;
extern struct io_pgtable_init_fns io_pgtable_arm_32_lpae_s2_init_fns;
extern struct io_pgtable_init_fns io_pgtable_arm_64_lpae_s1_init_fns;
extern struct io_pgtable_init_fns io_pgtable_arm_64_lpae_s2_init_fns;
+extern struct io_pgtable_init_fns io_pgtable_arm_short_init_fns;

static const struct io_pgtable_init_fns *
io_pgtable_init_table[IO_PGTABLE_NUM_FMTS] =
@@ -38,6 +39,9 @@ io_pgtable_init_table[IO_PGTABLE_NUM_FMTS] =
[ARM_64_LPAE_S1] = &io_pgtable_arm_64_lpae_s1_init_fns,
[ARM_64_LPAE_S2] = &io_pgtable_arm_64_lpae_s2_init_fns,
#endif
+#ifdef CONFIG_IOMMU_IO_PGTABLE_SHORT
+ [ARM_SHORT_DESC] = &io_pgtable_arm_short_init_fns,
+#endif
};

struct io_pgtable_ops *alloc_io_pgtable_ops(enum io_pgtable_fmt fmt,
diff --git a/drivers/iommu/io-pgtable.h b/drivers/iommu/io-pgtable.h
index 10e32f6..47efaab 100644
--- a/drivers/iommu/io-pgtable.h
+++ b/drivers/iommu/io-pgtable.h
@@ -9,6 +9,7 @@ enum io_pgtable_fmt {
ARM_32_LPAE_S2,
ARM_64_LPAE_S1,
ARM_64_LPAE_S2,
+ ARM_SHORT_DESC,
IO_PGTABLE_NUM_FMTS,
};

@@ -62,6 +63,11 @@ struct io_pgtable_cfg {
u64 vttbr;
u64 vtcr;
} arm_lpae_s2_cfg;
+
+ struct {
+ u64 ttbr[2];
+ u64 tcr;
+ } arm_short_cfg;
};
};

--
1.8.1.1.dirty
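
For reference, a minimal sketch of how a caller is expected to use this
allocator through the io-pgtable API. It mirrors what the mt8173 IOMMU driver in
patch 5 does; my_tlb_ops, cookie, iova and paddr are placeholders for the
caller's own iommu_gather_ops (tlb_flush_all/tlb_add_flush/tlb_sync/
flush_pgtable callbacks) and addresses.

	struct io_pgtable_cfg cfg = {
		.quirks		= IO_PGTABLE_QUIRK_ARM_NS,
		.pgsize_bitmap	= SZ_4K | SZ_64K | SZ_1M | SZ_16M,
		.ias		= 32,		/* 32bit input address */
		.oas		= 32,		/* 32bit output address */
		.tlb		= &my_tlb_ops,
	};
	struct io_pgtable_ops *ops;

	ops = alloc_io_pgtable_ops(ARM_SHORT_DESC, &cfg, cookie);
	if (!ops)
		return -ENOMEM;

	/* cfg.arm_short_cfg.ttbr[0] now holds the pgd PA to program into the h/w */

	/* map/unmap work on one supported granule at a time, e.g. a 4K page: */
	ops->map(ops, iova, paddr, SZ_4K, IOMMU_READ | IOMMU_WRITE);
	ops->unmap(ops, iova, SZ_4K);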

2015-05-15 09:44:19

by Yong Wu (吴勇)

Subject: [PATCH v2 4/6] soc: mediatek: Add SMI driver

This patch adds the SMI (Smart Multimedia Interface) driver. This driver is
responsible for enabling/disabling the IOMMU and controlling the clocks of each
local arbiter.

Signed-off-by: Yong Wu <[email protected]>
---
drivers/soc/mediatek/Kconfig | 6 +
drivers/soc/mediatek/Makefile | 1 +
drivers/soc/mediatek/mt8173-smi.c | 298 ++++++++++++++++++++++++++++++++++++++
include/linux/mtk-smi.h | 39 +++++
4 files changed, 344 insertions(+)
create mode 100644 drivers/soc/mediatek/mt8173-smi.c
create mode 100644 include/linux/mtk-smi.h

diff --git a/drivers/soc/mediatek/Kconfig b/drivers/soc/mediatek/Kconfig
index 1d34819..5935ead 100644
--- a/drivers/soc/mediatek/Kconfig
+++ b/drivers/soc/mediatek/Kconfig
@@ -15,3 +15,9 @@ config MTK_SCPSYS
help
Say yes here to add support for the MediaTek SCPSYS power domain
driver.
+
+config MTK_SMI
+ bool
+ help
+ The SMI helps enable/disable the iommu in MediaTek SoCs and controls
+ the clock for each local arbiter.
diff --git a/drivers/soc/mediatek/Makefile b/drivers/soc/mediatek/Makefile
index ce88693..c086261 100644
--- a/drivers/soc/mediatek/Makefile
+++ b/drivers/soc/mediatek/Makefile
@@ -1,2 +1,3 @@
obj-$(CONFIG_MTK_PMIC_WRAP) += mtk-pmic-wrap.o
obj-$(CONFIG_MTK_SCPSYS) += mtk-scpsys.o
+obj-$(CONFIG_MTK_SMI) += mt8173-smi.o
diff --git a/drivers/soc/mediatek/mt8173-smi.c b/drivers/soc/mediatek/mt8173-smi.c
new file mode 100644
index 0000000..67bccec
--- /dev/null
+++ b/drivers/soc/mediatek/mt8173-smi.c
@@ -0,0 +1,298 @@
+/*
+ * Copyright (c) 2014-2015 MediaTek Inc.
+ * Author: Yong Wu <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#include <linux/io.h>
+#include <linux/interrupt.h>
+#include <linux/platform_device.h>
+#include <linux/clk.h>
+#include <linux/err.h>
+#include <linux/mm.h>
+#include <linux/of_address.h>
+#include <linux/of_irq.h>
+#include <linux/of_platform.h>
+#include <linux/mtk-smi.h>
+
+#define SMI_LARB_MMU_EN (0xf00)
+#define F_SMI_MMU_EN(port) (1 << (port))
+
+enum {
+ MTK_CLK_APB,
+ MTK_CLK_SMI,
+ MTK_CLK_MAX,
+};
+
+struct mtk_smi_common {
+ void __iomem *base;
+ struct clk *clk[MTK_CLK_MAX];
+};
+
+struct mtk_smi_larb {
+ void __iomem *base;
+ spinlock_t portlock; /* lock for config port */
+ struct clk *clk[MTK_CLK_MAX];
+ struct device *smi;
+};
+
+static const char * const mtk_smi_clk_name[MTK_CLK_MAX] = {
+ "apb", "smi"
+};
+
+static int mtk_smi_common_get(struct device *smidev)
+{
+ struct mtk_smi_common *smipriv = dev_get_drvdata(smidev);
+ int i, ret;
+
+ for (i = MTK_CLK_APB; i < MTK_CLK_MAX; i++) {
+ ret = clk_enable(smipriv->clk[i]);
+ if (ret) {
+ dev_err(smidev,
+ "Failed to enable the clock of smi_common\n");
+ goto err_smi_clk;
+ }
+ }
+ return ret;
+
+err_smi_clk:
+ if (i == MTK_CLK_SMI)
+ clk_disable(smipriv->clk[MTK_CLK_APB]);
+ return ret;
+}
+
+static void mtk_smi_common_put(struct device *smidev)
+{
+ struct mtk_smi_common *smipriv = dev_get_drvdata(smidev);
+ int i;
+
+ for (i = MTK_CLK_SMI; i >= MTK_CLK_APB; i--)
+ clk_disable(smipriv->clk[i]);
+}
+
+int mtk_smi_larb_get(struct device *larbdev)
+{
+ struct mtk_smi_larb *larbpriv = dev_get_drvdata(larbdev);
+ int i, ret = 0;
+
+ ret = mtk_smi_common_get(larbpriv->smi);
+ if (ret)
+ return ret;
+
+ for (i = MTK_CLK_APB; i < MTK_CLK_MAX; i++) {
+ ret = clk_enable(larbpriv->clk[i]);
+ if (ret) {
+ dev_err(larbdev,
+ "Failed to enable larb clock%d. ret 0x%x\n",
+ i, ret);
+ goto err_larb_clk;
+ }
+ }
+
+ return ret;
+
+err_larb_clk:
+ if (i == MTK_CLK_SMI)
+ clk_disable(larbpriv->clk[MTK_CLK_APB]);
+ mtk_smi_common_put(larbpriv->smi);
+ return ret;
+}
+
+void mtk_smi_larb_put(struct device *larbdev)
+{
+ struct mtk_smi_larb *larbpriv = dev_get_drvdata(larbdev);
+ int i;
+
+ for (i = MTK_CLK_SMI; i >= MTK_CLK_APB; i--)
+ clk_disable(larbpriv->clk[i]);
+
+ mtk_smi_common_put(larbpriv->smi);
+}
+
+int mtk_smi_config_port(struct device *larbdev, unsigned int larbportid,
+ bool iommuen)
+{
+ struct mtk_smi_larb *larbpriv = dev_get_drvdata(larbdev);
+ unsigned long flags;
+ int ret;
+ u32 reg;
+
+ ret = mtk_smi_larb_get(larbdev);
+ if (ret)
+ return ret;
+
+ spin_lock_irqsave(&larbpriv->portlock, flags);
+ reg = readl(larbpriv->base + SMI_LARB_MMU_EN);
+ reg &= ~F_SMI_MMU_EN(larbportid);
+ if (iommuen)
+ reg |= F_SMI_MMU_EN(larbportid);
+ writel(reg, larbpriv->base + SMI_LARB_MMU_EN);
+ spin_unlock_irqrestore(&larbpriv->portlock, flags);
+
+ mtk_smi_larb_put(larbdev);
+
+ return 0;
+}
+
+static int mtk_smi_larb_probe(struct platform_device *pdev)
+{
+ struct mtk_smi_larb *larbpriv;
+ struct resource *res;
+ struct device *dev = &pdev->dev;
+ struct device_node *smi_node;
+ struct platform_device *smi_pdev;
+ int i, ret;
+
+ larbpriv = devm_kzalloc(dev, sizeof(struct mtk_smi_larb), GFP_KERNEL);
+ if (!larbpriv)
+ return -ENOMEM;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ larbpriv->base = devm_ioremap_resource(dev, res);
+ if (IS_ERR(larbpriv->base)) {
+ dev_err(dev, "Failed to get the larbbase (0x%lx)\n",
+ PTR_ERR(larbpriv->base));
+ return PTR_ERR(larbpriv->base);
+ }
+
+ for (i = MTK_CLK_APB; i < MTK_CLK_MAX; i++) {
+ larbpriv->clk[i] = devm_clk_get(dev, mtk_smi_clk_name[i]);
+
+ if (IS_ERR(larbpriv->clk[i])) {
+ ret = PTR_ERR(larbpriv->clk[i]);
+ dev_err(dev, "Failed to get larb%d clock (0x%x)\n",
+ i, ret);
+ goto fail_larb_clk;
+ } else {
+ ret = clk_prepare(larbpriv->clk[i]);
+ if (ret) {
+ dev_err(dev, "Failed to prepare larb clock%d (0x%x)\n",
+ i, ret);
+ goto fail_larb_clk;
+ }
+ }
+ }
+
+ smi_node = of_parse_phandle(dev->of_node, "smi", 0);
+ if (!smi_node) {
+ dev_err(dev, "Failed to get smi node\n");
+ ret = -EINVAL;
+ goto fail_larb_clk;
+ }
+
+ smi_pdev = of_find_device_by_node(smi_node);
+ of_node_put(smi_node);
+ if (smi_pdev) {
+ larbpriv->smi = &smi_pdev->dev;
+ } else {
+ dev_err(dev, "Failed to get the smi_common device\n");
+ ret = -EINVAL;
+ goto fail_larb_clk;
+ }
+
+ spin_lock_init(&larbpriv->portlock);
+ dev_set_drvdata(dev, larbpriv);
+ return 0;
+
+fail_larb_clk:
+ while (--i >= MTK_CLK_APB)
+ clk_unprepare(larbpriv->clk[i]);
+ return ret;
+}
+
+static const struct of_device_id mtk_smi_larb_of_ids[] = {
+ { .compatible = "mediatek,mt8173-smi-larb",
+ },
+ {}
+};
+
+static struct platform_driver mtk_smi_larb_driver = {
+ .probe = mtk_smi_larb_probe,
+ .driver = {
+ .name = "mtksmilarb",
+ .of_match_table = mtk_smi_larb_of_ids,
+ }
+};
+
+static int mtk_smi_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct mtk_smi_common *smipriv;
+ int ret, i;
+
+ smipriv = devm_kzalloc(dev, sizeof(*smipriv), GFP_KERNEL);
+ if (!smipriv)
+ return -ENOMEM;
+
+ for (i = MTK_CLK_APB; i < MTK_CLK_MAX; i++) {
+ smipriv->clk[i] = devm_clk_get(dev, mtk_smi_clk_name[i]);
+
+ if (IS_ERR(smipriv->clk[i])) {
+ ret = PTR_ERR(smipriv->clk[i]);
+ dev_err(dev, "Failed to get smi-%s clock (0x%x)\n",
+ mtk_smi_clk_name[i], ret);
+ goto fail_smi_clk;
+ } else {
+ ret = clk_prepare(smipriv->clk[i]);
+ if (ret) {
+ dev_err(dev, "Failed to prepare smi%d clock 0x%x\n",
+ i, ret);
+ goto fail_smi_clk;
+ }
+ }
+ }
+
+ dev_set_drvdata(dev, smipriv);
+ return ret;
+
+fail_smi_clk:
+ if (i == MTK_CLK_SMI)
+ clk_unprepare(smipriv->clk[MTK_CLK_APB]);
+ return ret;
+}
+
+static const struct of_device_id mtk_smi_of_ids[] = {
+ { .compatible = "mediatek,mt8173-smi",
+ },
+ {}
+};
+
+static struct platform_driver mtk_smi_driver = {
+ .probe = mtk_smi_probe,
+ .driver = {
+ .name = "mtksmi",
+ .of_match_table = mtk_smi_of_ids,
+ }
+};
+
+static int __init mtk_smi_init(void)
+{
+ int ret;
+
+ ret = platform_driver_register(&mtk_smi_driver);
+ if (ret != 0) {
+ pr_err("Failed to register SMI driver\n");
+ return ret;
+ }
+
+ ret = platform_driver_register(&mtk_smi_larb_driver);
+ if (ret != 0) {
+ pr_err("Failed to register SMI-LARB driver\n");
+ goto fail_smi_larb;
+ }
+ return ret;
+
+fail_smi_larb:
+ platform_driver_unregister(&mtk_smi_driver);
+ return ret;
+}
+
+subsys_initcall(mtk_smi_init);
+
diff --git a/include/linux/mtk-smi.h b/include/linux/mtk-smi.h
new file mode 100644
index 0000000..ad07e6a
--- /dev/null
+++ b/include/linux/mtk-smi.h
@@ -0,0 +1,39 @@
+/*
+ * Copyright (c) 2014-2015 MediaTek Inc.
+ * Author: Yong Wu <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#ifndef MTK_IOMMU_SMI_H
+#define MTK_IOMMU_SMI_H
+#include <linux/device.h>
+
+/*
+ * Enable/disable the iommu for a port. It is only called by the iommu driver.
+ *
+ * Returns 0 on success, an error code otherwise.
+ */
+int mtk_smi_config_port(struct device *larbdev, unsigned int larbportid,
+ bool iommuen);
+/*
+ * The multimedia modules should call the two functions below,
+ * which open/close the clocks of the larb.
+ * The client dtsi should reference its larb, like "larb = <&larb0>",
+ * so that the driver can get the larb's platform_device.
+ *
+ * mtk_smi_larb_get should be called before the multimedia h/w works.
+ * mtk_smi_larb_put should be called after the h/w is done.
+ *
+ * Returns 0 on success, an error code otherwise.
+ */
+int mtk_smi_larb_get(struct device *plarbdev);
+void mtk_smi_larb_put(struct device *plarbdev);
+
+#endif
--
1.8.1.1.dirty

2015-05-15 09:45:12

by Yong Wu (吴勇)

Subject: [PATCH v2 5/6] iommu/mediatek: Add mt8173 IOMMU driver

This patch adds support for mediatek m4u (MultiMedia Memory Management Unit).

Signed-off-by: Yong Wu <[email protected]>
---
drivers/iommu/Kconfig | 11 +
drivers/iommu/Makefile | 1 +
drivers/iommu/mtk_iommu.c | 657 ++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 669 insertions(+)
create mode 100644 drivers/iommu/mtk_iommu.c

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 3d2eac6..da83704 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -359,4 +359,15 @@ config ARM_SMMU
Say Y here if your SoC includes an IOMMU device implementing
the ARM SMMU architecture.

+config MTK_IOMMU
+ bool "MTK IOMMU Support"
+ select IOMMU_API
+ select IOMMU_DMA
+ select IOMMU_IO_PGTABLE_SHORT
+ select MTK_SMI
+ help
+ Support for the IOMMUs on certain Mediatek SOCs.
+
+ If unsure, say N here.
+
endif # IOMMU_SUPPORT
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index 815b3c8..3224d29 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -17,6 +17,7 @@ obj-$(CONFIG_IRQ_REMAP) += intel_irq_remapping.o irq_remapping.o
obj-$(CONFIG_OMAP_IOMMU) += omap-iommu.o
obj-$(CONFIG_OMAP_IOMMU_DEBUG) += omap-iommu-debug.o
obj-$(CONFIG_ROCKCHIP_IOMMU) += rockchip-iommu.o
+obj-$(CONFIG_MTK_IOMMU) += mtk_iommu.o
obj-$(CONFIG_TEGRA_IOMMU_GART) += tegra-gart.o
obj-$(CONFIG_TEGRA_IOMMU_SMMU) += tegra-smmu.o
obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
new file mode 100644
index 0000000..378e53e
--- /dev/null
+++ b/drivers/iommu/mtk_iommu.c
@@ -0,0 +1,657 @@
+/*
+ * Copyright (c) 2014-2015 MediaTek Inc.
+ * Author: Yong Wu <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#include <linux/io.h>
+#include <linux/interrupt.h>
+#include <linux/platform_device.h>
+#include <linux/iommu.h>
+#include <linux/of_iommu.h>
+#include <linux/dma-mapping.h>
+#include <linux/dma-iommu.h>
+#include <linux/of_address.h>
+#include <linux/of_irq.h>
+#include <linux/of_platform.h>
+#include <linux/clk.h>
+#include <linux/iopoll.h>
+#include <linux/mtk-smi.h>
+#include <asm/cacheflush.h>
+
+#include "io-pgtable.h"
+
+#define REG_MMU_PT_BASE_ADDR 0x000
+
+#define REG_MMU_INVALIDATE 0x020
+#define F_ALL_INVLD 0x2
+#define F_MMU_INV_RANGE 0x1
+
+#define REG_MMU_INVLD_START_A 0x024
+#define REG_MMU_INVLD_END_A 0x028
+
+#define REG_MMU_INV_SEL 0x038
+#define F_INVLD_EN0 BIT(0)
+#define F_INVLD_EN1 BIT(1)
+
+#define REG_MMU_STANDARD_AXI_MODE 0x048
+#define REG_MMU_DCM_DIS 0x050
+
+#define REG_MMU_CTRL_REG 0x110
+#define F_MMU_PREFETCH_RT_REPLACE_MOD BIT(4)
+#define F_MMU_TF_PROTECT_SEL(prot) (((prot) & 0x3) << 5)
+#define F_COHERENCE_EN BIT(8)
+
+#define REG_MMU_IVRP_PADDR 0x114
+#define F_MMU_IVRP_PA_SET(pa) ((pa) >> 1)
+
+#define REG_MMU_INT_CONTROL0 0x120
+#define F_L2_MULIT_HIT_EN BIT(0)
+#define F_TABLE_WALK_FAULT_INT_EN BIT(1)
+#define F_PREETCH_FIFO_OVERFLOW_INT_EN BIT(2)
+#define F_MISS_FIFO_OVERFLOW_INT_EN BIT(3)
+#define F_PREFETCH_FIFO_ERR_INT_EN BIT(5)
+#define F_MISS_FIFO_ERR_INT_EN BIT(6)
+#define F_INT_L2_CLR_BIT BIT(12)
+
+#define REG_MMU_INT_MAIN_CONTROL 0x124
+#define F_INT_TRANSLATION_FAULT BIT(0)
+#define F_INT_MAIN_MULTI_HIT_FAULT BIT(1)
+#define F_INT_INVALID_PA_FAULT BIT(2)
+#define F_INT_ENTRY_REPLACEMENT_FAULT BIT(3)
+#define F_INT_TLB_MISS_FAULT BIT(4)
+#define F_INT_MISS_TRANSATION_FIFO_FAULT BIT(5)
+#define F_INT_PRETETCH_TRANSATION_FIFO_FAULT BIT(6)
+
+#define REG_MMU_CPE_DONE 0x12C
+
+#define REG_MMU_FAULT_ST1 0x134
+
+#define REG_MMU_FAULT_VA 0x13c
+#define F_MMU_FAULT_VA_MSK 0xfffff000
+#define F_MMU_FAULT_VA_WRITE_BIT BIT(1)
+#define F_MMU_FAULT_VA_LAYER_BIT BIT(0)
+
+#define REG_MMU_INVLD_PA 0x140
+#define REG_MMU_INT_ID 0x150
+#define F_MMU0_INT_ID_LARB_ID(a) (((a) >> 7) & 0x7)
+#define F_MMU0_INT_ID_PORT_ID(a) (((a) >> 2) & 0x1f)
+
+#define MTK_PROTECT_PA_ALIGN (128)
+
+#define MTK_IOMMU_LARB_MAX_NR 8
+
+struct mtk_iommu_info {
+ void __iomem *base;
+ int irq;
+ struct device *dev;
+ struct device *larbdev[MTK_IOMMU_LARB_MAX_NR];
+ struct clk *bclk;
+ dma_addr_t protect_base; /* protect memory base */
+ unsigned int larb_nr; /* local arbiter number */
+ unsigned int larb_portid_base[MTK_IOMMU_LARB_MAX_NR];
+ /* the port id base for each local arbiter */
+};
+
+struct mtk_iommu_domain {
+ struct imu_pgd_t *pgd;
+ spinlock_t pgtlock; /* lock for modifying page table */
+
+ struct io_pgtable_cfg cfg;
+ struct io_pgtable_ops *iop;
+
+ struct mtk_iommu_info *imuinfo;
+ struct iommu_domain domain;
+};
+
+static struct mtk_iommu_domain *to_mtk_domain(struct iommu_domain *dom)
+{
+ return container_of(dom, struct mtk_iommu_domain, domain);
+}
+
+static void mtk_iommu_clear_intr(void __iomem *m4u_base)
+{
+ u32 val;
+
+ val = readl(m4u_base + REG_MMU_INT_CONTROL0);
+ val |= F_INT_L2_CLR_BIT;
+ writel(val, m4u_base + REG_MMU_INT_CONTROL0);
+}
+
+static void mtk_iommu_tlb_flush_all(void *cookie)
+{
+ struct mtk_iommu_domain *domain = cookie;
+ u32 val;
+
+ val = F_INVLD_EN1 | F_INVLD_EN0;
+ writel(val, domain->imuinfo->base + REG_MMU_INV_SEL);
+ writel(F_ALL_INVLD, domain->imuinfo->base + REG_MMU_INVALIDATE);
+}
+
+static void mtk_iommu_tlb_add_flush(unsigned long iova, size_t size,
+ bool leaf, void *cookie)
+{
+ struct mtk_iommu_domain *domain = cookie;
+ void __iomem *m4u_base = domain->imuinfo->base;
+ unsigned int iova_start = iova, iova_end = iova + size - 1;
+ int ret;
+ u32 val;
+
+ val = F_INVLD_EN1 | F_INVLD_EN0;
+ writel(val, m4u_base + REG_MMU_INV_SEL);
+
+ writel(iova_start, m4u_base + REG_MMU_INVLD_START_A);
+ writel(iova_end, m4u_base + REG_MMU_INVLD_END_A);
+ writel(F_MMU_INV_RANGE, m4u_base + REG_MMU_INVALIDATE);
+
+ ret = readl_poll_timeout_atomic(m4u_base + REG_MMU_CPE_DONE, val,
+ val != 0, 10, 1000000);
+ if (ret) {
+ dev_warn(domain->imuinfo->dev, "Range invalidation timed out\n");
+ mtk_iommu_tlb_flush_all(cookie);
+ }
+ writel(0, m4u_base + REG_MMU_CPE_DONE);
+}
+
+static void mtk_iommu_flush_pgtable(void *ptr, size_t size, void *cookie)
+{
+ /*
+ * After arch_setup_dma_ops is removed,
+ * this will be replaced with dma_map_page.
+ */
+ __dma_flush_range(ptr, ptr + size);
+}
+
+static struct iommu_gather_ops mtk_iommu_gather_ops = {
+ .tlb_flush_all = mtk_iommu_tlb_flush_all,
+ .tlb_add_flush = mtk_iommu_tlb_add_flush,
+ .tlb_sync = mtk_iommu_tlb_flush_all,
+ .flush_pgtable = mtk_iommu_flush_pgtable,
+};
+
+static irqreturn_t mtk_iommu_isr(int irq, void *dev_id)
+{
+ struct mtk_iommu_domain *mtkdomain = dev_id;
+ struct mtk_iommu_info *piommu = mtkdomain->imuinfo;
+ struct device *dev = piommu->dev;
+ void __iomem *m4u_base = piommu->base;
+ u32 int_state, regval, fault_iova, fault_pa;
+ unsigned int fault_larb, fault_port;
+ bool layer, write;
+
+ int_state = readl(m4u_base + REG_MMU_FAULT_ST1);
+
+ /* read error info from registers */
+ fault_iova = readl(m4u_base + REG_MMU_FAULT_VA);
+ layer = fault_iova & F_MMU_FAULT_VA_LAYER_BIT;
+ write = fault_iova & F_MMU_FAULT_VA_WRITE_BIT;
+ fault_iova &= F_MMU_FAULT_VA_MSK;
+ fault_pa = readl(m4u_base + REG_MMU_INVLD_PA);
+ regval = readl(m4u_base + REG_MMU_INT_ID);
+ fault_larb = F_MMU0_INT_ID_LARB_ID(regval);
+ fault_port = F_MMU0_INT_ID_PORT_ID(regval);
+
+ report_iommu_fault(&mtkdomain->domain, dev, fault_iova,
+ write ? IOMMU_FAULT_WRITE : IOMMU_FAULT_READ);
+
+ if (int_state & F_INT_TRANSLATION_FAULT) {
+ dev_err_ratelimited(
+ dev,
+ "fault:iova=0x%x pa=0x%x larb=%d port=%d layer=%d %s\n",
+ fault_iova, fault_pa, fault_larb, fault_port,
+ layer, write ? "write" : "read");
+ }
+
+ if (int_state & F_INT_MAIN_MULTI_HIT_FAULT)
+ dev_err_ratelimited(dev,
+ "multi-hit!iova=0x%x larb=%d port=%d\n",
+ fault_iova, fault_larb, fault_port);
+
+ if (int_state & F_INT_INVALID_PA_FAULT) {
+ if (!(int_state & F_INT_TRANSLATION_FAULT))
+ dev_err_ratelimited(
+ dev,
+ "invalid pa!iova=0x%x larb=%d port=%d\n",
+ fault_iova, fault_larb, fault_port);
+ }
+ if (int_state & F_INT_ENTRY_REPLACEMENT_FAULT)
+ dev_err_ratelimited(dev,
+ "replace-fault!iova=0x%x larb=%d port=%d\n",
+ fault_iova, fault_larb, fault_port);
+
+ if (int_state & F_INT_TLB_MISS_FAULT)
+ dev_err_ratelimited(dev,
+ "tlb miss-fault!iova=0x%x larb=%d port=%d\n",
+ fault_iova, fault_larb, fault_port);
+
+ mtk_iommu_tlb_flush_all(mtkdomain);
+
+ mtk_iommu_clear_intr(m4u_base);
+
+ return IRQ_HANDLED;
+}
+
+static int mtk_iommu_parse_dt(struct platform_device *pdev,
+ struct mtk_iommu_info *piommu)
+{
+ struct device *dev = &pdev->dev;
+ struct device_node *ofnode;
+ struct resource *res;
+ unsigned int i, port_nr, lastportnr = 0;
+ int ret;
+
+ ofnode = dev->of_node;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ piommu->base = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(piommu->base)) {
+ dev_err(dev, "Failed to get base (%lx)\n",
+ PTR_ERR(piommu->base));
+ ret = PTR_ERR(piommu->base);
+ goto err_parse_dt;
+ }
+
+ piommu->irq = platform_get_irq(pdev, 0);
+ if (piommu->irq < 0) {
+ dev_err(dev, "Failed to get IRQ (%d)\n", piommu->irq);
+ ret = piommu->irq;
+ goto err_parse_dt;
+ }
+
+ piommu->bclk = devm_clk_get(dev, "bclk");
+ if (IS_ERR(piommu->bclk)) {
+ dev_err(dev, "Failed to get the bclk (%lx)\n",
+ PTR_ERR(piommu->bclk));
+ ret = PTR_ERR(piommu->bclk);
+ goto err_parse_dt;
+ }
+
+ ret = of_property_count_u32_elems(ofnode, "larb-portes-nr");
+ if (ret < 0) {
+ dev_err(dev, "Failed to get larb-portes-nr\n");
+ goto err_parse_dt;
+ } else {
+ piommu->larb_nr = ret;
+ }
+
+ for (i = 0; i < piommu->larb_nr; i++) {
+ ret = of_property_read_u32_index(ofnode, "larb-portes-nr",
+ i, &port_nr);
+ if (ret) {
+ dev_err(dev, "Failed to get the port nr@larb%d\n", i);
+ goto err_parse_dt;
+ } else {
+ piommu->larb_portid_base[i] = lastportnr;
+ lastportnr += port_nr;
+ }
+ }
+ piommu->larb_portid_base[i] = lastportnr;
+
+ for (i = 0; i < piommu->larb_nr; i++) {
+ struct device_node *larbnode;
+ struct platform_device *plarbdev;
+
+ larbnode = of_parse_phandle(ofnode, "larb", i);
+ if (!larbnode) {
+ dev_err(dev, "Failed to get the larb property\n");
+ ret = -EINVAL;
+ goto err_parse_dt;
+ }
+ plarbdev = of_find_device_by_node(larbnode);
+ of_node_put(larbnode);
+ if (!plarbdev) {
+ dev_err(dev, "Failed to get the dev@larb%d\n", i);
+ ret = -EINVAL;
+ goto err_parse_dt;
+ } else {
+ piommu->larbdev[i] = &plarbdev->dev;
+ }
+ }
+
+ return 0;
+
+err_parse_dt:
+ return ret;
+}
+
+static int mtk_iommu_hw_init(const struct mtk_iommu_domain *mtkdomain)
+{
+ struct mtk_iommu_info *piommu = mtkdomain->imuinfo;
+ void __iomem *m4u_base = piommu->base;
+ u32 regval;
+ int ret = 0;
+
+ ret = clk_prepare_enable(piommu->bclk);
+ if (ret)
+ return -ENODEV;
+
+ writel(mtkdomain->cfg.arm_short_cfg.ttbr[0],
+ m4u_base + REG_MMU_PT_BASE_ADDR);
+
+ regval = F_MMU_PREFETCH_RT_REPLACE_MOD |
+ F_MMU_TF_PROTECT_SEL(2) |
+ F_COHERENCE_EN;
+ writel(regval, m4u_base + REG_MMU_CTRL_REG);
+
+ regval = F_L2_MULIT_HIT_EN |
+ F_TABLE_WALK_FAULT_INT_EN |
+ F_PREETCH_FIFO_OVERFLOW_INT_EN |
+ F_MISS_FIFO_OVERFLOW_INT_EN |
+ F_PREFETCH_FIFO_ERR_INT_EN |
+ F_MISS_FIFO_ERR_INT_EN;
+ writel(regval, m4u_base + REG_MMU_INT_CONTROL0);
+
+ regval = F_INT_TRANSLATION_FAULT |
+ F_INT_MAIN_MULTI_HIT_FAULT |
+ F_INT_INVALID_PA_FAULT |
+ F_INT_ENTRY_REPLACEMENT_FAULT |
+ F_INT_TLB_MISS_FAULT |
+ F_INT_MISS_TRANSATION_FIFO_FAULT |
+ F_INT_PRETETCH_TRANSATION_FIFO_FAULT;
+ writel(regval, m4u_base + REG_MMU_INT_MAIN_CONTROL);
+
+ regval = ALIGN(piommu->protect_base, MTK_PROTECT_PA_ALIGN);
+ regval = (u32)F_MMU_IVRP_PA_SET(regval);
+ writel(regval, m4u_base + REG_MMU_IVRP_PADDR);
+
+ writel(0, m4u_base + REG_MMU_DCM_DIS);
+ writel(0, m4u_base + REG_MMU_STANDARD_AXI_MODE);
+
+ if (devm_request_irq(piommu->dev, piommu->irq, mtk_iommu_isr, 0,
+ dev_name(piommu->dev), (void *)mtkdomain)) {
+ dev_err(piommu->dev, "Failed @ IRQ-%d Request\n", piommu->irq);
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+static int mtk_iommu_config_port(struct mtk_iommu_info *piommu,
+ int portid, bool iommuen)
+{
+ bool validlarb = false;
+ int i, larbid, larbportid;
+ int ret;
+
+ if (portid >= piommu->larb_portid_base[piommu->larb_nr]) {
+ dev_err(piommu->dev, "Invalid portid (%d)\n", portid);
+ return -EINVAL;
+ }
+
+ for (i = piommu->larb_nr - 1; i >= 0; i--) {
+ if (portid >= piommu->larb_portid_base[i]) {
+ larbid = i;
+ larbportid = portid -
+ piommu->larb_portid_base[i];
+ validlarb = true;
+ break;
+ }
+ }
+
+ if (validlarb) {
+ ret = mtk_smi_config_port(piommu->larbdev[larbid],
+ larbportid, iommuen);
+ if (ret) {
+ dev_err(piommu->dev,
+ "Failed to config port portid-%d\n", portid);
+ }
+ } else {
+ ret = -EINVAL;
+ dev_err(piommu->dev, "Failed to find out portid-%d\n", portid);
+ }
+ return ret;
+}
+
+static int mtk_iommu_dev_enable_iommu(struct mtk_iommu_info *imuinfo,
+ struct device *dev, bool enable)
+{
+ struct of_phandle_args out_args = {0};
+ struct device *imudev = imuinfo->dev;
+ unsigned int i = 0;
+ int ret = 0;
+
+ while (!of_parse_phandle_with_args(dev->of_node, "iommus",
+ "#iommu-cells", i++, &out_args)) {
+ if (out_args.np != imudev->of_node)
+ continue;
+ if (out_args.args_count != 1) {
+ dev_err(imudev,
+ "invalid #iommu-cells property for IOMMU\n");
+ }
+
+ dev_dbg(imudev, "%s iommu @ port:%d\n",
+ enable ? "enable" : "disable", out_args.args[0]);
+
+ ret = mtk_iommu_config_port(imuinfo, out_args.args[0], enable);
+ if (!ret)
+ dev->archdata.dma_ops = (enable) ?
+ imudev->archdata.dma_ops : NULL;
+ else
+ break;
+ }
+ return ret;
+}
+
+static struct iommu_domain *mtk_iommu_domain_alloc(unsigned type)
+{
+ struct mtk_iommu_domain *priv;
+
+ /* We only support unmanaged domains for now */
+ if (type != IOMMU_DOMAIN_UNMANAGED)
+ return NULL;
+
+ priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+ if (!priv)
+ return ERR_PTR(-ENOMEM);
+
+ priv->cfg.quirks = IO_PGTABLE_QUIRK_ARM_NS;
+ priv->cfg.pgsize_bitmap = SZ_16M | SZ_1M | SZ_64K | SZ_4K,
+ priv->cfg.ias = 32;
+ priv->cfg.oas = 32;
+ priv->cfg.tlb = &mtk_iommu_gather_ops;
+
+ priv->iop = alloc_io_pgtable_ops(ARM_SHORT_DESC, &priv->cfg, priv);
+ if (!priv->iop) {
+ pr_err("Failed to alloc io pgtable@MTK iommu\n");
+ goto err_free_priv;
+ }
+
+ spin_lock_init(&priv->pgtlock);
+
+ priv->domain.geometry.aperture_start = 0;
+ priv->domain.geometry.aperture_end = ~0;
+ priv->domain.geometry.force_aperture = true;
+
+ return &priv->domain;
+
+err_free_priv:
+ kfree(priv);
+ return NULL;
+}
+
+static void mtk_iommu_domain_free(struct iommu_domain *domain)
+{
+ struct mtk_iommu_domain *priv = to_mtk_domain(domain);
+
+ free_io_pgtable_ops(priv->iop);
+ kfree(priv);
+}
+
+static int mtk_iommu_attach_device(struct iommu_domain *domain,
+ struct device *dev)
+{
+ struct mtk_iommu_domain *priv = to_mtk_domain(domain);
+ int ret;
+
+ /* work around for arch_setup_dma_ops */
+ if (!priv->imuinfo)
+ ret = 0;
+ else
+ ret = mtk_iommu_dev_enable_iommu(priv->imuinfo, dev, true);
+ return ret;
+}
+
+static void mtk_iommu_detach_device(struct iommu_domain *domain,
+ struct device *dev)
+{
+ struct mtk_iommu_domain *priv = to_mtk_domain(domain);
+
+ mtk_iommu_dev_enable_iommu(priv->imuinfo, dev, false);
+}
+
+static int mtk_iommu_map(struct iommu_domain *domain, unsigned long iova,
+ phys_addr_t paddr, size_t size, int prot)
+{
+ struct mtk_iommu_domain *priv = to_mtk_domain(domain);
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&priv->pgtlock, flags);
+ ret = priv->iop->map(priv->iop, iova, paddr, size, prot);
+ spin_unlock_irqrestore(&priv->pgtlock, flags);
+
+ return ret;
+}
+
+static size_t mtk_iommu_unmap(struct iommu_domain *domain,
+ unsigned long iova, size_t size)
+{
+ struct mtk_iommu_domain *priv = to_mtk_domain(domain);
+ unsigned long flags;
+
+ spin_lock_irqsave(&priv->pgtlock, flags);
+ priv->iop->unmap(priv->iop, iova, size);
+ spin_unlock_irqrestore(&priv->pgtlock, flags);
+
+ return size;
+}
+
+static phys_addr_t mtk_iommu_iova_to_phys(struct iommu_domain *domain,
+ dma_addr_t iova)
+{
+ struct mtk_iommu_domain *priv = to_mtk_domain(domain);
+ unsigned long flags;
+ phys_addr_t pa;
+
+ spin_lock_irqsave(&priv->pgtlock, flags);
+ pa = priv->iop->iova_to_phys(priv->iop, iova);
+ spin_unlock_irqrestore(&priv->pgtlock, flags);
+
+ return pa;
+}
+
+static struct iommu_ops mtk_iommu_ops = {
+ .domain_alloc = mtk_iommu_domain_alloc,
+ .domain_free = mtk_iommu_domain_free,
+ .attach_dev = mtk_iommu_attach_device,
+ .detach_dev = mtk_iommu_detach_device,
+ .map = mtk_iommu_map,
+ .unmap = mtk_iommu_unmap,
+ .map_sg = default_iommu_map_sg,
+ .iova_to_phys = mtk_iommu_iova_to_phys,
+ .pgsize_bitmap = SZ_4K | SZ_64K | SZ_1M | SZ_16M,
+};
+
+static const struct of_device_id mtk_iommu_of_ids[] = {
+ { .compatible = "mediatek,mt8173-m4u",
+ },
+ {}
+};
+
+static int mtk_iommu_probe(struct platform_device *pdev)
+{
+ struct iommu_domain *domain;
+ struct mtk_iommu_domain *mtk_domain;
+ struct mtk_iommu_info *piommu;
+ void __iomem *protect;
+ int ret;
+
+ piommu = devm_kzalloc(&pdev->dev, sizeof(*piommu), GFP_KERNEL);
+ if (!piommu)
+ return -ENOMEM;
+
+ piommu->dev = &pdev->dev;
+
+ /*
+ * Allocate the protect memory region.
+ * HW accesses this region when a translation fault occurs.
+ */
+ protect = devm_kzalloc(piommu->dev, MTK_PROTECT_PA_ALIGN * 2,
+ GFP_KERNEL);
+ if (!protect)
+ return -ENOMEM;
+ piommu->protect_base = dma_map_single(piommu->dev, protect,
+ MTK_PROTECT_PA_ALIGN * 2,
+ DMA_TO_DEVICE); /* clean cache */
+
+ ret = mtk_iommu_parse_dt(pdev, piommu);
+ if (ret)
+ return ret;
+
+ /*
+ * arch_setup_dma_ops is not the proper interface here.
+ * It should be replaced with iommu_dma_create_domain
+ * in the next dma patch.
+ */
+ arch_setup_dma_ops(piommu->dev, 0, (1ULL << 32) - 1, &mtk_iommu_ops, 0);
+
+ domain = iommu_dma_raw_domain(get_dma_domain(piommu->dev));
+ if (!domain) {
+ dev_err(piommu->dev, "Failed to get iommu domain\n");
+ ret = -EINVAL;
+ goto err_free_domain;
+ }
+
+ mtk_domain = to_mtk_domain(domain);
+ mtk_domain->imuinfo = piommu;
+
+ dev_set_drvdata(piommu->dev, piommu);
+
+ ret = mtk_iommu_hw_init(mtk_domain);
+ if (ret < 0) {
+ dev_err(piommu->dev, "Failed @ HW init\n");
+ goto err_free_domain;
+ }
+
+ return 0;
+
+err_free_domain:
+ arch_teardown_dma_ops(piommu->dev);
+ return ret;
+}
+
+static int mtk_iommu_remove(struct platform_device *pdev)
+{
+ struct mtk_iommu_info *piommu = dev_get_drvdata(&pdev->dev);
+
+ arch_teardown_dma_ops(piommu->dev);
+
+ dma_unmap_single(piommu->dev, piommu->protect_base,
+ MTK_PROTECT_PA_ALIGN * 2, DMA_TO_DEVICE);
+
+ return 0;
+}
+
+static struct platform_driver mtk_iommu_driver = {
+ .probe = mtk_iommu_probe,
+ .remove = mtk_iommu_remove,
+ .driver = {
+ .name = "mtkiommu",
+ .of_match_table = mtk_iommu_of_ids,
+ }
+};
+
+static int __init mtk_iommu_init(void)
+{
+ return platform_driver_register(&mtk_iommu_driver);
+}
+
+subsys_initcall(mtk_iommu_init);
+
--
1.8.1.1.dirty

2015-05-15 09:44:25

by Yong Wu (吴勇)

[permalink] [raw]
Subject: [PATCH v2 6/6] dts: mt8173: Add iommu/smi nodes for mt8173

This patch adds the iommu/larb nodes for mt8173.

Signed-off-by: Yong Wu <[email protected]>
---
arch/arm64/boot/dts/mediatek/mt8173.dtsi | 79 ++++++++++++++++++++++++++++++++
1 file changed, 79 insertions(+)

diff --git a/arch/arm64/boot/dts/mediatek/mt8173.dtsi b/arch/arm64/boot/dts/mediatek/mt8173.dtsi
index 3fca624..95a8a15 100644
--- a/arch/arm64/boot/dts/mediatek/mt8173.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt8173.dtsi
@@ -16,6 +16,7 @@
#include <dt-bindings/reset-controller/mt8173-resets.h>
#include <dt-bindings/clock/mt8173-clk.h>
#include <dt-bindings/power/mt8173-power.h>
+#include <dt-bindings/iommu/mt8173-iommu-port.h>
#include "mt8173-pinfunc.h"

/ {
@@ -200,6 +201,22 @@
reg = <0 0x10200620 0 0x20>;
};

+ iommu: mmsys_iommu@10205000 {
+ compatible = "mediatek,mt8173-m4u";
+ reg = <0 0x10205000 0 0x1000>;
+ interrupts = <GIC_SPI 139 IRQ_TYPE_LEVEL_LOW>;
+ clocks = <&infracfg CLK_INFRA_M4U>;
+ clock-names = "bclk";
+ larb-portes-nr = <M4U_LARB0_PORT_NR
+ M4U_LARB1_PORT_NR
+ M4U_LARB2_PORT_NR
+ M4U_LARB3_PORT_NR
+ M4U_LARB4_PORT_NR
+ M4U_LARB5_PORT_NR>;
+ larb = <&larb0 &larb1 &larb2 &larb3 &larb4 &larb5>;
+ #iommu-cells = <1>;
+ };
+
apmixedsys: apmixedsys@10209000 {
compatible = "mediatek,mt8173-apmixedsys";
reg = <0 0x10209000 0 0x1000>;
@@ -258,6 +275,68 @@
clock-names = "baud", "bus";
status = "disabled";
};
+
+ larb0:larb@14021000 {
+ compatible = "mediatek,mt8173-smi-larb";
+ reg = <0 0x14021000 0 0x1000>;
+ smi = <&smi_common>;
+ clocks = <&mmsys MM_SMI_LARB0>,
+ <&mmsys MM_SMI_LARB0>;
+ clock-names = "apb", "smi";
+ };
+
+ smi_common:smi@14022000 {
+ compatible = "mediatek,mt8173-smi";
+ reg = <0 0x14022000 0 0x1000>;
+ clocks = <&mmsys MM_SMI_COMMON>,
+ <&mmsys MM_SMI_COMMON>;
+ clock-names = "apb", "smi";
+ };
+
+ larb4:larb@14027000 {
+ compatible = "mediatek,mt8173-smi-larb";
+ reg = <0 0x14027000 0 0x1000>;
+ smi = <&smi_common>;
+ clocks = <&mmsys MM_SMI_LARB4>,
+ <&mmsys MM_SMI_LARB4>;
+ clock-names = "apb", "smi";
+ };
+
+ larb2:larb@15001000 {
+ compatible = "mediatek,mt8173-smi-larb";
+ reg = <0 0x15001000 0 0x1000>;
+ smi = <&smi_common>;
+ clocks = <&imgsys IMG_LARB2_SMI>,
+ <&imgsys IMG_LARB2_SMI>;
+ clock-names = "apb", "smi";
+ };
+
+ larb1:larb@16010000 {
+ compatible = "mediatek,mt8173-smi-larb";
+ reg = <0 0x16010000 0 0x1000>;
+ smi = <&smi_common>;
+ clocks = <&vdecsys VDEC_CKEN>,
+ <&vdecsys VDEC_LARB_CKEN>;
+ clock-names = "apb", "smi";
+ };
+
+ larb3:larb@18001000 {
+ compatible = "mediatek,mt8173-smi-larb";
+ reg = <0 0x18001000 0 0x1000>;
+ smi = <&smi_common>;
+ clocks = <&vencsys VENC_CKE1>,
+ <&vencsys VENC_CKE0>;
+ clock-names = "apb", "smi";
+ };
+
+ larb5:larb@19001000 {
+ compatible = "mediatek,mt8173-smi-larb";
+ reg = <0 0x19001000 0 0x1000>;
+ smi = <&smi_common>;
+ clocks = <&vencltsys VENCLT_CKE1>,
+ <&vencltsys VENCLT_CKE0>;
+ clock-names = "apb", "smi";
+ };
};

};
--
1.8.1.1.dirty

2015-05-15 10:02:25

by Yong Wu (吴勇)

[permalink] [raw]
Subject: Re: [PATCH v2 2/6] dt-bindings: mediatek: Add smi dts binding

On Fri, 2015-05-15 at 17:43 +0800, Yong Wu wrote:
> This patch add smi binding document.
>
> Signed-off-by: Yong Wu <[email protected]>
> ---
> .../bindings/soc/mediatek/mediatek,smi-larb.txt | 24 ++++++++++++++++++++++
> .../bindings/soc/mediatek/mediatek,smi.txt | 22 ++++++++++++++++++++
> 2 files changed, 46 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/soc/mediatek/mediatek,smi-larb.txt
> create mode 100644 Documentation/devicetree/bindings/soc/mediatek/mediatek,smi.txt
>
> diff --git a/Documentation/devicetree/bindings/soc/mediatek/mediatek,smi-larb.txt b/Documentation/devicetree/bindings/soc/mediatek/mediatek,smi-larb.txt
> new file mode 100644
> index 0000000..c06c5b6
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/soc/mediatek/mediatek,smi-larb.txt
> @@ -0,0 +1,24 @@
> +SMI(Smart Multimedia Interface) Local Arbiter
> +
> +The hardware block diagram please check bindings/iommu/mediatek,iommu.txt
> +
> +Required properties:
> +- compatible : must be "mediatek,mt8173-smi-larb"
> +- reg : the register and size of each local arbiter.
> +- smi : must be "&smi_common". Refer to bindings/soc/mediatek,smi.txt.
> +- clocks : must contain one entry for each clock-names.
> + There are 2 clockes:
> + APB clock : Advanced Peripheral Bus clock, It's the clock for setting
> + the register.
> + SMI clock : It's the clock for transfer data and command.
> +- clock-name: must be "apb" and "smi".
Sorry, this should be clock-names. I will fix it in the next version.
> +
> +Example:
> + larb0:larb@14021000 {
> + compatible = "mediatek,mt8173-smi-larb";
> + reg = <0 0x14021000 0 0x1000>;
> + smi = <&smi_common>;
> + clocks = <&mmsys MM_SMI_LARB0>,
> + <&mmsys MM_SMI_LARB0>;
> + clock-names = "apb", "smi";
> + };
> diff --git a/Documentation/devicetree/bindings/soc/mediatek/mediatek,smi.txt b/Documentation/devicetree/bindings/soc/mediatek/mediatek,smi.txt
> new file mode 100644
> index 0000000..c2389b4
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/soc/mediatek/mediatek,smi.txt
> @@ -0,0 +1,22 @@
> +SMI(Smart Multimedia Interface)
> +
> +The hardware block diagram please check bindings/iommu/mediatek,iommu.txt
> +
> +Required properties:
> +- compatible : must be "mediatek,mt8173-smi"
> +- reg : the register and size of the smi.
> +- clocks : must contain one entry for each clock-names.
> + There are 2 clockes:
> + APB clock : Advanced Peripheral Bus clock, It's the clock for setting
> + the register.
> + SMI clock : It's the clock for transfer data and command.
> +- clock-name: must be "apb" and "smi".
The same here: it should be clock-names.
> +
> +Example:
> + smi_common:smi@14022000 {
> + compatible = "mediatek,mt8173-smi";
> + reg = <0 0x14022000 0 0x1000>;
> + clocks = <&mmsys MM_SMI_COMMON>,
> + <&mmsys MM_SMI_COMMON>;
> + clock-names = "apb", "smi";
> + };

2015-05-15 15:31:05

by Robin Murphy

[permalink] [raw]
Subject: Re: [PATCH v2 3/6] iommu: add ARM short descriptor page table allocator.

Oops, seems I'm rather behind on things - I started this review on the
RFC, but I'll finish it here...

On 15/05/15 10:43, Yong Wu wrote:
> This patch is for ARM Short Descriptor Format.It has 2-levels
> pagetable and the allocator supports 4K/64K/1M/16M.
>

From the look of the code, this doesn't fully support partial unmaps
(i.e. splitting block entries), am I right? That's OK for DMA-API use,
since that doesn't permit partial unmaps anyway, but I'd say it's worth
making it clear that that's still a TODO in order for short-descriptor
mappings to fully support arbitrary raw IOMMU API usage.

> Signed-off-by: Yong Wu <[email protected]>
> ---
> drivers/iommu/Kconfig | 7 +
> drivers/iommu/Makefile | 1 +
> drivers/iommu/io-pgtable-arm-short.c | 490 +++++++++++++++++++++++++++++++++++
> drivers/iommu/io-pgtable.c | 4 +
> drivers/iommu/io-pgtable.h | 6 +
> 5 files changed, 508 insertions(+)
> create mode 100644 drivers/iommu/io-pgtable-arm-short.c
>
> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index 1ae4e54..3d2eac6 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -39,6 +39,13 @@ config IOMMU_IO_PGTABLE_LPAE_SELFTEST
>
> If unsure, say N here.
>
> +config IOMMU_IO_PGTABLE_SHORT
> + bool "ARMv7/v8 Short Descriptor Format"
> + select IOMMU_IO_PGTABLE
> + help
> + Enable support for the ARM Short descriptor pagetable format.
> + It has 2-levels pagetable and The allocator supports 4K/64K/1M/16M.
> +
> endmenu
>
> config IOMMU_IOVA
> diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
> index 080ffab..815b3c8 100644
> --- a/drivers/iommu/Makefile
> +++ b/drivers/iommu/Makefile
> @@ -3,6 +3,7 @@ obj-$(CONFIG_IOMMU_API) += iommu-traces.o
> obj-$(CONFIG_IOMMU_API) += iommu-sysfs.o
> obj-$(CONFIG_IOMMU_IO_PGTABLE) += io-pgtable.o
> obj-$(CONFIG_IOMMU_IO_PGTABLE_LPAE) += io-pgtable-arm.o
> +obj-$(CONFIG_IOMMU_IO_PGTABLE_SHORT) += io-pgtable-arm-short.o
> obj-$(CONFIG_IOMMU_IOVA) += iova.o
> obj-$(CONFIG_OF_IOMMU) += of_iommu.o
> obj-$(CONFIG_MSM_IOMMU) += msm_iommu.o msm_iommu_dev.o
> diff --git a/drivers/iommu/io-pgtable-arm-short.c b/drivers/iommu/io-pgtable-arm-short.c
> new file mode 100644
> index 0000000..cc286ce5
> --- /dev/null
> +++ b/drivers/iommu/io-pgtable-arm-short.c
> @@ -0,0 +1,490 @@
> +/*
> + * Copyright (c) 2014-2015 MediaTek Inc.
> + * Author: Yong Wu <[email protected]>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + */
> +#define pr_fmt(fmt) "arm-short-desc io-pgtable: "fmt
> +
> +#include <linux/err.h>
> +#include <linux/mm.h>
> +#include <linux/iommu.h>
> +#include <linux/errno.h>

Alphabetically-sorted includes, please. Also, this list doesn't look
particularly correct - e.g. I don't think you're actually using anything
from mm.h, but you are relying on stuff from kernel.h, slab.h, gfp.h,
etc. being pulled in indirectly.

> +
> +#include "io-pgtable.h"
> +
> +typedef u32 arm_short_iopte;
> +
> +struct arm_short_io_pgtable {
> + struct io_pgtable iop;
> + struct kmem_cache *ptekmem;
> + size_t pgd_size;
> + void *pgd;
> +};
> +
> +#define io_pgtable_short_to_data(x) \
> + container_of((x), struct arm_short_io_pgtable, iop)
> +
> +#define io_pgtable_ops_to_pgtable(x) \
> + container_of((x), struct io_pgtable, ops)

This macro may as well be factored out into io-pgtable.h before
duplication spreads any further. I don't see any reason for it not to
live alongside the definition of struct io_pgtable, anyway.
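
For illustration, it could just sit next to the struct io_pgtable
definition in io-pgtable.h (sketch only, same macro as in the patch):

	/* shared by all io-pgtable format implementations */
	#define io_pgtable_ops_to_pgtable(x)	\
		container_of((x), struct io_pgtable, ops)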

> +
> +#define io_pgtable_short_ops_to_data(x) \
> + io_pgtable_short_to_data(io_pgtable_ops_to_pgtable(x))
> +
> +#define ARM_SHORT_MAX_ADDR_BITS 32
> +
> +#define ARM_SHORT_PGDIR_SHIFT 20
> +#define ARM_SHORT_PAGE_SHIFT 12
> +#define ARM_SHORT_PTRS_PER_PTE 256
> +#define ARM_SHORT_BYTES_PER_PTE 1024
> +
> +/* 1 level pagetable */
> +#define ARM_SHORT_F_PGD_TYPE_PAGE (0x1)
> +#define ARM_SHORT_F_PGD_TYPE_PAGE_MSK (0x3)
> +#define ARM_SHORT_F_PGD_TYPE_SECTION (0x2)
> +#define ARM_SHORT_F_PGD_TYPE_SUPERSECTION (0x2 | (1 << 18))
> +#define ARM_SHORT_F_PGD_TYPE_SECTION_MSK (0x3 | (1 << 18))
> +#define ARM_SHORT_F_PGD_TYPE_IS_PAGE(pgd) (((pgd) & 0x3) == 1)

This confused me on first glance looking at the places it's used,
because it's not actually referring to a thing which is a page. Maybe
..._IS_TABLE would be a better name?

> +#define ARM_SHORT_F_PGD_TYPE_IS_SECTION(pgd) \
> + (((pgd) & ARM_SHORT_F_PGD_TYPE_SECTION_MSK) \
> + == ARM_SHORT_F_PGD_TYPE_SECTION)
> +#define ARM_SHORT_F_PGD_TYPE_IS_SUPERSECTION(pgd) \
> + (((pgd) & ARM_SHORT_F_PGD_TYPE_SECTION_MSK) \
> + == ARM_SHORT_F_PGD_TYPE_SUPERSECTION)
> +
> +#define ARM_SHORT_F_PGD_B_BIT BIT(2)
> +#define ARM_SHORT_F_PGD_C_BIT BIT(3)
> +#define ARM_SHORT_F_PGD_IMPLE_BIT BIT(9)
> +#define ARM_SHORT_F_PGD_S_BIT BIT(16)
> +#define ARM_SHORT_F_PGD_NG_BIT BIT(17)
> +#define ARM_SHORT_F_PGD_NS_BIT_PAGE BIT(3)
> +#define ARM_SHORT_F_PGD_NS_BIT_SECTION BIT(19)
> +
> +#define ARM_SHORT_F_PGD_PA_PAGETABLE_MSK 0xfffffc00
> +#define ARM_SHORT_F_PGD_PA_SECTION_MSK 0xfff00000
> +#define ARM_SHORT_F_PGD_PA_SUPERSECTION_MSK 0xff000000
> +
> +/* 2 level pagetable */
> +#define ARM_SHORT_F_PTE_TYPE_GET(val) ((val) & 0x3)
> +#define ARM_SHORT_F_PTE_TYPE_LARGE BIT(0)
> +#define ARM_SHORT_F_PTE_TYPE_SMALL BIT(1)
> +#define ARM_SHORT_F_PTE_B_BIT BIT(2)
> +#define ARM_SHORT_F_PTE_C_BIT BIT(3)
> +#define ARM_SHORT_F_PTE_IMPLE_BIT BIT(9)
> +#define ARM_SHORT_F_PTE_S_BIT BIT(10)
> +#define ARM_SHORT_F_PTE_PA_LARGE_MSK 0xffff0000
> +#define ARM_SHORT_F_PTE_PA_SMALL_MSK 0xfffff000
> +
> +#define ARM_SHORT_PGD_IDX(a) ((a) >> ARM_SHORT_PGDIR_SHIFT)
> +#define ARM_SHORT_PTE_IDX(a) \
> + (((a) >> ARM_SHORT_PAGE_SHIFT) & 0xff)
> +#define ARM_SHORT_GET_PTE_VA(pgd) \
> + (phys_to_virt((unsigned long)pgd & ARM_SHORT_F_PGD_PA_PAGETABLE_MSK))
> +
> +static arm_short_iopte *
> +arm_short_get_pte_in_pgd(arm_short_iopte curpgd, unsigned int iova)
> +{
> + arm_short_iopte *pte;
> +
> + pte = ARM_SHORT_GET_PTE_VA(curpgd);
> + pte += ARM_SHORT_PTE_IDX(iova);
> + return pte;
> +}
> +
> +static arm_short_iopte *
> +arm_short_supersection_start(arm_short_iopte *pgd)
> +{
> + return (arm_short_iopte *)(round_down((unsigned long)pgd, (16 * 4)));
> +}
> +
> +static int _arm_short_check_free_pte(struct arm_short_io_pgtable *data,
> + arm_short_iopte *pgd)

Given that this is only returning success/failure, it should probably be
bool rather than int.

> +{
> + arm_short_iopte *pte;
> + int i;
> +
> + pte = ARM_SHORT_GET_PTE_VA(*pgd);
> +
> + for (i = 0; i < ARM_SHORT_PTRS_PER_PTE; i++) {
> + if (pte[i] != 0)
> + return 1;
> + }
> +
> + /* Free PTE */
> + kmem_cache_free(data->ptekmem, pte);
> + *pgd = 0;
> +
> + return 0;
> +}
> +
> +static phys_addr_t arm_short_iova_to_phys(struct io_pgtable_ops *ops,
> + unsigned long iova)
> +{
> + struct arm_short_io_pgtable *data = io_pgtable_short_ops_to_data(ops);
> + arm_short_iopte *pte, *pgd = data->pgd;
> + phys_addr_t pa = 0;
> +
> + pgd += ARM_SHORT_PGD_IDX(iova);
> +
> + if (ARM_SHORT_F_PGD_TYPE_IS_PAGE(*pgd)) {
> + u8 pte_type;
> +
> + pte = arm_short_get_pte_in_pgd(*pgd, iova);
> + pte_type = ARM_SHORT_F_PTE_TYPE_GET(*pte);
> +
> + if (pte_type == ARM_SHORT_F_PTE_TYPE_LARGE) {
> + pa = (*pte) & ARM_SHORT_F_PTE_PA_LARGE_MSK;
> + pa |= iova & (~ARM_SHORT_F_PTE_PA_LARGE_MSK);
> + } else if (pte_type == ARM_SHORT_F_PTE_TYPE_SMALL) {
> + pa = (*pte) & ARM_SHORT_F_PTE_PA_SMALL_MSK;
> + pa |= iova & (~ARM_SHORT_F_PTE_PA_SMALL_MSK);
> + }
> + } else {
> + if (ARM_SHORT_F_PGD_TYPE_IS_SECTION(*pgd)) {
> + pa = (*pgd) & ARM_SHORT_F_PGD_PA_SECTION_MSK;
> + pa |= iova & (~ARM_SHORT_F_PGD_PA_SECTION_MSK);
> + } else if (ARM_SHORT_F_PGD_TYPE_IS_SUPERSECTION(*pgd)) {
> + pa = (*pgd) & ARM_SHORT_F_PGD_PA_SUPERSECTION_MSK;
> + pa |= iova & (~ARM_SHORT_F_PGD_PA_SUPERSECTION_MSK);
> + }
> + }
> +
> + return pa;
> +}
> +
> +static int arm_short_unmap(struct io_pgtable_ops *ops, unsigned long iova,
> + size_t size)
> +{
> + struct arm_short_io_pgtable *data = io_pgtable_short_ops_to_data(ops);
> + arm_short_iopte *pgd;
> + unsigned long iova_start = iova;
> + unsigned long long end_plus_1 = iova + size;

Since everything's at page granularity, working with IOVA PFNs rather
than raw addresses might be more convenient, and also sidesteps the
32-bit overflow problem. On 64-bit platforms, we're wasting a whole 95
bits of a long long here ;)
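
Completely untested sketch of what I mean (pfn, pfn_end and num_cleaned
are made-up names): work in units of the format's 4K page so the end of
the range fits in an unsigned long even on 32-bit builds:

	unsigned long pfn = iova >> ARM_SHORT_PAGE_SHIFT;
	unsigned long pfn_end = pfn + (size >> ARM_SHORT_PAGE_SHIFT);

	while (pfn < pfn_end) {
		unsigned long cur_iova = pfn << ARM_SHORT_PAGE_SHIFT;

		/* ... clear the pgd/pte entries covering cur_iova,
		 * exactly as the current loop body does ... */

		pfn += num_cleaned;	/* 4K pages cleared this time round */
	}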

> + const struct iommu_gather_ops *tlb = data->iop.cfg.tlb;
> + void *cookie = data->iop.cookie;
> + int ret;
> +
> + do {
> + pgd = (arm_short_iopte *)data->pgd + ARM_SHORT_PGD_IDX(iova);
> +
> + if (ARM_SHORT_F_PGD_TYPE_IS_PAGE(*pgd)) {
> + arm_short_iopte *pte;
> + unsigned int pte_offset;
> + unsigned int num_to_clean;
> +
> + pte_offset = ARM_SHORT_PTE_IDX(iova);
> + num_to_clean =
> + min((unsigned int)((end_plus_1 - iova) / PAGE_SIZE),

Shouldn't this be page size for the IOMMU, not the CPU? I'm a bit slow
today, but this looks like it might go wrong when PAGE_SIZE > 4K.
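
i.e. something like (untested):

	/* use the format's 4K granule rather than the CPU's PAGE_SIZE */
	num_to_clean =
		min((unsigned int)((end_plus_1 - iova) >> ARM_SHORT_PAGE_SHIFT),
		    ARM_SHORT_PTRS_PER_PTE - pte_offset);

and similarly "iova += num_to_clean << ARM_SHORT_PAGE_SHIFT;" further down.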

> + (ARM_SHORT_PTRS_PER_PTE - pte_offset));
> +
> + pte = arm_short_get_pte_in_pgd(*pgd, iova);
> +
> + memset(pte, 0, num_to_clean * sizeof(arm_short_iopte));
> +
> + ret = _arm_short_check_free_pte(data, pgd);
> + if (ret == 1)/* pte is not freed, need to flush pte */
> + tlb->flush_pgtable(
> + pte,
> + num_to_clean * sizeof(arm_short_iopte),
> + cookie);
> + else
> + tlb->flush_pgtable(pgd, sizeof(arm_short_iopte),
> + cookie);
> +
> + iova += num_to_clean << PAGE_SHIFT;
> + } else if (ARM_SHORT_F_PGD_TYPE_IS_SECTION(*pgd)) {
> + *pgd = 0;
> +
> + tlb->flush_pgtable(pgd, sizeof(arm_short_iopte),
> + cookie);
> + iova += SZ_1M;
> + } else if (ARM_SHORT_F_PGD_TYPE_IS_SUPERSECTION(*pgd)) {
> + arm_short_iopte *start;
> +
> + start = arm_short_supersection_start(pgd);
> + if (unlikely(start != pgd))
> + pr_warn("%s:suppersection start isn't aligned.iova=0x%lx,pgd=0x%x\n",

Nit: typo in "supersection" here.

> + __func__, iova, *pgd);
> +
> + memset(start, 0, 16 * sizeof(arm_short_iopte));
> +
> + tlb->flush_pgtable(start, 16 * sizeof(arm_short_iopte),
> + cookie);
> +
> + iova = (iova + SZ_16M) & (~(SZ_16M - 1));

iova = ALIGN(iova + SZ_16M, SZ_16M);

Except that being unaligned in the first place is an error condition, so
it would make more sense to just have "iova += SZ_16M;" here, and put
"iova = ALIGN(iova, SZ_16M) after the warning in the error path above.

Since you don't handle splitting block mappings, and you also seem to be
missing an equivalent warning for unaligned second-level large pages, it
might be better to simply return an error if the requested size and
alignment don't exactly match the type of entry found, rather than let
the page tables get into an unexpectedly inconsistent state.
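
e.g. something along these lines (untested sketch; assumes returning an
error from unmap is acceptable to the callers):

		} else if (ARM_SHORT_F_PGD_TYPE_IS_SUPERSECTION(*pgd)) {
			/* reject mismatched requests rather than zeroing
			 * a block the caller didn't fully ask for */
			if (WARN_ON(!IS_ALIGNED(iova, SZ_16M) ||
				    end_plus_1 - iova < SZ_16M))
				return -EINVAL;

			memset(pgd, 0, 16 * sizeof(arm_short_iopte));
			tlb->flush_pgtable(pgd, 16 * sizeof(arm_short_iopte),
					   cookie);
			iova += SZ_16M;
		}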

> + } else {
> + break;
> + }
> + } while (iova < end_plus_1 && iova);
> +
> + tlb->tlb_add_flush(iova_start, size, true, cookie);
> +
> + return 0;
> +}
> +
> +static arm_short_iopte __arm_short_pte_port(unsigned int prot, bool large)

I assume _port is a typo of _prot

> +{
> + arm_short_iopte pteprot;
> +
> + pteprot = ARM_SHORT_F_PTE_S_BIT;
> +
> + pteprot |= large ? ARM_SHORT_F_PTE_TYPE_LARGE :
> + ARM_SHORT_F_PTE_TYPE_SMALL;
> +
> + if (prot & IOMMU_CACHE)
> + pteprot |= ARM_SHORT_F_PTE_B_BIT | ARM_SHORT_F_PTE_C_BIT;
> +
> + return pteprot;
> +}
> +
> +static arm_short_iopte __arm_short_pgd_port(int prot, bool super)

Ditto

> +{
> + arm_short_iopte pgdprot;
> +
> + pgdprot = ARM_SHORT_F_PGD_S_BIT;
> + pgdprot |= super ? ARM_SHORT_F_PGD_TYPE_SUPERSECTION :
> + ARM_SHORT_F_PGD_TYPE_SECTION;
> + if (prot & IOMMU_CACHE)
> + pgdprot |= ARM_SHORT_F_PGD_C_BIT | ARM_SHORT_F_PGD_B_BIT;
> +
> + return pgdprot;
> +}
> +
> +static int _arm_short_map_page(struct arm_short_io_pgtable *data,
> + unsigned int iova, phys_addr_t pa,
> + unsigned int prot, bool largepage)
> +{
> + arm_short_iopte *pgd = data->pgd;
> + arm_short_iopte *pte;
> + arm_short_iopte pgdprot, pteprot;
> + arm_short_iopte mask = largepage ? ARM_SHORT_F_PTE_PA_LARGE_MSK :
> + ARM_SHORT_F_PTE_PA_SMALL_MSK;
> + int i, ptenum = largepage ? 16 : 1;
> + bool ptenew = false;
> + void *pte_new_va;
> + void *cookie = data->iop.cookie;
> +
> + if ((iova | pa) & (~mask)) {
> + pr_err("IOVA|PA Not Aligned(iova=0x%x pa=0x%pa type=%s)\n",
> + iova, &pa, largepage ? "large page" : "small page");

Nit: you may as well just have largepage ? "large" : "small" here and
"...type=%s page..." in the format string.

> + return -EINVAL;
> + }
> +
> + pgdprot = ARM_SHORT_F_PGD_TYPE_PAGE;
> + if (data->iop.cfg.quirks & IO_PGTABLE_QUIRK_ARM_NS)
> + pgdprot |= ARM_SHORT_F_PGD_NS_BIT_PAGE;
> +
> + pgd += ARM_SHORT_PGD_IDX(iova);
> +
> + if (!(*pgd)) {
> + pte_new_va = kmem_cache_zalloc(data->ptekmem, GFP_KERNEL);
> + if (unlikely(!pte_new_va)) {
> + pr_err("Failed to alloc pte\n");

The allocator should already print the details of a failure, so I don't
think this adds much.

> + return -ENOMEM;
> + }
> +
> + /* Check pte alignment -- must 1K align */
> + if (unlikely((unsigned long)pte_new_va &
> + (ARM_SHORT_BYTES_PER_PTE - 1))) {
> + pr_err("The new pte is not aligned! (va=0x%p)\n",
> + pte_new_va);
> + kmem_cache_free(data->ptekmem, (void *)pte_new_va);
> + return -ENOMEM;
> + }

Can this ever actually happen? Besides, if kmem_cache_alloc isn't
honouring the alignment specified in kmem_cache_create, I think the
kernel might have bigger problems anyway...

> + ptenew = true;
> + *pgd = virt_to_phys(pte_new_va) | pgdprot;
> + kmemleak_ignore(pte_new_va);
> + data->iop.cfg.tlb->flush_pgtable(pgd, sizeof(arm_short_iopte),
> + cookie);
> + } else {
> + /* Someone else may have allocated for this pgd */
> + if (((*pgd) & (~ARM_SHORT_F_PGD_PA_PAGETABLE_MSK)) != pgdprot) {
> + pr_err("The prot of old pgd is not Right!iova=0x%x pgd=0x%x pgprot=0x%x\n",
> + iova, (*pgd), pgdprot);
> + return -EEXIST;
> + }
> + }
> +
> + pteprot = (arm_short_iopte)pa;
> + pteprot |= __arm_short_pte_port(prot, largepage);
> +
> + pte = arm_short_get_pte_in_pgd(*pgd, iova);
> +
> + pr_debug("iova:0x%x,pte:0x%p(0x%x),prot:0x%x-%s\n",
> + iova, pte, ARM_SHORT_PTE_IDX(iova), pteprot,
> + largepage ? "large page" : "small page");

String redundancy nit again, assuming we actually need the debug output
at all.

> +
> + for (i = 0; i < ptenum; i++) {
> + if (pte[i]) {
> + pr_err("The To-Map pte exists!(iova=0x%x pte=0x%x i=%d)\n",
> + iova, pte[i], i);
> + goto err_out;
> + }
> + pte[i] = pteprot;
> + }
> +
> + data->iop.cfg.tlb->flush_pgtable(pte, ptenum * sizeof(arm_short_iopte),
> + cookie);
> + return 0;
> +
> + err_out:
> + for (i--; i >= 0; i--)

Probably bikeshedding, but I actually had to stop and think to work out
that this wasn't anything more special than while(i--)...
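
i.e. the error path could just read:

	while (i--)
		pte[i] = 0;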

> + pte[i] = 0;
> + if (ptenew)
> + kmem_cache_free(data->ptekmem, pte_new_va);
> + return -EEXIST;
> +}
> +
> +static int _arm_short_map_section(struct arm_short_io_pgtable *data,
> + unsigned int iova, phys_addr_t pa,
> + int prot, bool supersection)
> +{
> + arm_short_iopte pgprot;
> + arm_short_iopte mask = supersection ?
> + ARM_SHORT_F_PGD_PA_SUPERSECTION_MSK :
> + ARM_SHORT_F_PGD_PA_SECTION_MSK;
> + arm_short_iopte *pgd = data->pgd;
> + int i;
> + unsigned int pgdnum = supersection ? 16 : 1;
> +
> + if ((iova | pa) & (~mask)) {
> + pr_err("IOVA|PA Not Aligned(iova=0x%x pa=0x%pa type=%s)\n",
> + iova, &pa, supersection ? "supersection" : "section");
> + return -EINVAL;
> + }
> +
> + pgprot = (arm_short_iopte)pa;
> +
> + if (data->iop.cfg.quirks & IO_PGTABLE_QUIRK_ARM_NS)
> + pgprot |= ARM_SHORT_F_PGD_NS_BIT_SECTION;
> +
> + pgprot |= __arm_short_pgd_port(prot, supersection);
> +
> + pgd += ARM_SHORT_PGD_IDX(iova);
> +
> + pr_debug("iova:0x%x,pgd:0x%p(0x%p+0x%x),value:0x%x-%s\n",
> + iova, pgd, data->pgd, ARM_SHORT_PGD_IDX(iova),
> + pgprot, supersection ? "supersection" : "section");
> +
> + for (i = 0; i < pgdnum; i++) {
> + if (unlikely(*pgd)) {
> + pr_err("The To-Map pdg exists!(iova=0x%x pgd=0x%x i=%d)\n",

Typo of "pgd" here

> + iova, pgd[i], i);
> + goto err_out;
> + }
> + pgd[i] = pgprot;
> + }
> + data->iop.cfg.tlb->flush_pgtable(pgd,
> + pgdnum * sizeof(arm_short_iopte),
> + data->iop.cookie);
> + return 0;
> +
> + err_out:
> + for (i--; i >= 0; i--)
> + pgd[i] = 0;
> + return -EEXIST;
> +}
> +
> +static int arm_short_map(struct io_pgtable_ops *ops, unsigned long iova,
> + phys_addr_t paddr, size_t size, int prot)
> +{
> + struct arm_short_io_pgtable *data = io_pgtable_short_ops_to_data(ops);
> + const struct iommu_gather_ops *tlb = data->iop.cfg.tlb;
> + int ret;
> +
> + if (!(prot & (IOMMU_READ | IOMMU_WRITE)))
> + return -EINVAL;
> +
> + if (size == SZ_4K) {/* most case */
> + ret = _arm_short_map_page(data, iova, paddr, prot, false);
> + } else if (size == SZ_64K) {
> + ret = _arm_short_map_page(data, iova, paddr, prot, true);
> + } else if (size == SZ_1M) {
> + ret = _arm_short_map_section(data, iova, paddr, prot, false);
> + } else if (size == SZ_16M) {
> + ret = _arm_short_map_section(data, iova, paddr, prot, true);
> + } else {
> + ret = -EINVAL;
> + }

I think this might be nicer as a switch statement. You could perhaps
move the add_flush beforehand (since it's unconditional here anyway) and
get rid of ret altogether.
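
Something like this, completely untested:

	/* the flush is unconditional, so it can move up front */
	tlb->tlb_add_flush(iova, size, true, data->iop.cookie);

	switch (size) {
	case SZ_4K:		/* most common case */
		return _arm_short_map_page(data, iova, paddr, prot, false);
	case SZ_64K:
		return _arm_short_map_page(data, iova, paddr, prot, true);
	case SZ_1M:
		return _arm_short_map_section(data, iova, paddr, prot, false);
	case SZ_16M:
		return _arm_short_map_section(data, iova, paddr, prot, true);
	default:
		return -EINVAL;
	}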

Alternatively, given that map_page and map_section are so similar, maybe
it's worth sorting out the pgprot and pgd/pte pointer for the
page/section distinction here, then just passing those to a single
function which maps compound/non-compound leaf entries.

> + tlb->tlb_add_flush(iova, size, true, data->iop.cookie);
> + return ret;
> +}
> +
> +static struct io_pgtable *
> +arm_short_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
> +{
> + struct arm_short_io_pgtable *data;
> +
> + if (cfg->ias != 32)
> + return NULL;
> +
> + if (cfg->oas > ARM_SHORT_MAX_ADDR_BITS)
> + return NULL;
> +
> + cfg->pgsize_bitmap &= SZ_4K | SZ_64K | SZ_1M | SZ_16M;
> +
> + data = kzalloc(sizeof(*data), GFP_KERNEL);
> + if (!data)
> + return NULL;
> +
> + data->pgd_size = SZ_16K;
> +
> + data->pgd = alloc_pages_exact(data->pgd_size, GFP_KERNEL | __GFP_ZERO);

I think this needs __GFP_DMA too, to ensure the pgd lies below the 4GB
boundary on arm64/LPAE systems with memory above that.
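
i.e. (assuming there are no other zone constraints to worry about):

	data->pgd = alloc_pages_exact(data->pgd_size,
				      GFP_KERNEL | __GFP_ZERO | __GFP_DMA);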

> + if (!data->pgd)
> + goto out_free_data;
> +
> + cfg->tlb->flush_pgtable(data->pgd, data->pgd_size, cookie);
> +
> + /* kmem for pte */
> + data->ptekmem = kmem_cache_create("short-descriptor-pte",
> + ARM_SHORT_BYTES_PER_PTE,
> + ARM_SHORT_BYTES_PER_PTE,
> + 0, NULL);
> +
> + if (IS_ERR_OR_NULL(data->ptekmem)) {
> + pr_err("Failed to Create cached mem for PTE %ld\n",
> + PTR_ERR(data->ptekmem));
> + goto out_free_pte;
> + }
> +
> + /* TTBRs */
> + cfg->arm_short_cfg.ttbr[0] = virt_to_phys(data->pgd);
> + cfg->arm_short_cfg.ttbr[1] = 0;
> +
> + cfg->arm_short_cfg.tcr = 0;
> +
> + data->iop.ops = (struct io_pgtable_ops) {
> + .map = arm_short_map,
> + .unmap = arm_short_unmap,
> + .iova_to_phys = arm_short_iova_to_phys,
> + };
> +
> + return &data->iop;
> +
> +out_free_pte:
> + free_pages_exact(data->pgd, data->pgd_size);
> +out_free_data:
> + kfree(data);
> + return NULL;
> +}
> +
> +static void arm_short_free_pgtable(struct io_pgtable *iop)
> +{
> + struct arm_short_io_pgtable *data = io_pgtable_short_to_data(iop);
> +
> + kmem_cache_destroy(data->ptekmem);
> + free_pages_exact(data->pgd, data->pgd_size);
> + kfree(data);
> +}
> +
> +struct io_pgtable_init_fns io_pgtable_arm_short_init_fns = {
> + .alloc = arm_short_alloc_pgtable,
> + .free = arm_short_free_pgtable,
> +};
> +
> diff --git a/drivers/iommu/io-pgtable.c b/drivers/iommu/io-pgtable.c
> index 6436fe2..14a9b3a 100644
> --- a/drivers/iommu/io-pgtable.c
> +++ b/drivers/iommu/io-pgtable.c
> @@ -28,6 +28,7 @@ extern struct io_pgtable_init_fns io_pgtable_arm_32_lpae_s1_init_fns;
> extern struct io_pgtable_init_fns io_pgtable_arm_32_lpae_s2_init_fns;
> extern struct io_pgtable_init_fns io_pgtable_arm_64_lpae_s1_init_fns;
> extern struct io_pgtable_init_fns io_pgtable_arm_64_lpae_s2_init_fns;
> +extern struct io_pgtable_init_fns io_pgtable_arm_short_init_fns;
>
> static const struct io_pgtable_init_fns *
> io_pgtable_init_table[IO_PGTABLE_NUM_FMTS] =
> @@ -38,6 +39,9 @@ io_pgtable_init_table[IO_PGTABLE_NUM_FMTS] =
> [ARM_64_LPAE_S1] = &io_pgtable_arm_64_lpae_s1_init_fns,
> [ARM_64_LPAE_S2] = &io_pgtable_arm_64_lpae_s2_init_fns,
> #endif
> +#ifdef CONFIG_IOMMU_IO_PGTABLE_SHORT
> + [ARM_SHORT_DESC] = &io_pgtable_arm_short_init_fns,
> +#endif
> };
>
> struct io_pgtable_ops *alloc_io_pgtable_ops(enum io_pgtable_fmt fmt,
> diff --git a/drivers/iommu/io-pgtable.h b/drivers/iommu/io-pgtable.h
> index 10e32f6..47efaab 100644
> --- a/drivers/iommu/io-pgtable.h
> +++ b/drivers/iommu/io-pgtable.h
> @@ -9,6 +9,7 @@ enum io_pgtable_fmt {
> ARM_32_LPAE_S2,
> ARM_64_LPAE_S1,
> ARM_64_LPAE_S2,
> + ARM_SHORT_DESC,
> IO_PGTABLE_NUM_FMTS,
> };
>
> @@ -62,6 +63,11 @@ struct io_pgtable_cfg {
> u64 vttbr;
> u64 vtcr;
> } arm_lpae_s2_cfg;
> +
> + struct {
> + u64 ttbr[2];
> + u64 tcr;

The ARM ARM defines these all as 32-bit registers for the
short-descriptor format, so I think u32 would be more appropriate -
better to let the compiler truncate things and warn about it, than have
the hardware do it silently at runtime ;)
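
i.e.:

	struct {
		u32	ttbr[2];
		u32	tcr;
	} arm_short_cfg;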

> + } arm_short_cfg;
> };
> };
>
> --
> 1.8.1.1.dirty
>

2015-05-16 07:09:15

by Daniel Kurtz

[permalink] [raw]
Subject: Re: [RFC v2 PATCH 0/6] MT8173 IOMMU SUPPORT

On Fri, May 15, 2015 at 5:43 PM, Yong Wu <[email protected]> wrote:
> This patch adds support for m4u(Multimedia Memory Management Unit),
> Currently it only support the m4u with 2 levels of page table on mt8173.
>
> It is based on Robin Murphy's arm64: IOMMU-backed DMA mapping[1].


Hi Yong, Robin,

The quoted "arm64 iommu-backed DMA mapping" patch set is from 2015-February.
That was over 3 months ago.
What is the status of this work?

Is that still the most recent version of these patches?
If so, is there a plan when an update will be posted?
Or, is there an updated public git repo with a more recent version of
these patches?

My apologies if these questions have been asked and answered before
and/or elsewhere.

Thanks,
-Dan

>
> Please check the hardware block diagram of Mediatek IOMMU.
>
> EMI (External Memory Interface)
> |
> m4u (Multimedia Memory Management Unit)
> |
> smi (Smart Multimedia Interface)
> |
> +---------------+-------
> | |
> | |
> vdec larb disp larb ... SoCs have different local arbiter(larb).
> | |
> | |
> +----+----+ +-----+-----+
> | | | | | | ...
> | | | | | | ...
> | | | | | | ...
> MC PP VLD OVL0 RDMA0 WDMA0 ... There are different ports in each larb.
>
> Normally we specify a local arbiter(larb) for each multimedia hardware like
> display, video decode, video encode and camera. And there are different ports in
> each larb. Take a example, there are some ports like MC, PP, UFO, VLD, AVC_MV,
> PRED_RD, PRED_WR in video larb, all the ports are according to the video hardware.
>
> From the diagram, all the multimedia module connect with m4u via smi.
> SMI is responsible to enable/disable iommu and control the clocks for each local
> arbiter. If we should enable the iommu of video decode, it should config the
> video's ports. And if the video hardware work wether enable/disable iommu,
> it should enable the clock of its larb's clock. So we add a special driver for smi.
>
> [1] http://lists.linuxfoundation.org/pipermail/iommu/2015-February/012236.html
>
> v2:
> -add arm short descriptor support.
> -seperate smi common from smi and change the clock-names according
> to smi HW.
> -delete the hardcode of the port-names in mt8173.
> replace this with larb-portes-nr in dtsi.
> -fix some coding style issues.
>
> v1:
> http://lists.infradead.org/pipermail/linux-mediatek/2015-March/000058.html
>
> Yong Wu (6):
> dt-bindings: iommu: Add binding for mediatek IOMMU
> dt-bindings: mediatek: Add smi dts binding
> iommu: add ARM short descriptor page table allocator.
> iommu/mediatek: Add mt8173 IOMMU driver
> soc: mediatek: Add SMI driver
> dts: mt8173: Add iommu/smi nodes for mt8173
>
> .../devicetree/bindings/iommu/mediatek,iommu.txt | 51 ++
> .../bindings/soc/mediatek/mediatek,smi-larb.txt | 24 +
> .../bindings/soc/mediatek/mediatek,smi.txt | 22 +
> arch/arm64/boot/dts/mediatek/mt8173.dtsi | 79 +++
> drivers/iommu/Kconfig | 18 +
> drivers/iommu/Makefile | 2 +
> drivers/iommu/io-pgtable-arm-short.c | 490 +++++++++++++++
> drivers/iommu/io-pgtable.c | 4 +
> drivers/iommu/io-pgtable.h | 6 +
> drivers/iommu/mtk_iommu.c | 657 +++++++++++++++++++++
> drivers/soc/mediatek/Kconfig | 6 +
> drivers/soc/mediatek/Makefile | 1 +
> drivers/soc/mediatek/mt8173-smi.c | 298 ++++++++++
> include/dt-bindings/iommu/mt8173-iommu-port.h | 112 ++++
> include/linux/mtk-smi.h | 39 ++
> 15 files changed, 1809 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/iommu/mediatek,iommu.txt
> create mode 100644 Documentation/devicetree/bindings/soc/mediatek/mediatek,smi-larb.txt
> create mode 100644 Documentation/devicetree/bindings/soc/mediatek/mediatek,smi.txt
> create mode 100644 drivers/iommu/io-pgtable-arm-short.c
> create mode 100644 drivers/iommu/mtk_iommu.c
> create mode 100644 drivers/soc/mediatek/mt8173-smi.c
> create mode 100644 include/dt-bindings/iommu/mt8173-iommu-port.h
> create mode 100644 include/linux/mtk-smi.h
>
> --
> 1.8.1.1.dirty
>



--
Daniel Kurtz | Software Engineer | [email protected] | 650.204.0722

2015-05-18 12:09:26

by Matthias Brugger

[permalink] [raw]
Subject: Re: [RFC v2 PATCH 0/6] MT8173 IOMMU SUPPORT

Hi Yong,

2015-05-15 11:43 GMT+02:00 Yong Wu <[email protected]>:
> This patch adds support for m4u(Multimedia Memory Management Unit),
> Currently it only support the m4u with 2 levels of page table on mt8173.
>
> It is based on Robin Murphy's arm64: IOMMU-backed DMA mapping[1].
>
> Please check the hardware block diagram of Mediatek IOMMU.
>
> EMI (External Memory Interface)
> |
> m4u (Multimedia Memory Management Unit)
> |
> smi (Smart Multimedia Interface)
> |
> +---------------+-------
> | |
> | |
> vdec larb disp larb ... SoCs have different local arbiter(larb).
> | |
> | |
> +----+----+ +-----+-----+
> | | | | | | ...
> | | | | | | ...
> | | | | | | ...
> MC PP VLD OVL0 RDMA0 WDMA0 ... There are different ports in each larb.
>
> Normally we specify a local arbiter(larb) for each multimedia hardware like
> display, video decode, video encode and camera. And there are different ports in
> each larb. Take a example, there are some ports like MC, PP, UFO, VLD, AVC_MV,
> PRED_RD, PRED_WR in video larb, all the ports are according to the video hardware.
>
> From the diagram, all the multimedia module connect with m4u via smi.
> SMI is responsible to enable/disable iommu and control the clocks for each local
> arbiter. If we should enable the iommu of video decode, it should config the
> video's ports. And if the video hardware work wether enable/disable iommu,
> it should enable the clock of its larb's clock. So we add a special driver for smi.
>
> [1] http://lists.linuxfoundation.org/pipermail/iommu/2015-February/012236.html
>
> v2:
> -add arm short descriptor support.
> -seperate smi common from smi and change the clock-names according
> to smi HW.
> -delete the hardcode of the port-names in mt8173.
> replace this with larb-portes-nr in dtsi.
> -fix some coding style issues.
>
> v1:
> http://lists.infradead.org/pipermail/linux-mediatek/2015-March/000058.html
>
> Yong Wu (6):
> dt-bindings: iommu: Add binding for mediatek IOMMU
> dt-bindings: mediatek: Add smi dts binding
> iommu: add ARM short descriptor page table allocator.
> iommu/mediatek: Add mt8173 IOMMU driver
> soc: mediatek: Add SMI driver
> dts: mt8173: Add iommu/smi nodes for mt8173
>

Some general comments.
It seems that your patches are not based on v4.1-rc1; can you please
fix this in the next version? They don't apply cleanly.
Please use the checkpatch script before sending the next version; I
had some whitespace errors reported when applying the series.

Thanks,
Matthias

--
motzblog.wordpress.com

2015-05-19 11:14:31

by Matthias Brugger

[permalink] [raw]
Subject: Re: [PATCH v2 4/6] soc: mediatek: Add SMI driver

2015-05-15 11:43 GMT+02:00 Yong Wu <[email protected]>:
> This patch add SMI(Smart Multimedia Interface) driver. This driver is
> responsible to enable/disable iommu and control the clocks of each local arbiter.
>
> Signed-off-by: Yong Wu <[email protected]>
> ---
> drivers/soc/mediatek/Kconfig | 6 +
> drivers/soc/mediatek/Makefile | 1 +
> drivers/soc/mediatek/mt8173-smi.c | 298 ++++++++++++++++++++++++++++++++++++++
> include/linux/mtk-smi.h | 39 +++++
> 4 files changed, 344 insertions(+)
> create mode 100644 drivers/soc/mediatek/mt8173-smi.c
> create mode 100644 include/linux/mtk-smi.h
>
> diff --git a/drivers/soc/mediatek/Kconfig b/drivers/soc/mediatek/Kconfig
> index 1d34819..5935ead 100644
> --- a/drivers/soc/mediatek/Kconfig
> +++ b/drivers/soc/mediatek/Kconfig
> @@ -15,3 +15,9 @@ config MTK_SCPSYS
> help
> Say yes here to add support for the MediaTek SCPSYS power domain
> driver.
> +
> +config MTK_SMI
> + bool
> + help
> + Smi help enable/disable iommu in MediaTek SoCs and control the clock
> + for each local arbiter.
> diff --git a/drivers/soc/mediatek/Makefile b/drivers/soc/mediatek/Makefile
> index ce88693..c086261 100644
> --- a/drivers/soc/mediatek/Makefile
> +++ b/drivers/soc/mediatek/Makefile
> @@ -1,2 +1,3 @@
> obj-$(CONFIG_MTK_PMIC_WRAP) += mtk-pmic-wrap.o
> obj-$(CONFIG_MTK_SCPSYS) += mtk-scpsys.o
> +obj-$(CONFIG_MTK_SMI) += mt8173-smi.o
> diff --git a/drivers/soc/mediatek/mt8173-smi.c b/drivers/soc/mediatek/mt8173-smi.c
> new file mode 100644
> index 0000000..67bccec
> --- /dev/null
> +++ b/drivers/soc/mediatek/mt8173-smi.c
> @@ -0,0 +1,298 @@
> +/*
> + * Copyright (c) 2014-2015 MediaTek Inc.
> + * Author: Yong Wu <[email protected]>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + */
> +#include <linux/io.h>
> +#include <linux/interrupt.h>
> +#include <linux/platform_device.h>
> +#include <linux/clk.h>
> +#include <linux/err.h>
> +#include <linux/mm.h>
> +#include <linux/of_address.h>
> +#include <linux/of_irq.h>
> +#include <linux/of_platform.h>
> +#include <linux/mtk-smi.h>
> +
> +#define SMI_LARB_MMU_EN (0xf00)
> +#define F_SMI_MMU_EN(port) (1 << (port))
> +
> +enum {
> + MTK_CLK_APB,
> + MTK_CLK_SMI,
> + MTK_CLK_MAX,

Maybe add something like:
MTK_CLK_FIRST = MTK_CLK_APB,
to make the for loops better readable.

> +};
> +
> +struct mtk_smi_common {
> + void __iomem *base;

That seems to be never used. Please delete it.

> + struct clk *clk[MTK_CLK_MAX];
> +};
> +
> +struct mtk_smi_larb {
> + void __iomem *base;
> + spinlock_t portlock; /* lock for config port */
> + struct clk *clk[MTK_CLK_MAX];
> + struct device *smi;
> +};
> +
> +static const char * const mtk_smi_clk_name[MTK_CLK_MAX] = {
> + "apb", "smi"
> +};
> +
> +static int mtk_smi_common_get(struct device *smidev)
> +{
> + struct mtk_smi_common *smipriv = dev_get_drvdata(smidev);
> + int i, ret;
> +
> + for (i = MTK_CLK_APB; i < MTK_CLK_MAX; i++) {
> + ret = clk_enable(smipriv->clk[i]);
> + if (ret) {
> + dev_err(smidev,
> + "Failed to enable the clock of smi_common\n");
> + goto err_smi_clk;
> + }
> + }
> + return ret;
> +
> +err_smi_clk:
> + if (i == MTK_CLK_SMI)
> + clk_disable(smipriv->clk[MTK_CLK_APB]);
> + return ret;
> +}
> +
> +static void mtk_smi_common_put(struct device *smidev)
> +{
> + struct mtk_smi_common *smipriv = dev_get_drvdata(smidev);
> + int i;
> +
> + for (i = MTK_CLK_SMI; i >= MTK_CLK_APB; i--)
> + clk_disable(smipriv->clk[i]);
> +}
> +
> +int mtk_smi_larb_get(struct device *larbdev)
> +{
> + struct mtk_smi_larb *larbpriv = dev_get_drvdata(larbdev);
> + int i, ret = 0;
> +
> + ret = mtk_smi_common_get(larbpriv->smi);
> + if (ret)
> + return ret;
> +
> + for (i = MTK_CLK_APB; i < MTK_CLK_MAX; i++) {
> + ret = clk_enable(larbpriv->clk[i]);
> + if (ret) {
> + dev_err(larbdev,
> + "Failed to enable larb clock%d. ret 0x%x\n",
> + i, ret);
> + goto err_larb_clk;
> + }
> + }
> +
> + return ret;
> +
> +err_larb_clk:
> + if (i == MTK_CLK_SMI)
> + clk_disable(larbpriv->clk[MTK_CLK_APB]);
> + mtk_smi_common_put(larbpriv->smi);
> + return ret;
> +}
> +
> +void mtk_smi_larb_put(struct device *larbdev)

You can pass mtk_smi_larb as parameter instead of struct device

> +{
> + struct mtk_smi_larb *larbpriv = dev_get_drvdata(larbdev);
> + int i;
> +
> + for (i = MTK_CLK_SMI; i >= MTK_CLK_APB; i--)
> + clk_disable(larbpriv->clk[i]);
> +
> + mtk_smi_common_put(larbpriv->smi);
> +}
> +
> +int mtk_smi_config_port(struct device *larbdev, unsigned int larbportid,
> + bool iommuen)
> +{
> + struct mtk_smi_larb *larbpriv = dev_get_drvdata(larbdev);
> + unsigned long flags;
> + int ret;
> + u32 reg;
> +
> + ret = mtk_smi_larb_get(larbdev);
> + if (ret)
> + return ret;
> +
> + spin_lock_irqsave(&larbpriv->portlock, flags);
> + reg = readl(larbpriv->base + SMI_LARB_MMU_EN);
> + reg &= ~F_SMI_MMU_EN(larbportid);
> + if (iommuen)
> + reg |= F_SMI_MMU_EN(larbportid);
> + writel(reg, larbpriv->base + SMI_LARB_MMU_EN);
> + spin_unlock_irqrestore(&larbpriv->portlock, flags);
> +
> + mtk_smi_larb_put(larbdev);
> +
> + return 0;
> +}
> +
> +static int mtk_smi_larb_probe(struct platform_device *pdev)
> +{
> + struct mtk_smi_larb *larbpriv;
> + struct resource *res;
> + struct device *dev = &pdev->dev;
> + struct device_node *smi_node;
> + struct platform_device *smi_pdev;
> + int i, ret;
> +
> + larbpriv = devm_kzalloc(dev, sizeof(struct mtk_smi_larb), GFP_KERNEL);
> + if (!larbpriv)
> + return -ENOMEM;
> +
> + res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> + larbpriv->base = devm_ioremap_resource(dev, res);
> + if (IS_ERR(larbpriv->base)) {
> + dev_err(dev, "Failed to get the larbbase (0x%lx)\n",
> + PTR_ERR(larbpriv->base));
> + return PTR_ERR(larbpriv->base);
> + }
> +
> + for (i = MTK_CLK_APB; i < MTK_CLK_MAX; i++) {
> + larbpriv->clk[i] = devm_clk_get(dev, mtk_smi_clk_name[i]);
> +
> + if (IS_ERR(larbpriv->clk[i])) {

Please delete the blank line before the if.

> + ret = PTR_ERR(larbpriv->clk[i]);
> + dev_err(dev, "Failed to get larb%d clock (0x%x)\n",
> + i, ret);
> + goto fail_larb_clk;
> + } else {
> + ret = clk_prepare(larbpriv->clk[i]);
> + if (ret) {
> + dev_err(dev, "Failed to prepare larb clock%d (0x%x)\n",
> + i, ret);
> + goto fail_larb_clk;
> + }
> + }
> + }
> +
> + smi_node = of_parse_phandle(dev->of_node, "smi", 0);
> + if (!smi_node) {
> + dev_err(dev, "Failed to get smi node\n");
> + ret = -EINVAL;
> + goto fail_larb_clk;
> + }
> +
> + smi_pdev = of_find_device_by_node(smi_node);
> + of_node_put(smi_node);
> + if (smi_pdev) {
> + larbpriv->smi = &smi_pdev->dev;
> + } else {
> + dev_err(dev, "Failed to get the smi_common device\n");
> + ret = -EINVAL;
> + goto fail_larb_clk;
> + }
> +
> + spin_lock_init(&larbpriv->portlock);
> + dev_set_drvdata(dev, larbpriv);
> + return 0;
> +
> +fail_larb_clk:
> + while (--i >= MTK_CLK_APB)
> + clk_unprepare(larbpriv->clk[i]);
> + return ret;
> +}
> +
> +static const struct of_device_id mtk_smi_larb_of_ids[] = {
> + { .compatible = "mediatek,mt8173-smi-larb",
> + },
> + {}
> +};
> +
> +static struct platform_driver mtk_smi_larb_driver = {
> + .probe = mtk_smi_larb_probe,
> + .driver = {
> + .name = "mtksmilarb",

Huh, what about something more readable like mtk-smi-larb?

> + .of_match_table = mtk_smi_larb_of_ids,
> + }
> +};
> +
> +static int mtk_smi_probe(struct platform_device *pdev)
> +{
> + struct device *dev = &pdev->dev;
> + struct mtk_smi_common *smipriv;
> + int ret, i;
> +
> + smipriv = devm_kzalloc(dev, sizeof(*smipriv), GFP_KERNEL);
> + if (!smipriv)
> + return -ENOMEM;
> +
> + for (i = MTK_CLK_APB; i < MTK_CLK_MAX; i++) {
> + smipriv->clk[i] = devm_clk_get(dev, mtk_smi_clk_name[i]);
> +
> + if (IS_ERR(smipriv->clk[i])) {
> + ret = PTR_ERR(smipriv->clk[i]);
> + dev_err(dev, "Failed to get smi-%s clock (0x%x)\n",
> + mtk_smi_clk_name[i], ret);
> + goto fail_smi_clk;
> + } else {
> + ret = clk_prepare(smipriv->clk[i]);
> + if (ret) {
> + dev_err(dev, "Failed to prepare smi%d clock 0x%x\n",
> + i, ret);
> + goto fail_smi_clk;
> + }
> + }
> + }
> +
> + dev_set_drvdata(dev, smipriv);
> + return ret;
> +
> +fail_smi_clk:
> + if (i == MTK_CLK_SMI)
> + clk_unprepare(smipriv->clk[MTK_CLK_APB]);
> + return ret;
> +}
> +
> +static const struct of_device_id mtk_smi_of_ids[] = {
> + { .compatible = "mediatek,mt8173-smi",
> + },
> + {}
> +};
> +
> +static struct platform_driver mtk_smi_driver = {
> + .probe = mtk_smi_probe,
> + .driver = {
> + .name = "mtksmi",

Same here.

Thanks,
Matthias

2015-05-21 06:16:39

by Yong Wu (吴勇)

[permalink] [raw]
Subject: Re: [PATCH v2 4/6] soc: mediatek: Add SMI driver

Hi Matthias,
Thanks very much for your suggestion.
About the smi clock names, could you help check below?
The others I will improve next time.

On Tue, 2015-05-19 at 13:14 +0200, Matthias Brugger wrote:
> 2015-05-15 11:43 GMT+02:00 Yong Wu <[email protected]>:
> > This patch add SMI(Smart Multimedia Interface) driver. This driver is
> > responsible to enable/disable iommu and control the clocks of each local arbiter.
> >
[snip]
> > +
> > +#define SMI_LARB_MMU_EN (0xf00)
> > +#define F_SMI_MMU_EN(port) (1 << (port))
> > +
> > +enum {
> > + MTK_CLK_APB,
> > + MTK_CLK_SMI,
> > + MTK_CLK_MAX,
>
> Maybe add something like:
> MTK_CLK_FIRST = MTK_CLK_APB,
> to make the for loops better readable.
>
Then, Is it like this? :
enum {
MTK_CLK_FIRST = MTK_CLK_APB,
MTK_CLK_SMI,
MTK_CLK_MAX,
}
or the CLK_SMI also need MTK_CLK_SECOND = MTK_CLK_SMI.
> > +};
> > +
> > +struct mtk_smi_common {
> > + void __iomem *base;
>
> That seems to be never used. Please delete it.
>
> > + struct clk *clk[MTK_CLK_MAX];
> > +};
> > +
> > +struct mtk_smi_larb {
> > + void __iomem *base;
> > + spinlock_t portlock; /* lock for config port */
> > + struct clk *clk[MTK_CLK_MAX];
> > + struct device *smi;
> > +};
> > +
> Thanks,
> Matthias

2015-05-21 07:30:24

by Matthias Brugger

[permalink] [raw]
Subject: Re: [PATCH v2 4/6] soc: mediatek: Add SMI driver

2015-05-21 8:16 GMT+02:00 Yong Wu <[email protected]>:
> Hi Matthias,
> Thanks very much for your suggestion.
> Abort the smi clock name, Could you help check below.
> The others I will improve in next time.
>
> On Tue, 2015-05-19 at 13:14 +0200, Matthias Brugger wrote:
>> 2015-05-15 11:43 GMT+02:00 Yong Wu <[email protected]>:
>> > This patch add SMI(Smart Multimedia Interface) driver. This driver is
>> > responsible to enable/disable iommu and control the clocks of each local arbiter.
>> >
> [snip]
>> > +
>> > +#define SMI_LARB_MMU_EN (0xf00)
>> > +#define F_SMI_MMU_EN(port) (1 << (port))
>> > +
>> > +enum {
>> > + MTK_CLK_APB,
>> > + MTK_CLK_SMI,
>> > + MTK_CLK_MAX,
>>
>> Maybe add something like:
>> MTK_CLK_FIRST = MTK_CLK_APB,
>> to make the for loops better readable.
>>
> Then, Is it like this? :
> enum {
> MTK_CLK_FIRST = MTK_CLK_APB,
> MTK_CLK_SMI,
> MTK_CLK_MAX,
> }
> or the CLK_SMI also need MTK_CLK_SECOND = MTK_CLK_SMI.


something like:
enum {
MTK_CLK_FIRST,
MTK_CLK_APB = MTK_CLK_FIRST,
MTK_CLK_SMI,
MTK_CLK_LAST,
}

So you can rewrite the for loop:
for (i = MTK_CLK_FIRST; i < MTK_CLK_LAST; i++)

Regards,
Matthias

>> > +};
>> > +
>> > +struct mtk_smi_common {
>> > + void __iomem *base;
>>
>> That seems to be never used. Please delete it.
>>
>> > + struct clk *clk[MTK_CLK_MAX];
>> > +};
>> > +
>> > +struct mtk_smi_larb {
>> > + void __iomem *base;
>> > + spinlock_t portlock; /* lock for config port */
>> > + struct clk *clk[MTK_CLK_MAX];
>> > + struct device *smi;
>> > +};
>> > +
>> Thanks,
>> Matthias
>
>



--
motzblog.wordpress.com

2015-05-21 07:38:46

by Yong Wu (吴勇)

[permalink] [raw]
Subject: Re: [PATCH v2 4/6] soc: mediatek: Add SMI driver

On Thu, 2015-05-21 at 09:30 +0200, Matthias Brugger wrote:
> 2015-05-21 8:16 GMT+02:00 Yong Wu <[email protected]>:
> > Hi Matthias,
> > Thanks very much for your suggestion.
> > Abort the smi clock name, Could you help check below.
> > The others I will improve in next time.
> >
> > On Tue, 2015-05-19 at 13:14 +0200, Matthias Brugger wrote:
> >> 2015-05-15 11:43 GMT+02:00 Yong Wu <[email protected]>:
> >> > This patch add SMI(Smart Multimedia Interface) driver. This driver is
> >> > responsible to enable/disable iommu and control the clocks of each local arbiter.
> >> >
> > [snip]
> >> > +
> >> > +#define SMI_LARB_MMU_EN (0xf00)
> >> > +#define F_SMI_MMU_EN(port) (1 << (port))
> >> > +
> >> > +enum {
> >> > + MTK_CLK_APB,
> >> > + MTK_CLK_SMI,
> >> > + MTK_CLK_MAX,
> >>
> >> Maybe add something like:
> >> MTK_CLK_FIRST = MTK_CLK_APB,
> >> to make the for loops better readable.
> >>
> > Then, Is it like this? :
> > enum {
> > MTK_CLK_FIRST = MTK_CLK_APB,
> > MTK_CLK_SMI,
> > MTK_CLK_MAX,
> > }
> > or the CLK_SMI also need MTK_CLK_SECOND = MTK_CLK_SMI.
>
>
> something like:
> enum {
> MTK_CLK_FIRST,
> MTK_CLK_APB = MTK_CLK_FIRST,
> MTK_CLK_SMI,
> MTK_CLK_LAST,
> }
>
> So you can rewrite the for loop:
> if (i = MTK_CLK_FIRST; i < MTK_CLK_LAST; i++)
Get it.
Thanks very much.
>
> Regards,
> Matthias
>
> >> > +};
> >> > +
> >> > +struct mtk_smi_common {
> >> > + void __iomem *base;
> >>
> >> That seems to be never used. Please delete it.
> >>
> >> > + struct clk *clk[MTK_CLK_MAX];
> >> > +};
> >> > +
> >> > +struct mtk_smi_larb {
> >> > + void __iomem *base;
> >> > + spinlock_t portlock; /* lock for config port */
> >> > + struct clk *clk[MTK_CLK_MAX];
> >> > + struct device *smi;
> >> > +};
> >> > +
> >> Thanks,
> >> Matthias
> >
> >
>
>
>

2015-05-21 14:33:37

by Daniel Kurtz

[permalink] [raw]
Subject: Re: [PATCH v2 4/6] soc: mediatek: Add SMI driver

On Thu, May 21, 2015 at 3:30 PM, Matthias Brugger
<[email protected]> wrote:
> 2015-05-21 8:16 GMT+02:00 Yong Wu <[email protected]>:
>> Hi Matthias,
>> Thanks very much for your suggestion.
>> Abort the smi clock name, Could you help check below.
>> The others I will improve in next time.
>>
>> On Tue, 2015-05-19 at 13:14 +0200, Matthias Brugger wrote:
>>> 2015-05-15 11:43 GMT+02:00 Yong Wu <[email protected]>:
>>> > This patch add SMI(Smart Multimedia Interface) driver. This driver is
>>> > responsible to enable/disable iommu and control the clocks of each local arbiter.
>>> >
>> [snip]
>>> > +
>>> > +#define SMI_LARB_MMU_EN (0xf00)
>>> > +#define F_SMI_MMU_EN(port) (1 << (port))
>>> > +
>>> > +enum {
>>> > + MTK_CLK_APB,
>>> > + MTK_CLK_SMI,
>>> > + MTK_CLK_MAX,
>>>
>>> Maybe add something like:
>>> MTK_CLK_FIRST = MTK_CLK_APB,
>>> to make the for loops better readable.
>>>
>> Then, Is it like this? :
>> enum {
>> MTK_CLK_FIRST = MTK_CLK_APB,
>> MTK_CLK_SMI,
>> MTK_CLK_MAX,
>> }
>> or the CLK_SMI also need MTK_CLK_SECOND = MTK_CLK_SMI.
>
>
> something like:
> enum {
> MTK_CLK_FIRST,
> MTK_CLK_APB = MTK_CLK_FIRST,
> MTK_CLK_SMI,
> MTK_CLK_LAST,
> }
>
> So you can rewrite the for loop:
> if (i = MTK_CLK_FIRST; i < MTK_CLK_LAST; i++)

Actually, do we ever plan to add more clks per smi node?
If not, perhaps the whole driver would be simpler if you just
explicitly handle the apb & smi clocks:

struct mtk_smi_larb {
void __iomem *base;
spinlock_t portlock; /* lock for config port */
struct device *smi;
struct clk *clk_apb;
struct clk *clk_smi;
};

And then all of the loops become just a pair of clock operations.
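
For example, the larb get path might collapse to something like this
(untested sketch, using the clk_apb/clk_smi members from the struct
above and the existing smi_common helpers):

int mtk_smi_larb_get(struct device *larbdev)
{
	struct mtk_smi_larb *larbpriv = dev_get_drvdata(larbdev);
	int ret;

	ret = mtk_smi_common_get(larbpriv->smi);
	if (ret)
		return ret;

	ret = clk_enable(larbpriv->clk_apb);
	if (ret)
		goto err_put_common;

	ret = clk_enable(larbpriv->clk_smi);
	if (ret)
		goto err_disable_apb;

	return 0;

err_disable_apb:
	clk_disable(larbpriv->clk_apb);
err_put_common:
	mtk_smi_common_put(larbpriv->smi);
	return ret;
}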

Best Regards,
-Dan

>
> Regards,
> Matthias
>
>>> > +};
>>> > +
>>> > +struct mtk_smi_common {
>>> > + void __iomem *base;
>>>
>>> That seems to be never used. Please delete it.
>>>
>>> > + struct clk *clk[MTK_CLK_MAX];
>>> > +};
>>> > +
>>> > +struct mtk_smi_larb {
>>> > + void __iomem *base;
>>> > + spinlock_t portlock; /* lock for config port */
>>> > + struct clk *clk[MTK_CLK_MAX];
>>> > + struct device *smi;
>>> > +};
>>> > +
>>> Thanks,
>>> Matthias
>>
>>
>
>
>
> --
> motzblog.wordpress.com



--
Daniel Kurtz | Software Engineer | [email protected] | 650.204.0722

2015-05-21 14:49:52

by Yong Wu (吴勇)

[permalink] [raw]
Subject: Re: [PATCH v2 4/6] soc: mediatek: Add SMI driver

On Thu, 2015-05-21 at 22:33 +0800, Daniel Kurtz wrote:
> On Thu, May 21, 2015 at 3:30 PM, Matthias Brugger
> <[email protected]> wrote:
> > 2015-05-21 8:16 GMT+02:00 Yong Wu <[email protected]>:
> >> Hi Matthias,
> >> Thanks very much for your suggestion.
> >> Abort the smi clock name, Could you help check below.
> >> The others I will improve in next time.
> >>
> >> On Tue, 2015-05-19 at 13:14 +0200, Matthias Brugger wrote:
> >>> 2015-05-15 11:43 GMT+02:00 Yong Wu <[email protected]>:
> >>> > This patch add SMI(Smart Multimedia Interface) driver. This driver is
> >>> > responsible to enable/disable iommu and control the clocks of each local arbiter.
> >>> >
> >> [snip]
> >>> > +
> >>> > +#define SMI_LARB_MMU_EN (0xf00)
> >>> > +#define F_SMI_MMU_EN(port) (1 << (port))
> >>> > +
> >>> > +enum {
> >>> > + MTK_CLK_APB,
> >>> > + MTK_CLK_SMI,
> >>> > + MTK_CLK_MAX,
> >>>
> >>> Maybe add something like:
> >>> MTK_CLK_FIRST = MTK_CLK_APB,
> >>> to make the for loops better readable.
> >>>
> >> Then, Is it like this? :
> >> enum {
> >> MTK_CLK_FIRST = MTK_CLK_APB,
> >> MTK_CLK_SMI,
> >> MTK_CLK_MAX,
> >> }
> >> or the CLK_SMI also need MTK_CLK_SECOND = MTK_CLK_SMI.
> >
> >
> > something like:
> > enum {
> > MTK_CLK_FIRST,
> > MTK_CLK_APB = MTK_CLK_FIRST,
> > MTK_CLK_SMI,
> > MTK_CLK_LAST,
> > }
> >
> > So you can rewrite the for loop:
> > if (i = MTK_CLK_FIRST; i < MTK_CLK_LAST; i++)
>
> Actually, do we ever plan to add more clks per smi node?
No.
> If not, perhaps the whole driver would be simpler if you just
> explicitly handle the apb & smi clocks:
>
> struct mtk_smi_larb {
> void __iomem *base;
> spinlock_t portlock; /* lock for config port */
> struct device *smi;
> struct clk *clk_apb;
> struct clk *clk_smi;
> };
>
> And then all of the loops become just a pair of clock operations.
Thanks. I will try this and compare whether it will add many lines.
>
> Best Regards,
> -Dan
>
> >
> > Regards,
> > Matthias
> >
> >>> > +};
> >>> > +
> >>> > +struct mtk_smi_common {
> >>> > + void __iomem *base;
> >>>
> >>> That seems to be never used. Please delete it.
> >>>
> >>> > + struct clk *clk[MTK_CLK_MAX];
> >>> > +};
> >>> > +
> >>> > +struct mtk_smi_larb {
> >>> > + void __iomem *base;
> >>> > + spinlock_t portlock; /* lock for config port */
> >>> > + struct clk *clk[MTK_CLK_MAX];
> >>> > + struct device *smi;
> >>> > +};
> >>> > +
> >>> Thanks,
> >>> Matthias
> >>
> > --
> > motzblog.wordpress.com

2015-05-21 15:21:02

by Matthias Brugger

[permalink] [raw]
Subject: Re: [PATCH v2 4/6] soc: mediatek: Add SMI driver

2015-05-21 16:49 GMT+02:00 Yong Wu <[email protected]>:
> On Thu, 2015-05-21 at 22:33 +0800, Daniel Kurtz wrote:
>> On Thu, May 21, 2015 at 3:30 PM, Matthias Brugger
>> <[email protected]> wrote:
>> > 2015-05-21 8:16 GMT+02:00 Yong Wu <[email protected]>:
>> >> Hi Matthias,
>> >> Thanks very much for your suggestion.
>> >> Abort the smi clock name, Could you help check below.
>> >> The others I will improve in next time.
>> >>
>> >> On Tue, 2015-05-19 at 13:14 +0200, Matthias Brugger wrote:
>> >>> 2015-05-15 11:43 GMT+02:00 Yong Wu <[email protected]>:
>> >>> > This patch add SMI(Smart Multimedia Interface) driver. This driver is
>> >>> > responsible to enable/disable iommu and control the clocks of each local arbiter.
>> >>> >
>> >> [snip]
>> >>> > +
>> >>> > +#define SMI_LARB_MMU_EN (0xf00)
>> >>> > +#define F_SMI_MMU_EN(port) (1 << (port))
>> >>> > +
>> >>> > +enum {
>> >>> > + MTK_CLK_APB,
>> >>> > + MTK_CLK_SMI,
>> >>> > + MTK_CLK_MAX,
>> >>>
>> >>> Maybe add something like:
>> >>> MTK_CLK_FIRST = MTK_CLK_APB,
>> >>> to make the for loops better readable.
>> >>>
>> >> Then, Is it like this? :
>> >> enum {
>> >> MTK_CLK_FIRST = MTK_CLK_APB,
>> >> MTK_CLK_SMI,
>> >> MTK_CLK_MAX,
>> >> }
>> >> or the CLK_SMI also need MTK_CLK_SECOND = MTK_CLK_SMI.
>> >
>> >
>> > something like:
>> > enum {
>> > MTK_CLK_FIRST,
>> > MTK_CLK_APB = MTK_CLK_FIRST,
>> > MTK_CLK_SMI,
>> > MTK_CLK_LAST,
>> > }
>> >
>> > So you can rewrite the for loop:
>> > for (i = MTK_CLK_FIRST; i < MTK_CLK_LAST; i++)
>>
>> Actually, do we ever plan to add more clks per smi node?
> No.
>> If not, perhaps the whole driver would be simpler if you just
>> explicitly handle the apb & smi clocks:
>>
>> struct mtk_smi_larb {
>> void __iomem *base;
>> spinlock_t portlock; /* lock for config port */
>> struct device *smi;
>> struct clk *clk_apb;
>> struct clk *clk_smi;
>> };
>>
>> And then all of the loops become just a pair of clock operations.
> Thanks. I will try this and compare whether it will add many lines.

If you don't plan to add more clocks in the future, then I think the
suggestion Dan made is the best one.
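
(For illustration only -- a minimal sketch, not from the posted patch, of
what the pair of clock operations could look like with the explicit
clk_apb/clk_smi fields Dan suggests above; the function names here are
made up:)

#include <linux/clk.h>

static int mtk_smi_larb_clks_enable(struct mtk_smi_larb *larb)
{
	int ret;

	/* Enable the APB (register) clock before the SMI (data) clock */
	ret = clk_prepare_enable(larb->clk_apb);
	if (ret)
		return ret;

	ret = clk_prepare_enable(larb->clk_smi);
	if (ret)
		clk_disable_unprepare(larb->clk_apb);

	return ret;
}

static void mtk_smi_larb_clks_disable(struct mtk_smi_larb *larb)
{
	clk_disable_unprepare(larb->clk_smi);
	clk_disable_unprepare(larb->clk_apb);
}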

Regards,
Matthias
--
motzblog.wordpress.com

2015-05-22 03:14:58

by Yong Wu (吴勇)

[permalink] [raw]
Subject: Re: [PATCH v2 3/6] iommu: add ARM short descriptor page table allocator.

Hi Robin,
Thanks very much for such detailed suggestions.
Please also help check my comments; the others I will change next time.
On Fri, 2015-05-15 at 16:30 +0100, Robin Murphy wrote:
> Oops, seems I'm rather behind on things - I started this review on the
> RFC, but I'll finish it here...
>
> On 15/05/15 10:43, Yong Wu wrote:
> > This patch is for ARM Short Descriptor Format.It has 2-levels
> > pagetable and the allocator supports 4K/64K/1M/16M.
> >
>
> From the look of the code, this doesn't fully support partial unmaps
> (i.e. splitting block entries), am I right? That's OK for DMA-API use,
> since that doesn't permit partial unmaps anyway, but I'd say it's worth
> making it clear that that's still a TODO in order for short-descriptor
> mappings to fully support arbitrary raw IOMMU API usage.
Yes. I haven't added split support right now because I checked that
iommu_map/iommu_unmap make sure iova|pa are aligned.

I will add split for full support in the next version. Thanks.
>
> > Signed-off-by: Yong Wu <[email protected]>
> > ---
> > drivers/iommu/Kconfig | 7 +
> > drivers/iommu/Makefile | 1 +
> > drivers/iommu/io-pgtable-arm-short.c | 490 +++++++++++++++++++++++++++++++++++
> > drivers/iommu/io-pgtable.c | 4 +
> > drivers/iommu/io-pgtable.h | 6 +
> > 5 files changed, 508 insertions(+)
> > create mode 100644 drivers/iommu/io-pgtable-arm-short.c
> >
> > diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> > index 1ae4e54..3d2eac6 100644
> > --- a/drivers/iommu/Kconfig
> > +++ b/drivers/iommu/Kconfig
> > @@ -39,6 +39,13 @@ config IOMMU_IO_PGTABLE_LPAE_SELFTEST
> >
> > If unsure, say N here.
> >
> > +config IOMMU_IO_PGTABLE_SHORT
> > + bool "ARMv7/v8 Short Descriptor Format"
> > + select IOMMU_IO_PGTABLE
> > + help
> > + Enable support for the ARM Short descriptor pagetable format.
> > + It has 2-levels pagetable and The allocator supports 4K/64K/1M/16M.
> > +
> > endmenu
> >
> > config IOMMU_IOVA
> > diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
> > index 080ffab..815b3c8 100644
> > --- a/drivers/iommu/Makefile
> > +++ b/drivers/iommu/Makefile
> > @@ -3,6 +3,7 @@ obj-$(CONFIG_IOMMU_API) += iommu-traces.o
> > obj-$(CONFIG_IOMMU_API) += iommu-sysfs.o
> > obj-$(CONFIG_IOMMU_IO_PGTABLE) += io-pgtable.o
> > obj-$(CONFIG_IOMMU_IO_PGTABLE_LPAE) += io-pgtable-arm.o
> > +obj-$(CONFIG_IOMMU_IO_PGTABLE_SHORT) += io-pgtable-arm-short.o
> > obj-$(CONFIG_IOMMU_IOVA) += iova.o
> > obj-$(CONFIG_OF_IOMMU) += of_iommu.o
> > obj-$(CONFIG_MSM_IOMMU) += msm_iommu.o msm_iommu_dev.o
> > diff --git a/drivers/iommu/io-pgtable-arm-short.c b/drivers/iommu/io-pgtable-arm-short.c
> > new file mode 100644
> > index 0000000..cc286ce5
> > --- /dev/null
> > +++ b/drivers/iommu/io-pgtable-arm-short.c
> > @@ -0,0 +1,490 @@
> > +/*
> > + * Copyright (c) 2014-2015 MediaTek Inc.
> > + * Author: Yong Wu <[email protected]>
> > + *
> > + * This program is free software; you can redistribute it and/or modify
> > + * it under the terms of the GNU General Public License version 2 as
> > + * published by the Free Software Foundation.
> > + *
> > + * This program is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> > + * GNU General Public License for more details.
> > + */
> > +#define pr_fmt(fmt) "arm-short-desc io-pgtable: "fmt
> > +
> > +#include <linux/err.h>
> > +#include <linux/mm.h>
> > +#include <linux/iommu.h>
> > +#include <linux/errno.h>
>
> Alphabetically-sorted includes, please. Also, this list doesn't look
> particularly correct - e.g. I don't think you're actually using anything
> from mm.h, but you are relying on stuff from kernel.h, slab.h, gfp.h,
> etc. being pulled in indirectly.
>
> > +
> > +#include "io-pgtable.h"
> > +
> > +typedef u32 arm_short_iopte;
> > +
> > +struct arm_short_io_pgtable {
> > + struct io_pgtable iop;
> > + struct kmem_cache *ptekmem;
> > + size_t pgd_size;
> > + void *pgd;
> > +};
> > +
> > +#define io_pgtable_short_to_data(x) \
> > + container_of((x), struct arm_short_io_pgtable, iop)
> > +
> > +#define io_pgtable_ops_to_pgtable(x) \
> > + container_of((x), struct io_pgtable, ops)
>
> This macro may as well be factored out into io-pgtable.h before
> duplication spreads any further. I don't see any reason for it not to
> live alongside the definition of struct io_pgtable, anyway.
Thanks. I will move it into io-pgtable.h.
Then I think the duplicate should also be deleted from io-pgtable-arm.c.
>
> > +
> > +#define io_pgtable_short_ops_to_data(x) \
> > + io_pgtable_short_to_data(io_pgtable_ops_to_pgtable(x))
> > +
> > +#define ARM_SHORT_MAX_ADDR_BITS 32
> > +
> > +#define ARM_SHORT_PGDIR_SHIFT 20
> > +#define ARM_SHORT_PAGE_SHIFT 12
> > +#define ARM_SHORT_PTRS_PER_PTE 256
> > +#define ARM_SHORT_BYTES_PER_PTE 1024
> > +
> > +/* 1 level pagetable */
> > +#define ARM_SHORT_F_PGD_TYPE_PAGE (0x1)
> > +#define ARM_SHORT_F_PGD_TYPE_PAGE_MSK (0x3)
> > +#define ARM_SHORT_F_PGD_TYPE_SECTION (0x2)
> > +#define ARM_SHORT_F_PGD_TYPE_SUPERSECTION (0x2 | (1 << 18))
> > +#define ARM_SHORT_F_PGD_TYPE_SECTION_MSK (0x3 | (1 << 18))
> > +#define ARM_SHORT_F_PGD_TYPE_IS_PAGE(pgd) (((pgd) & 0x3) == 1)
>
> This confused me on first glance looking at the places it's used,
> because it's not actually referring to a thing which is a page. Maybe
> ..._IS_TABLE would be a better name?
Yes, that is better. From the spec, it is a "page table".
Then how about "ARM_SHORT_F_PGD_TYPE_IS_PAGETABLE"?
It is a little long, but only 2 lines use it and neither goes over
80 characters even with "_IS_PAGETABLE".
>
> > +#define ARM_SHORT_F_PGD_TYPE_IS_SECTION(pgd) \
> > + (((pgd) & ARM_SHORT_F_PGD_TYPE_SECTION_MSK) \
> > + == ARM_SHORT_F_PGD_TYPE_SECTION)
> > +#define ARM_SHORT_F_PGD_TYPE_IS_SUPERSECTION(pgd) \
> > + (((pgd) & ARM_SHORT_F_PGD_TYPE_SECTION_MSK) \
> > + == ARM_SHORT_F_PGD_TYPE_SUPERSECTION)
> > +
> > +#define ARM_SHORT_F_PGD_B_BIT BIT(2)
> > +#define ARM_SHORT_F_PGD_C_BIT BIT(3)
> > +#define ARM_SHORT_F_PGD_IMPLE_BIT BIT(9)
> > +#define ARM_SHORT_F_PGD_S_BIT BIT(16)
> > +#define ARM_SHORT_F_PGD_NG_BIT BIT(17)
> > +#define ARM_SHORT_F_PGD_NS_BIT_PAGE BIT(3)
> > +#define ARM_SHORT_F_PGD_NS_BIT_SECTION BIT(19)
> > +
> > +#define ARM_SHORT_F_PGD_PA_PAGETABLE_MSK 0xfffffc00
> > +#define ARM_SHORT_F_PGD_PA_SECTION_MSK 0xfff00000
> > +#define ARM_SHORT_F_PGD_PA_SUPERSECTION_MSK 0xff000000
> > +
> > +/* 2 level pagetable */
> > +#define ARM_SHORT_F_PTE_TYPE_GET(val) ((val) & 0x3)
> > +#define ARM_SHORT_F_PTE_TYPE_LARGE BIT(0)
> > +#define ARM_SHORT_F_PTE_TYPE_SMALL BIT(1)
> > +#define ARM_SHORT_F_PTE_B_BIT BIT(2)
> > +#define ARM_SHORT_F_PTE_C_BIT BIT(3)
> > +#define ARM_SHORT_F_PTE_IMPLE_BIT BIT(9)
> > +#define ARM_SHORT_F_PTE_S_BIT BIT(10)
> > +#define ARM_SHORT_F_PTE_PA_LARGE_MSK 0xffff0000
> > +#define ARM_SHORT_F_PTE_PA_SMALL_MSK 0xfffff000
> > +
> > +#define ARM_SHORT_PGD_IDX(a) ((a) >> ARM_SHORT_PGDIR_SHIFT)
> > +#define ARM_SHORT_PTE_IDX(a) \
> > + (((a) >> ARM_SHORT_PAGE_SHIFT) & 0xff)
> > +#define ARM_SHORT_GET_PTE_VA(pgd) \
> > + (phys_to_virt((unsigned long)pgd & ARM_SHORT_F_PGD_PA_PAGETABLE_MSK))
> > +
> > +static arm_short_iopte *
> > +arm_short_get_pte_in_pgd(arm_short_iopte curpgd, unsigned int iova)
> > +{
> > + arm_short_iopte *pte;
> > +
> > + pte = ARM_SHORT_GET_PTE_VA(curpgd);
> > + pte += ARM_SHORT_PTE_IDX(iova);
> > + return pte;
> > +}
> > +
> > +static arm_short_iopte *
> > +arm_short_supersection_start(arm_short_iopte *pgd)
> > +{
> > + return (arm_short_iopte *)(round_down((unsigned long)pgd, (16 * 4)));
> > +}
> > +
> > +static int _arm_short_check_free_pte(struct arm_short_io_pgtable *data,
> > + arm_short_iopte *pgd)
>
> Given that this is only returning success/failure, it should probably be
> bool rather than int.
Thanks.
>
> > +{
> > + arm_short_iopte *pte;
> > + int i;
> > +
> > + pte = ARM_SHORT_GET_PTE_VA(*pgd);
> > +
> > + for (i = 0; i < ARM_SHORT_PTRS_PER_PTE; i++) {
> > + if (pte[i] != 0)
> > + return 1;
> > + }
> > +
> > + /* Free PTE */
> > + kmem_cache_free(data->ptekmem, pte);
> > + *pgd = 0;
> > +
> > + return 0;
> > +}
> > +
> > +static phys_addr_t arm_short_iova_to_phys(struct io_pgtable_ops *ops,
> > + unsigned long iova)
> > +{
> > + struct arm_short_io_pgtable *data = io_pgtable_short_ops_to_data(ops);
> > + arm_short_iopte *pte, *pgd = data->pgd;
> > + phys_addr_t pa = 0;
> > +
> > + pgd += ARM_SHORT_PGD_IDX(iova);
> > +
> > + if (ARM_SHORT_F_PGD_TYPE_IS_PAGE(*pgd)) {
> > + u8 pte_type;
> > +
> > + pte = arm_short_get_pte_in_pgd(*pgd, iova);
> > + pte_type = ARM_SHORT_F_PTE_TYPE_GET(*pte);
> > +
> > + if (pte_type == ARM_SHORT_F_PTE_TYPE_LARGE) {
> > + pa = (*pte) & ARM_SHORT_F_PTE_PA_LARGE_MSK;
> > + pa |= iova & (~ARM_SHORT_F_PTE_PA_LARGE_MSK);
> > + } else if (pte_type == ARM_SHORT_F_PTE_TYPE_SMALL) {
> > + pa = (*pte) & ARM_SHORT_F_PTE_PA_SMALL_MSK;
> > + pa |= iova & (~ARM_SHORT_F_PTE_PA_SMALL_MSK);
> > + }
> > + } else {
> > + if (ARM_SHORT_F_PGD_TYPE_IS_SECTION(*pgd)) {
> > + pa = (*pgd) & ARM_SHORT_F_PGD_PA_SECTION_MSK;
> > + pa |= iova & (~ARM_SHORT_F_PGD_PA_SECTION_MSK);
> > + } else if (ARM_SHORT_F_PGD_TYPE_IS_SUPERSECTION(*pgd)) {
> > + pa = (*pgd) & ARM_SHORT_F_PGD_PA_SUPERSECTION_MSK;
> > + pa |= iova & (~ARM_SHORT_F_PGD_PA_SUPERSECTION_MSK);
> > + }
> > + }
> > +
> > + return pa;
> > +}
> > +
> > +static int arm_short_unmap(struct io_pgtable_ops *ops, unsigned long iova,
> > + size_t size)
> > +{
> > + struct arm_short_io_pgtable *data = io_pgtable_short_ops_to_data(ops);
> > + arm_short_iopte *pgd;
> > + unsigned long iova_start = iova;
> > + unsigned long long end_plus_1 = iova + size;
>
> Since everything's at page granularity, working with IOVA PFNs rather
> than raw addresses might be more convenient, and also sidesteps the
> 32-bit overflow problem. On 64-bit platforms, we're wasting a whole 95
> bits of a long long here ;)
About IOVA PFNs: if we add them, we have to include "iova.h", which adds a
new dependency on another module, and I am not sure that is OK.
I also don't see it used in io-pgtable-arm.c, so I don't plan to add IOVA
PFNs -- is that OK?

The iova here is always 32-bit, but iova + size may overflow 32 bits, so I
use "long long" here. "long long" is always 64-bit on both 32-bit and
64-bit platforms, whereas "long" may be wrong on a 32-bit platform.

I will add split support and try to delete this in the next version.
>
> > + const struct iommu_gather_ops *tlb = data->iop.cfg.tlb;
> > + void *cookie = data->iop.cookie;
> > + int ret;
> > +
> > + do {
> > + pgd = (arm_short_iopte *)data->pgd + ARM_SHORT_PGD_IDX(iova);
> > +
> > + if (ARM_SHORT_F_PGD_TYPE_IS_PAGE(*pgd)) {
> > + arm_short_iopte *pte;
> > + unsigned int pte_offset;
> > + unsigned int num_to_clean;
> > +
> > + pte_offset = ARM_SHORT_PTE_IDX(iova);
> > + num_to_clean =
> > + min((unsigned int)((end_plus_1 - iova) / PAGE_SIZE),
>
> Shouldn't this be page size for the IOMMU, not the CPU? I'm a bit slow
> today, but this looks like it might go wrong when PAGE_SIZE > 4K.
Thanks. Then I will add a define like this:
#define IOMMU_PAGE_SIZE_4K SZ_4K
and put it into io-pgtable.h; io-pgtable-arm.c may also need it.
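
(A sketch of how the affected lines in arm_short_unmap might then read,
assuming the IOMMU always uses 4K granules regardless of the CPU PAGE_SIZE:)

	num_to_clean = min_t(unsigned int,
			     (end_plus_1 - iova) / IOMMU_PAGE_SIZE_4K,
			     ARM_SHORT_PTRS_PER_PTE - pte_offset);
	/* ... clear and flush the PTEs as before ... */
	iova += num_to_clean * IOMMU_PAGE_SIZE_4K;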
>
> > + (ARM_SHORT_PTRS_PER_PTE - pte_offset));
> > +
> > + pte = arm_short_get_pte_in_pgd(*pgd, iova);
> > +
> > + memset(pte, 0, num_to_clean * sizeof(arm_short_iopte));
> > +
> > + ret = _arm_short_check_free_pte(data, pgd);
> > + if (ret == 1)/* pte is not freed, need to flush pte */
> > + tlb->flush_pgtable(
> > + pte,
> > + num_to_clean * sizeof(arm_short_iopte),
> > + cookie);
> > + else
> > + tlb->flush_pgtable(pgd, sizeof(arm_short_iopte),
> > + cookie);
> > +
> > + iova += num_to_clean << PAGE_SHIFT;
> > + } else if (ARM_SHORT_F_PGD_TYPE_IS_SECTION(*pgd)) {
> > + *pgd = 0;
> > +
> > + tlb->flush_pgtable(pgd, sizeof(arm_short_iopte),
> > + cookie);
> > + iova += SZ_1M;
> > + } else if (ARM_SHORT_F_PGD_TYPE_IS_SUPERSECTION(*pgd)) {
> > + arm_short_iopte *start;
> > +
> > + start = arm_short_supersection_start(pgd);
> > + if (unlikely(start != pgd))
> > + pr_warn("%s:suppersection start isn't aligned.iova=0x%lx,pgd=0x%x\n",
>
> Nit: typo in "supersection" here.
>
> > + __func__, iova, *pgd);
> > +
> > + memset(start, 0, 16 * sizeof(arm_short_iopte));
> > +
> > + tlb->flush_pgtable(start, 16 * sizeof(arm_short_iopte),
> > + cookie);
> > +
> > + iova = (iova + SZ_16M) & (~(SZ_16M - 1));
>
> iova = ALIGN(iova + SZ_16M, SZ_16M);
>
> Except that being unaligned in the first place is an error condition, so
> it would make more sense to just have "iova += SZ_16M;" here, and put
> "iova = ALIGN(iova, SZ_16M) after the warning in the error path above.
>
> Since you don't handle splitting block mappings, and you also seem to be
> missing an equivalent warning for unaligned second-level large pages, it
> might be better to simply return an error if the requested size and
> alignment don't exactly match the type of entry found, rather than let
> the page tables get into an unexpectedly inconsistent state.
OK. I will add split support and add the alignment check of the start iova
for each case.
>
> > + } else {
> > + break;
> > + }
> > + } while (iova < end_plus_1 && iova);
> > +
> > + tlb->tlb_add_flush(iova_start, size, true, cookie);
> > +
> > + return 0;
> > +}
> > +
> > +static arm_short_iopte __arm_short_pte_port(unsigned int prot, bool large)
>
> I assume _port is a typo of _prot
>
> > +{
> > + arm_short_iopte pteprot;
> > +
> > + pteprot = ARM_SHORT_F_PTE_S_BIT;
> > +
> > + pteprot |= large ? ARM_SHORT_F_PTE_TYPE_LARGE :
> > + ARM_SHORT_F_PTE_TYPE_SMALL;
> > +
> > + if (prot & IOMMU_CACHE)
> > + pteprot |= ARM_SHORT_F_PTE_B_BIT | ARM_SHORT_F_PTE_C_BIT;
> > +
> > + return pteprot;
> > +}
> > +
> > +static arm_short_iopte __arm_short_pgd_port(int prot, bool super)
>
> Ditto
>
> > +{
> > + arm_short_iopte pgdprot;
> > +
> > + pgdprot = ARM_SHORT_F_PGD_S_BIT;
> > + pgdprot |= super ? ARM_SHORT_F_PGD_TYPE_SUPERSECTION :
> > + ARM_SHORT_F_PGD_TYPE_SECTION;
> > + if (prot & IOMMU_CACHE)
> > + pgdprot |= ARM_SHORT_F_PGD_C_BIT | ARM_SHORT_F_PGD_B_BIT;
> > +
> > + return pgdprot;
> > +}
> > +
> > +static int _arm_short_map_page(struct arm_short_io_pgtable *data,
> > + unsigned int iova, phys_addr_t pa,
> > + unsigned int prot, bool largepage)
> > +{
> > + arm_short_iopte *pgd = data->pgd;
> > + arm_short_iopte *pte;
> > + arm_short_iopte pgdprot, pteprot;
> > + arm_short_iopte mask = largepage ? ARM_SHORT_F_PTE_PA_LARGE_MSK :
> > + ARM_SHORT_F_PTE_PA_SMALL_MSK;
> > + int i, ptenum = largepage ? 16 : 1;
> > + bool ptenew = false;
> > + void *pte_new_va;
> > + void *cookie = data->iop.cookie;
> > +
> > + if ((iova | pa) & (~mask)) {
> > + pr_err("IOVA|PA Not Aligned(iova=0x%x pa=0x%pa type=%s)\n",
> > + iova, &pa, largepage ? "large page" : "small page");
>
> Nit: you may as well just have largepage ? "large" : "small" here and
> "...type=%s page..." in the format string.
>
> > + return -EINVAL;
> > + }
> > +
> > + pgdprot = ARM_SHORT_F_PGD_TYPE_PAGE;
> > + if (data->iop.cfg.quirks & IO_PGTABLE_QUIRK_ARM_NS)
> > + pgdprot |= ARM_SHORT_F_PGD_NS_BIT_PAGE;
> > +
> > + pgd += ARM_SHORT_PGD_IDX(iova);
> > +
> > + if (!(*pgd)) {
> > + pte_new_va = kmem_cache_zalloc(data->ptekmem, GFP_KERNEL);
> > + if (unlikely(!pte_new_va)) {
> > + pr_err("Failed to alloc pte\n");
>
> The allocator should already print the details of a failure, so I don't
> think this adds much.
>
> > + return -ENOMEM;
> > + }
> > +
> > + /* Check pte alignment -- must 1K align */
> > + if (unlikely((unsigned long)pte_new_va &
> > + (ARM_SHORT_BYTES_PER_PTE - 1))) {
> > + pr_err("The new pte is not aligned! (va=0x%p)\n",
> > + pte_new_va);
> > + kmem_cache_free(data->ptekmem, (void *)pte_new_va);
> > + return -ENOMEM;
> > + }
>
> Can this ever actually happen? Besides, if kmem_cache_alloc isn't
> honouring the alignment specified in kmem_cache_create, I think the
> kernel might have bigger problems anyway...
Thanks. I will delete it.
>
> > + ptenew = true;
> > + *pgd = virt_to_phys(pte_new_va) | pgdprot;
> > + kmemleak_ignore(pte_new_va);
> > + data->iop.cfg.tlb->flush_pgtable(pgd, sizeof(arm_short_iopte),
> > + cookie);
> > + } else {
> > + /* Someone else may have allocated for this pgd */
> > + if (((*pgd) & (~ARM_SHORT_F_PGD_PA_PAGETABLE_MSK)) != pgdprot) {
> > + pr_err("The prot of old pgd is not Right!iova=0x%x pgd=0x%x pgprot=0x%x\n",
> > + iova, (*pgd), pgdprot);
> > + return -EEXIST;
> > + }
> > + }
> > +
> > + pteprot = (arm_short_iopte)pa;
> > + pteprot |= __arm_short_pte_port(prot, largepage);
> > +
[snip]
> > +static int arm_short_map(struct io_pgtable_ops *ops, unsigned long iova,
> > + phys_addr_t paddr, size_t size, int prot)
> > +{
> > + struct arm_short_io_pgtable *data = io_pgtable_short_ops_to_data(ops);
> > + const struct iommu_gather_ops *tlb = data->iop.cfg.tlb;
> > + int ret;
> > +
> > + if (!(prot & (IOMMU_READ | IOMMU_WRITE)))
> > + return -EINVAL;
> > +
> > + if (size == SZ_4K) {/* most case */
> > + ret = _arm_short_map_page(data, iova, paddr, prot, false);
> > + } else if (size == SZ_64K) {
> > + ret = _arm_short_map_page(data, iova, paddr, prot, true);
> > + } else if (size == SZ_1M) {
> > + ret = _arm_short_map_section(data, iova, paddr, prot, false);
> > + } else if (size == SZ_16M) {
> > + ret = _arm_short_map_section(data, iova, paddr, prot, true);
> > + } else {
> > + ret = -EINVAL;
> > + }
>
> I think this might be nicer as a switch statement. You could perhaps
> move the add_flush beforehand (since it's unconditional here anyway) and
> get rid of ret altogether.
Move the add_flush before the map?
If we change the pagetable first and then add_flush, is that better?
>
> Alternatively, given that map_page and map_section are so similar, maybe
> it's worth sorting out the pgprot and pgd/pte pointer for the
> page/section distinction here, then just passing those to a single
> function which maps compound/non-compound leaf entries.
OK. I will try to merge them into one function like _arm_short_map.
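
(For reference, a sketch of the switch-statement shape Robin suggests,
reusing the helpers from the posted patch; whether the flush should move
before the page table update is left as discussed above:)

static int arm_short_map(struct io_pgtable_ops *ops, unsigned long iova,
			 phys_addr_t paddr, size_t size, int prot)
{
	struct arm_short_io_pgtable *data = io_pgtable_short_ops_to_data(ops);
	const struct iommu_gather_ops *tlb = data->iop.cfg.tlb;
	int ret;

	if (!(prot & (IOMMU_READ | IOMMU_WRITE)))
		return -EINVAL;

	switch (size) {
	case SZ_4K:		/* most common case */
		ret = _arm_short_map_page(data, iova, paddr, prot, false);
		break;
	case SZ_64K:
		ret = _arm_short_map_page(data, iova, paddr, prot, true);
		break;
	case SZ_1M:
		ret = _arm_short_map_section(data, iova, paddr, prot, false);
		break;
	case SZ_16M:
		ret = _arm_short_map_section(data, iova, paddr, prot, true);
		break;
	default:
		return -EINVAL;
	}

	tlb->tlb_add_flush(iova, size, true, data->iop.cookie);
	return ret;
}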
>
> > + tlb->tlb_add_flush(iova, size, true, data->iop.cookie);
> > + return ret;
> > +}
> > +
> > +static struct io_pgtable *
> > +arm_short_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
> > +{
> > + struct arm_short_io_pgtable *data;
> > +
> > + if (cfg->ias != 32)
> > + return NULL;
> > +
> > + if (cfg->oas > ARM_SHORT_MAX_ADDR_BITS)
> > + return NULL;
> > +
> > + cfg->pgsize_bitmap &= SZ_4K | SZ_64K | SZ_1M | SZ_16M;
> > +
> > + data = kzalloc(sizeof(*data), GFP_KERNEL);
> > + if (!data)
> > + return NULL;
> > +
> > + data->pgd_size = SZ_16K;
> > +
> > + data->pgd = alloc_pages_exact(data->pgd_size, GFP_KERNEL | __GFP_ZERO);
>
> I think this needs __GFP_DMA too, to ensure the pgd lies below the 4GB
> boundary on arm64/LPAE systems with memory above that.
Thanks.
> > @@ -62,6 +63,11 @@ struct io_pgtable_cfg {
> > u64 vttbr;
> > u64 vtcr;
> > } arm_lpae_s2_cfg;
> > +
> > + struct {
> > + u64 ttbr[2];
> > + u64 tcr;
>
> The ARM ARM defines these all as 32-bit registers for the
> short-descriptor format, so I think u32 would be more appropriate -
> better to let the compiler truncate things and warn about it, than have
> the hardware do it silently at runtime ;)
I will change both of them to u32.
>
> > + } arm_short_cfg;
> > };
> > };
> >
> > --
> > 1.8.1.1.dirty
> >
>

2015-05-25 06:32:12

by Tomasz Figa

[permalink] [raw]
Subject: Re: [PATCH v2 1/6] dt-bindings: iommu: Add binding for mediatek IOMMU

Hi,

Please see my comments inline.

On Fri, May 15, 2015 at 6:43 PM, Yong Wu <[email protected]> wrote:
> This patch add mediatek iommu dts binding document.
>
> Signed-off-by: Yong Wu <[email protected]>
> ---
> .../devicetree/bindings/iommu/mediatek,iommu.txt | 51 ++++++++++
> include/dt-bindings/iommu/mt8173-iommu-port.h | 112 +++++++++++++++++++++
> 2 files changed, 163 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/iommu/mediatek,iommu.txt
> create mode 100644 include/dt-bindings/iommu/mt8173-iommu-port.h
>
> diff --git a/Documentation/devicetree/bindings/iommu/mediatek,iommu.txt b/Documentation/devicetree/bindings/iommu/mediatek,iommu.txt
> new file mode 100644
> index 0000000..f2cc7c0
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/iommu/mediatek,iommu.txt
> @@ -0,0 +1,51 @@
> +/******************************************************/
> +/* Mediatek IOMMU Hardware Block Diagram */
> +/******************************************************/

nit: Could you follow one of the existing styles of DT binding
documentation? You might be able to use [1] as an example.

[1] http://lxr.free-electrons.com/source/Documentation/devicetree/bindings/iommu/arm,smmu.txt

> + EMI (External Memory Interface)
> + |
> + m4u (Multimedia Memory Management Unit)
> + |
> + smi (Smart Multimedia Interface)
> + |
> + +---------------+-------
> + | |
> + | |
> + vdec larb disp larb ... SoCs have different local arbiter(larb).
> + | |
> + | |
> + +----+----+ +-----+-----+
> + | | | | | | ...
> + | | | | | | ...
> + | | | | | | ...
> + MC PP VLD OVL0 RDMA0 WDMA0 ... There are different ports in each larb.
> +

This diagram is still quite meaningless without a proper description of
all the functionality of all the blocks and the interfaces between them.

> +Required properties:
> +- compatible : must be "mediatek,mt8173-m4u".
> +- reg : m4u register base and size.
> +- interrupts : the interrupt of m4u.
> +- clocks : must contain one entry for each clock-names.
> +- clock-names : must be "bclk", It is the block clock of m4u.
> +- larb-portes-nr : must contain the number of the portes for each larb(local
> + arbiter). The number is defined in dt-binding/iommu/mt8173-iommu-port.h.

typo: Should it be "ports" instead of "portes"?

Anyway, shouldn't this be a property of larb nodes? I.e. the larb0
node would have a port-count property set to M4U_LARB0_PORT_NR.

> +- larb : must contain the local arbiters of the current platform. Refer to
> + bindings/soc/mediatek/mediatek,smi.txt. It must sort according to the
> + local arbiter index, like larb0, larb1, larb2...
> +- iommu-cells : must be 1. Specifies the client PortID as defined in
> + dt-binding/iommu/mt8173-iommu-port.h

Looking at the driver, wouldn't a 2 cell system be more appropriate
here? With first cell indexing the larbs and second the ports within
that larb.

[snip]

> +#define M4U_LARB0_PORT_NR 8
> +#define M4U_LARB1_PORT_NR 10
> +#define M4U_LARB2_PORT_NR 21
> +#define M4U_LARB3_PORT_NR 15
> +#define M4U_LARB4_PORT_NR 6
> +#define M4U_LARB5_PORT_NR 9
> +
> +#define M4U_LARB0_PORT(n) (n)
> +#define M4U_LARB1_PORT(n) ((n) + M4U_LARB0_PORT_NR + M4U_LARB0_PORT(0))
> +#define M4U_LARB2_PORT(n) ((n) + M4U_LARB1_PORT_NR + M4U_LARB1_PORT(0))
> +#define M4U_LARB3_PORT(n) ((n) + M4U_LARB2_PORT_NR + M4U_LARB2_PORT(0))
> +#define M4U_LARB4_PORT(n) ((n) + M4U_LARB3_PORT_NR + M4U_LARB3_PORT(0))
> +#define M4U_LARB5_PORT(n) ((n) + M4U_LARB4_PORT_NR + M4U_LARB4_PORT(0))

This looks like some artificial indexing. Does it have any
correspondence with actual hardware configuration?

> +
> +/* larb0 */
> +#define M4U_PORT_DISP_OVL0 M4U_LARB0_PORT(0)
> +#define M4U_PORT_DISP_RDMA0 M4U_LARB0_PORT(1)
> +#define M4U_PORT_DISP_WDMA0 M4U_LARB0_PORT(2)
[snip]
> +#define M4U_PORT_VENC_CUR_CHROMA_SET2 M4U_LARB5_PORT(6)
> +#define M4U_PORT_VENC_RD_COMA_SET2 M4U_LARB5_PORT(7)
> +#define M4U_PORT_VENC_SV_COMA_SET2 M4U_LARB5_PORT(8)

This convinces me even more that this binding actually needs
#iommu-cells to be 2 and the indexing system I described above.

Best regards,
Tomasz

2015-05-25 06:48:59

by Tomasz Figa

[permalink] [raw]
Subject: Re: [PATCH v2 2/6] dt-bindings: mediatek: Add smi dts binding

Hi,

Please see my comments inline.

On Fri, May 15, 2015 at 6:43 PM, Yong Wu <[email protected]> wrote:
> This patch add smi binding document.
>
> Signed-off-by: Yong Wu <[email protected]>
> ---
> .../bindings/soc/mediatek/mediatek,smi-larb.txt | 24 ++++++++++++++++++++++
> .../bindings/soc/mediatek/mediatek,smi.txt | 22 ++++++++++++++++++++
> 2 files changed, 46 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/soc/mediatek/mediatek,smi-larb.txt
> create mode 100644 Documentation/devicetree/bindings/soc/mediatek/mediatek,smi.txt
>
> diff --git a/Documentation/devicetree/bindings/soc/mediatek/mediatek,smi-larb.txt b/Documentation/devicetree/bindings/soc/mediatek/mediatek,smi-larb.txt
> new file mode 100644
> index 0000000..c06c5b6
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/soc/mediatek/mediatek,smi-larb.txt
> @@ -0,0 +1,24 @@
> +SMI(Smart Multimedia Interface) Local Arbiter
> +
> +The hardware block diagram please check bindings/iommu/mediatek,iommu.txt
> +
> +Required properties:
> +- compatible : must be "mediatek,mt8173-smi-larb"
> +- reg : the register and size of each local arbiter.

Shouldn't it be "of this local arbiter"?

> +- smi : must be "&smi_common". Refer to bindings/soc/mediatek,smi.txt.

This is incorrect. Device tree source authors are free to use any
labels for their nodes and documentation should not rely on the fact
that there is some node with some particular label. This should be
something like:

- smi : a phandle to node of XYZ

with proper description of that node in place of XYZ.

> +- clocks : must contain one entry for each clock-names.
> + There are 2 clockes:
> + APB clock : Advanced Peripheral Bus clock, It's the clock for setting
> + the register.
> + SMI clock : It's the clock for transfer data and command.

The description of each clock should go to "clock-names" property.
Please use some of already existing bindings as an example.

> +- clock-name: must be "apb" and "smi".

typo: Should be "clock-names".

I'd recommend describing this property as below:

- clock-names: must contain 2 entries, as follows:
- "apb" : <here comes the description of APB clock>,
- "smi" : <here comes the description of SMI clock>.

> +
> +Example:
> + larb0:larb@14021000 {
> + compatible = "mediatek,mt8173-smi-larb";
> + reg = <0 0x14021000 0 0x1000>;
> + smi = <&smi_common>;
> + clocks = <&mmsys MM_SMI_LARB0>,
> + <&mmsys MM_SMI_LARB0>;
> + clock-names = "apb", "smi";
> + };
> diff --git a/Documentation/devicetree/bindings/soc/mediatek/mediatek,smi.txt b/Documentation/devicetree/bindings/soc/mediatek/mediatek,smi.txt
> new file mode 100644
> index 0000000..c2389b4
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/soc/mediatek/mediatek,smi.txt
> @@ -0,0 +1,22 @@
> +SMI(Smart Multimedia Interface)
> +
> +The hardware block diagram please check bindings/iommu/mediatek,iommu.txt
> +
> +Required properties:
> +- compatible : must be "mediatek,mt8173-smi"
> +- reg : the register and size of the smi.

nit: s/smi/SMI block/

> +- clocks : must contain one entry for each clock-names.
> + There are 2 clockes:
> + APB clock : Advanced Peripheral Bus clock, It's the clock for setting
> + the register.
> + SMI clock : It's the clock for transfer data and command.
> +- clock-name: must be "apb" and "smi".

See my comment for those properties in the first binding.

Best regards,
Tomasz

2015-05-25 08:29:57

by Tomasz Figa

[permalink] [raw]
Subject: Re: [PATCH v2 5/6] iommu/mediatek: Add mt8173 IOMMU driver

Hi,

Please see my comments inline.

On Fri, May 15, 2015 at 6:43 PM, Yong Wu <[email protected]> wrote:
[snip]
> +
> +struct mtk_iommu_info {
> + void __iomem *base;
> + int irq;
> + struct device *dev;
> + struct device *larbdev[MTK_IOMMU_LARB_MAX_NR];
> + struct clk *bclk;
> + dma_addr_t protect_base; /* protect memory base */
> + unsigned int larb_nr; /* local arbiter number */
> + unsigned int larb_portid_base[MTK_IOMMU_LARB_MAX_NR];
> + /* the port id base for each local arbiter */

It might not be a bad idea to convert this kind of comment into a
proper kerneldoc comment for the whole struct. Similarly for other
structs in the driver.
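
(A sketch of what such a kerneldoc block could look like for
mtk_iommu_info, with field descriptions taken from the existing inline
comments; the wording is illustrative:)

/**
 * struct mtk_iommu_info - MediaTek M4U instance data
 * @base:             M4U register base
 * @irq:              M4U interrupt number
 * @dev:              the M4U device
 * @larbdev:          devices of the local arbiters (larbs) behind this M4U
 * @bclk:             block clock of the M4U
 * @protect_base:     protect memory base, accessed by the HW on a translation fault
 * @larb_nr:          number of local arbiters
 * @larb_portid_base: port id base for each local arbiter
 */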

[snip]

> +static void mtk_iommu_clear_intr(void __iomem *m4u_base)

nit: Instead of passing m4u_base here, could you pass a pointer to the
mtk_iommu_info and use its base field? This would be more strict,
because currently a void pointer permits passing anything to this
function.

> +{
> + u32 val;
> +
> + val = readl(m4u_base + REG_MMU_INT_CONTROL0);
> + val |= F_INT_L2_CLR_BIT;
> + writel(val, m4u_base + REG_MMU_INT_CONTROL0);
> +}
> +
> +static void mtk_iommu_tlb_flush_all(void *cookie)
> +{
> + struct mtk_iommu_domain *domain = cookie;
> + u32 val;
> +
> + val = F_INVLD_EN1 | F_INVLD_EN0;
> + writel(val, domain->imuinfo->base + REG_MMU_INV_SEL);

nit: Do you need this val variable?

> + writel(F_ALL_INVLD, domain->imuinfo->base + REG_MMU_INVALIDATE);
> +}
> +
> +static void mtk_iommu_tlb_add_flush(unsigned long iova, size_t size,
> + bool leaf, void *cookie)
> +{
> + struct mtk_iommu_domain *domain = cookie;
> + void __iomem *m4u_base = domain->imuinfo->base;
> + unsigned int iova_start = iova, iova_end = iova + size - 1;
> + int ret;
> + u32 val;
> +
> + val = F_INVLD_EN1 | F_INVLD_EN0;
> + writel(val, m4u_base + REG_MMU_INV_SEL);

nit: Does this write need to go through this val variable?

> +
> + writel(iova_start, m4u_base + REG_MMU_INVLD_START_A);
> + writel(iova_end, m4u_base + REG_MMU_INVLD_END_A);
> + writel(F_MMU_INV_RANGE, m4u_base + REG_MMU_INVALIDATE);
> +
> + ret = readl_poll_timeout_atomic(m4u_base + REG_MMU_CPE_DONE, val,
> + val != 0, 10, 1000000);
> + if (ret) {
> + dev_warn(domain->imuinfo->dev, "Invalid tlb don't done\n");

Maybe "Partial TLB flush timed out, falling back to full flush\n"?

> + mtk_iommu_tlb_flush_all(cookie);
> + }
> + writel(0, m4u_base + REG_MMU_CPE_DONE);
> +}
> +
> +static void mtk_iommu_flush_pgtable(void *ptr, size_t size, void *cookie)
> +{
> + /*
> + * After delete arch_setup_dma_ops,
> + * This will be replaced with dma_map_page
> + */
> + __dma_flush_range(ptr, ptr + size);
> +}
> +
> +static struct iommu_gather_ops mtk_iommu_gather_ops = {
> + .tlb_flush_all = mtk_iommu_tlb_flush_all,
> + .tlb_add_flush = mtk_iommu_tlb_add_flush,
> + .tlb_sync = mtk_iommu_tlb_flush_all,
> + .flush_pgtable = mtk_iommu_flush_pgtable,
> +};
> +
> +static irqreturn_t mtk_iommu_isr(int irq, void *dev_id)
> +{
> + struct mtk_iommu_domain *mtkdomain = dev_id;
> + struct mtk_iommu_info *piommu = mtkdomain->imuinfo;
> + struct device *dev = piommu->dev;
> + void __iomem *m4u_base = piommu->base;
> + u32 int_state, regval, fault_iova, fault_pa;
> + unsigned int fault_larb, fault_port;
> + bool layer, write;
> +
> + int_state = readl(m4u_base + REG_MMU_FAULT_ST1);
> +
> + /* read error info from registers */
> + fault_iova = readl(m4u_base + REG_MMU_FAULT_VA);
> + layer = fault_iova & F_MMU_FAULT_VA_LAYER_BIT;
> + write = fault_iova & F_MMU_FAULT_VA_WRITE_BIT;
> + fault_iova &= F_MMU_FAULT_VA_MSK;
> + fault_pa = readl(m4u_base + REG_MMU_INVLD_PA);
> + regval = readl(m4u_base + REG_MMU_INT_ID);
> + fault_larb = F_MMU0_INT_ID_LARB_ID(regval);
> + fault_port = F_MMU0_INT_ID_PORT_ID(regval);
> +
> + report_iommu_fault(&mtkdomain->domain, dev, fault_iova,
> + write ? IOMMU_FAULT_WRITE : IOMMU_FAULT_READ);
> +
> + if (int_state & F_INT_TRANSLATION_FAULT) {
> + dev_err_ratelimited(
> + dev,
> + "fault:iova=0x%x pa=0x%x larb=%d port=%d layer=%d %s\n",
> + fault_iova, fault_pa, fault_larb, fault_port,
> + layer, write ? "write" : "read");
> + }
> +
> + if (int_state & F_INT_MAIN_MULTI_HIT_FAULT)
> + dev_err_ratelimited(dev,
> + "multi-hit!iova=0x%x larb=%d port=%d\n",
> + fault_iova, fault_larb, fault_port);
> +
> + if (int_state & F_INT_INVALID_PA_FAULT) {
> + if (!(int_state & F_INT_TRANSLATION_FAULT))
> + dev_err_ratelimited(
> + dev,
> + "invalid pa!iova=0x%x larb=%d port=%d\n",
> + fault_iova, fault_larb, fault_port);
> + }
> + if (int_state & F_INT_ENTRY_REPLACEMENT_FAULT)
> + dev_err_ratelimited(dev,
> + "replace-fault!iova=0x%x larb=%d port=%d\n",
> + fault_iova, fault_larb, fault_port);
> +
> + if (int_state & F_INT_TLB_MISS_FAULT)
> + dev_err_ratelimited(dev,
> + "tlb miss-fault!iova=0x%x larb=%d port=%d\n",
> + fault_iova, fault_larb, fault_port);

Hmm, do you need so many prints here? Could you just dump the full
state (including int_state) regardless of interrupt type here? Also I
wonder if this shouldn't be printed only when report_iommu_fault()
returns -ENOSYS.

> +
> + mtk_iommu_tlb_flush_all(mtkdomain);
> +
> + mtk_iommu_clear_intr(m4u_base);
> +
> + return IRQ_HANDLED;
> +}
> +
> +static int mtk_iommu_parse_dt(struct platform_device *pdev,
> + struct mtk_iommu_info *piommu)
> +{
> + struct device *dev = &pdev->dev;
> + struct device_node *ofnode;
> + struct resource *res;
> + unsigned int i, port_nr, lastportnr = 0;
> + int ret;
> +
> + ofnode = dev->of_node;
> +
> + res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> + piommu->base = devm_ioremap_resource(&pdev->dev, res);
> + if (IS_ERR(piommu->base)) {
> + dev_err(dev, "Failed to get base (%lx)\n",
> + PTR_ERR(piommu->base));

devm_ioremap_resource() already prints a message on error.

> + ret = PTR_ERR(piommu->base);
> + goto err_parse_dt;

You can just return PTR_ERR(piommu->base) here directly.

> + }
> +
> + piommu->irq = platform_get_irq(pdev, 0);
> + if (piommu->irq < 0) {
> + dev_err(dev, "Failed to get IRQ (%d)\n", piommu->irq);
> + ret = piommu->irq;
> + goto err_parse_dt;

Again just return.

> + }
> +
> + piommu->bclk = devm_clk_get(dev, "bclk");
> + if (IS_ERR(piommu->bclk)) {
> + dev_err(dev, "Failed to get the bclk (%lx)\n",
> + PTR_ERR(piommu->bclk));
> + ret = PTR_ERR(piommu->bclk);
> + goto err_parse_dt;

Ditto.

> + }
> +
> + ret = of_property_count_u32_elems(ofnode, "larb-portes-nr");

According to my comments to the patch adding the binding, you should
rather count the number of phandles in "larb" property by
of_count_phandle_with_args(ofnode, "larb", NULL).

> + if (ret < 0) {
> + dev_err(dev, "Failed to get larb-portes-nr\n");
> + goto err_parse_dt;

Ditto.

> + } else {
> + piommu->larb_nr = ret;

You can take this out of "else" block.

> + }
> +
> + for (i = 0; i < piommu->larb_nr; i++) {
> + ret = of_property_read_u32_index(ofnode, "larb-portes-nr",
> + i, &port_nr);

For logical correctness, you should parse port count from larb node,
as I mentioned in my comments to the patch adding the binding. However
I'm not sure if you even need to know the number of ports. I will
comment more on this below.

> + if (ret) {
> + dev_err(dev, "Failed to get the port nr@larb%d\n", i);
> + goto err_parse_dt;

Just return here.

> + } else {

> + piommu->larb_portid_base[i] = lastportnr;
> + lastportnr += port_nr;

This looks like creating an artificial 1-dimensional namespace from a
natural 2-dimensional space indexed by (larb, port) tuple.

Also you can take this out of the else block.

> + }
> + }
> + piommu->larb_portid_base[i] = lastportnr;
> +
> + for (i = 0; i < piommu->larb_nr; i++) {
> + struct device_node *larbnode;
> + struct platform_device *plarbdev;
> +
> + larbnode = of_parse_phandle(ofnode, "larb", i);
> + if (!larbnode) {
> + dev_err(dev, "Failed to get the larb property\n");
> + ret = -EINVAL;
> + goto err_parse_dt;

Just return.

> + }
> + plarbdev = of_find_device_by_node(larbnode);
> + of_node_put(larbnode);
> + if (!plarbdev) {
> + dev_err(dev, "Failed to get the dev@larb%d\n", i);
> + ret = -EINVAL;
> + goto err_parse_dt;

Just return.

> + } else {
> + piommu->larbdev[i] = &plarbdev->dev;

Again, no need to put this into else block.

> + }
> + }
> +
> + return 0;
> +
> +err_parse_dt:
> + return ret;

Now, after addressing my comments above, this label can be removed.

> +}
> +
> +static int mtk_iommu_hw_init(const struct mtk_iommu_domain *mtkdomain)
> +{
> + struct mtk_iommu_info *piommu = mtkdomain->imuinfo;
> + void __iomem *m4u_base = piommu->base;
> + u32 regval;
> + int ret = 0;
> +
> + ret = clk_prepare_enable(piommu->bclk);
> + if (ret)
> + return -ENODEV;

Why override the error returned in ret with -ENODEV?

> +
> + writel(mtkdomain->cfg.arm_short_cfg.ttbr[0],
> + m4u_base + REG_MMU_PT_BASE_ADDR);
> +
> + regval = F_MMU_PREFETCH_RT_REPLACE_MOD |
> + F_MMU_TF_PROTECT_SEL(2) |
> + F_COHERENCE_EN;
> + writel(regval, m4u_base + REG_MMU_CTRL_REG);
> +
> + regval = F_L2_MULIT_HIT_EN |
> + F_TABLE_WALK_FAULT_INT_EN |
> + F_PREETCH_FIFO_OVERFLOW_INT_EN |
> + F_MISS_FIFO_OVERFLOW_INT_EN |
> + F_PREFETCH_FIFO_ERR_INT_EN |
> + F_MISS_FIFO_ERR_INT_EN;
> + writel(regval, m4u_base + REG_MMU_INT_CONTROL0);
> +
> + regval = F_INT_TRANSLATION_FAULT |
> + F_INT_MAIN_MULTI_HIT_FAULT |
> + F_INT_INVALID_PA_FAULT |
> + F_INT_ENTRY_REPLACEMENT_FAULT |
> + F_INT_TLB_MISS_FAULT |
> + F_INT_MISS_TRANSATION_FIFO_FAULT |
> + F_INT_PRETETCH_TRANSATION_FIFO_FAULT;
> + writel(regval, m4u_base + REG_MMU_INT_MAIN_CONTROL);
> +
> + regval = ALIGN(piommu->protect_base, MTK_PROTECT_PA_ALIGN);
> + regval = (u32)F_MMU_IVRP_PA_SET(regval);
> + writel(regval, m4u_base + REG_MMU_IVRP_PADDR);
> +
> + writel(0, m4u_base + REG_MMU_DCM_DIS);
> + writel(0, m4u_base + REG_MMU_STANDARD_AXI_MODE);
> +
> + if (devm_request_irq(piommu->dev, piommu->irq, mtk_iommu_isr, 0,
> + dev_name(piommu->dev), (void *)mtkdomain)) {
> + dev_err(piommu->dev, "Failed @ IRQ-%d Request\n", piommu->irq);
> + return -ENODEV;

Hmm, this keeps the clock enabled. Should you add clean-up path
uninitializing the hardware and stopping the clock?

> + }
> +
> + return 0;
> +}
> +
> +static int mtk_iommu_config_port(struct mtk_iommu_info *piommu,
> + int portid, bool iommuen)
> +{
> + bool validlarb = false;
> + int i, larbid, larbportid;
> + int ret;
> +
> + if (portid >= piommu->larb_portid_base[piommu->larb_nr]) {
> + dev_err(piommu->dev, "Invalid portid (%d)\n", portid);
> + return -EINVAL;
> + }
> +
> + for (i = piommu->larb_nr - 1; i >= 0; i--) {
> + if (portid >= piommu->larb_portid_base[i]) {
> + larbid = i;
> + larbportid = portid -
> + piommu->larb_portid_base[i];
> + validlarb = true;
> + break;
> + }
> + }

There would be no need for this magic here if two cells was used for
port specifier in DT. By the way, this function seems to be the only
user of port count, so if you don't strictly need the check for port
specifier being in range, then such information could be simply
removed from DT, simplifying the binding.

> +
> + if (validlarb) {
> + ret = mtk_smi_config_port(piommu->larbdev[larbid],
> + larbportid, iommuen);
> + if (ret) {
> + dev_err(piommu->dev,
> + "Failed to config port portid-%d\n", portid);
> + }
> + } else {
> + ret = -EINVAL;
> + dev_err(piommu->dev, "Failed to find out portid-%d\n", portid);
> + }
> + return ret;
> +}
> +
> +static int mtk_iommu_dev_enable_iommu(struct mtk_iommu_info *imuinfo,
> + struct device *dev, bool enable)
> +{
> + struct of_phandle_args out_args = {0};
> + struct device *imudev = imuinfo->dev;
> + unsigned int i = 0;
> + int ret = 0;
> +
> + while (!of_parse_phandle_with_args(dev->of_node, "iommus",
> + "#iommu-cells", i++, &out_args)) {
> + if (out_args.np != imudev->of_node)
> + continue;
> + if (out_args.args_count != 1) {
> + dev_err(imudev,
> + "invalid #iommu-cells property for IOMMU\n");
> + }

I don't believe you need to do this manually. I can see
of_iommu_configure() in
http://lxr.free-electrons.com/source/drivers/iommu/of_iommu.c#L136 ,
which I believe should call .of_xlate from your iommu_ops with the correct
node and a verified args count. This callback should parse the args
and prepare some private data.

I don't see any example of such .of_xlate callback though. Could
someone (Joerg, Will?) advise how this could be used and in particular
how the data obtained from parsing the args should be stored for
further use?
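
(A rough sketch, purely illustrative, of how an .of_xlate callback could
look for a two-cell specifier (larb index, port index), stashing the parsed
IDs for later use by attach_dev; the struct name and the use of
dev->archdata.iommu as storage are assumptions, not from the posted driver:)

#include <linux/slab.h>

struct mtk_iommu_client_priv {		/* hypothetical per-client data */
	unsigned int larbid;
	unsigned int portid;
};

static int mtk_iommu_of_xlate(struct device *dev,
			      struct of_phandle_args *args)
{
	struct mtk_iommu_client_priv *priv;

	if (args->args_count != 2)
		return -EINVAL;

	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	priv->larbid = args->args[0];
	priv->portid = args->args[1];
	dev->archdata.iommu = priv;	/* retrieved later in attach_dev */

	return 0;
}

It would then be hooked up as .of_xlate in mtk_iommu_ops.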

> +
> + dev_dbg(imudev, "%s iommu @ port:%d\n",
> + enable ? "enable" : "disable", out_args.args[0]);
> +
> + ret = mtk_iommu_config_port(imuinfo, out_args.args[0], enable);
> + if (!ret)
> + dev->archdata.dma_ops = (enable) ?
> + imudev->archdata.dma_ops : NULL;
> + else
> + break;
> + }
> + return ret;
> +}
> +
> +static struct iommu_domain *mtk_iommu_domain_alloc(unsigned type)
> +{
> + struct mtk_iommu_domain *priv;
> +
> + /* We only support unmanaged domains for now */
> + if (type != IOMMU_DOMAIN_UNMANAGED)
> + return NULL;
> +
> + priv = kzalloc(sizeof(*priv), GFP_KERNEL);
> + if (!priv)
> + return ERR_PTR(-ENOMEM);

What's the proper return convention of this function? The first error
case returns NULL, while this an error pointer. One of them is most
likely wrong.

> +
> + priv->cfg.quirks = IO_PGTABLE_QUIRK_ARM_NS;
> + priv->cfg.pgsize_bitmap = SZ_16M | SZ_1M | SZ_64K | SZ_4K,
> + priv->cfg.ias = 32;
> + priv->cfg.oas = 32;
> + priv->cfg.tlb = &mtk_iommu_gather_ops;
> +
> + priv->iop = alloc_io_pgtable_ops(ARM_SHORT_DESC, &priv->cfg, priv);
> + if (!priv->iop) {
> + pr_err("Failed to alloc io pgtable@MTK iommu\n");
> + goto err_free_priv;
> + }
> +
> + spin_lock_init(&priv->pgtlock);
> +
> + priv->domain.geometry.aperture_start = 0;
> + priv->domain.geometry.aperture_end = ~0;

Should this be UL or ULL?

> + priv->domain.geometry.force_aperture = true;
> +
> + return &priv->domain;
> +
> +err_free_priv:
> + kfree(priv);
> + return NULL;
> +}
> +
> +static void mtk_iommu_domain_free(struct iommu_domain *domain)
> +{
> + struct mtk_iommu_domain *priv = to_mtk_domain(domain);
> +
> + free_io_pgtable_ops(priv->iop);
> + kfree(priv);
> +}
> +
> +static int mtk_iommu_attach_device(struct iommu_domain *domain,
> + struct device *dev)
> +{
> + struct mtk_iommu_domain *priv = to_mtk_domain(domain);
> + int ret;
> +
> + /* word around for arch_setup_dma_ops */

typo: s/word around/Workaround/

Anyway, could you remind me what was the issue with arch_setup_dma_ops, please?

> + if (!priv->imuinfo)
> + ret = 0;
> + else
> + ret = mtk_iommu_dev_enable_iommu(priv->imuinfo, dev, true);
> + return ret;

Maybe you could rewrite the 5 lines above into the following:

if (!priv->imuinfo)
return 0;
return mtk_iommu_dev_enable_iommu(priv->imuinfo, dev, true);

> +}
> +
> +static void mtk_iommu_detach_device(struct iommu_domain *domain,
> + struct device *dev)
> +{
> + struct mtk_iommu_domain *priv = to_mtk_domain(domain);
> +
> + mtk_iommu_dev_enable_iommu(priv->imuinfo, dev, false);
> +}
> +
> +static int mtk_iommu_map(struct iommu_domain *domain, unsigned long iova,
> + phys_addr_t paddr, size_t size, int prot)
> +{
> + struct mtk_iommu_domain *priv = to_mtk_domain(domain);
> + unsigned long flags;
> + int ret;
> +
> + spin_lock_irqsave(&priv->pgtlock, flags);
> + ret = priv->iop->map(priv->iop, iova, paddr, size, prot);
> + spin_unlock_irqrestore(&priv->pgtlock, flags);
> +
> + return ret;
> +}
> +
> +static size_t mtk_iommu_unmap(struct iommu_domain *domain,
> + unsigned long iova, size_t size)
> +{
> + struct mtk_iommu_domain *priv = to_mtk_domain(domain);
> + unsigned long flags;
> +
> + spin_lock_irqsave(&priv->pgtlock, flags);
> + priv->iop->unmap(priv->iop, iova, size);
> + spin_unlock_irqrestore(&priv->pgtlock, flags);
> +
> + return size;
> +}
> +
> +static phys_addr_t mtk_iommu_iova_to_phys(struct iommu_domain *domain,
> + dma_addr_t iova)
> +{
> + struct mtk_iommu_domain *priv = to_mtk_domain(domain);
> + unsigned long flags;
> + phys_addr_t pa;
> +
> + spin_lock_irqsave(&priv->pgtlock, flags);
> + pa = priv->iop->iova_to_phys(priv->iop, iova);
> + spin_unlock_irqrestore(&priv->pgtlock, flags);
> +
> + return pa;
> +}
> +
> +static struct iommu_ops mtk_iommu_ops = {
> + .domain_alloc = mtk_iommu_domain_alloc,
> + .domain_free = mtk_iommu_domain_free,
> + .attach_dev = mtk_iommu_attach_device,
> + .detach_dev = mtk_iommu_detach_device,
> + .map = mtk_iommu_map,
> + .unmap = mtk_iommu_unmap,
> + .map_sg = default_iommu_map_sg,
> + .iova_to_phys = mtk_iommu_iova_to_phys,
> + .pgsize_bitmap = SZ_4K | SZ_64K | SZ_1M | SZ_16M,
> +};
> +
> +static const struct of_device_id mtk_iommu_of_ids[] = {
> + { .compatible = "mediatek,mt8173-m4u",
> + },

nit: Why is this on a separate line?

> + {}
> +};
> +
> +static int mtk_iommu_probe(struct platform_device *pdev)
> +{
> + struct iommu_domain *domain;
> + struct mtk_iommu_domain *mtk_domain;
> + struct mtk_iommu_info *piommu;
> + void __iomem *protect;
> + int ret;

CodingStyle: Please no tabs in local variable declarations.

> +
> + piommu = devm_kzalloc(&pdev->dev, sizeof(*piommu), GFP_KERNEL);
> + if (!piommu)
> + return -ENOMEM;
> +
> + piommu->dev = &pdev->dev;
> +
> + /*
> + * Alloc for Protect memory.
> + * HW will access here while translation fault.
> + */
> + protect = devm_kzalloc(piommu->dev, MTK_PROTECT_PA_ALIGN * 2,
> + GFP_KERNEL);
> + if (!protect)
> + return -ENOMEM;
> + piommu->protect_base = dma_map_single(piommu->dev, protect,
> + MTK_PROTECT_PA_ALIGN * 2,
> + DMA_TO_DEVICE); /* clean cache */
> +
> + ret = mtk_iommu_parse_dt(pdev, piommu);
> + if (ret)
> + return ret;

What about the potential dma mapping created by dma_map_single()? (I'm
not sure how important is to clean this up in practice, but for
consistency, it should be handled in error path to prevent leaking
memory if something changes in future in DMA API internals.)

> +
> + /*
> + * arch_setup_dma_ops is not proper.
> + * It should be replaced it with iommu_dma_create_domain
> + * in next dma patch.
> + */
> + arch_setup_dma_ops(piommu->dev, 0, (1ULL << 32) - 1, &mtk_iommu_ops, 0);
> +
> + domain = iommu_dma_raw_domain(get_dma_domain(piommu->dev));
> + if (!domain) {
> + dev_err(piommu->dev, "Failed to get iommu domain\n");
> + ret = -EINVAL;
> + goto err_free_domain;
> + }
> +
> + mtk_domain = to_mtk_domain(domain);
> + mtk_domain->imuinfo = piommu;
> +
> + dev_set_drvdata(piommu->dev, piommu);
> +
> + ret = mtk_iommu_hw_init(mtk_domain);
> + if (ret < 0) {
> + dev_err(piommu->dev, "Failed @ HW init\n");

nit: Maybe "Hardware initialization failed"?

> + goto err_free_domain;
> + }
> +
> + return 0;
> +
> +err_free_domain:
> + arch_teardown_dma_ops(piommu->dev);

Shouldn't the label be called err_teardown_dma_ops then?

Best regards,
Tomasz

2015-06-05 13:12:10

by Will Deacon

[permalink] [raw]
Subject: Re: [PATCH v2 3/6] iommu: add ARM short descriptor page table allocator.

Hello,

Thanks for the patch, it's good to see another user of the generic
IO page-table code. However, I have quite a lot of comments on the code.

On Fri, May 15, 2015 at 10:43:26AM +0100, Yong Wu wrote:
> This patch is for ARM Short Descriptor Format.It has 2-levels
> pagetable and the allocator supports 4K/64K/1M/16M.
>
> Signed-off-by: Yong Wu <[email protected]>
> ---
> drivers/iommu/Kconfig | 7 +
> drivers/iommu/Makefile | 1 +
> drivers/iommu/io-pgtable-arm-short.c | 490 +++++++++++++++++++++++++++++++++++
> drivers/iommu/io-pgtable.c | 4 +
> drivers/iommu/io-pgtable.h | 6 +
> 5 files changed, 508 insertions(+)
> create mode 100644 drivers/iommu/io-pgtable-arm-short.c

For some reason, I ended up reviewing this back-to-front (i.e. starting
with the init code), so apologies if the comments feel like they were
written in reverse.

> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index 1ae4e54..3d2eac6 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -39,6 +39,13 @@ config IOMMU_IO_PGTABLE_LPAE_SELFTEST
>
> If unsure, say N here.
>
> +config IOMMU_IO_PGTABLE_SHORT
> + bool "ARMv7/v8 Short Descriptor Format"
> + select IOMMU_IO_PGTABLE

depends on ARM || ARM64 || COMPILE_TEST

> + help
> + Enable support for the ARM Short descriptor pagetable format.
> + It has 2-levels pagetable and The allocator supports 4K/64K/1M/16M.

The second sentence is worded rather strangely.

> +
> endmenu
>
> config IOMMU_IOVA
> diff --git a/drivers/iommu/io-pgtable-arm-short.c b/drivers/iommu/io-pgtable-arm-short.c
> new file mode 100644
> index 0000000..cc286ce5
> --- /dev/null
> +++ b/drivers/iommu/io-pgtable-arm-short.c
> @@ -0,0 +1,490 @@
> +/*
> + * Copyright (c) 2014-2015 MediaTek Inc.
> + * Author: Yong Wu <[email protected]>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + */
> +#define pr_fmt(fmt) "arm-short-desc io-pgtable: "fmt
> +
> +#include <linux/err.h>
> +#include <linux/mm.h>
> +#include <linux/iommu.h>
> +#include <linux/errno.h>
> +
> +#include "io-pgtable.h"
> +
> +typedef u32 arm_short_iopte;
> +
> +struct arm_short_io_pgtable {
> + struct io_pgtable iop;
> + struct kmem_cache *ptekmem;
> + size_t pgd_size;
> + void *pgd;
> +};
> +
> +#define io_pgtable_short_to_data(x) \
> + container_of((x), struct arm_short_io_pgtable, iop)
> +
> +#define io_pgtable_ops_to_pgtable(x) \
> + container_of((x), struct io_pgtable, ops)
> +
> +#define io_pgtable_short_ops_to_data(x) \
> + io_pgtable_short_to_data(io_pgtable_ops_to_pgtable(x))
> +

These are private macros, so I think you can drop the "short" part to,
err, keep them short.

> +#define ARM_SHORT_MAX_ADDR_BITS 32
> +
> +#define ARM_SHORT_PGDIR_SHIFT 20
> +#define ARM_SHORT_PAGE_SHIFT 12
> +#define ARM_SHORT_PTRS_PER_PTE 256
> +#define ARM_SHORT_BYTES_PER_PTE 1024

Isn't that ARM_SHORT_PTRS_PER_PTE * sizeof(arm_short_iopte)?

> +/* 1 level pagetable */
> +#define ARM_SHORT_F_PGD_TYPE_PAGE (0x1)

I think you're using PAGE and PGTABLE interchangeably, which is really
confusing to read.

> +#define ARM_SHORT_F_PGD_TYPE_PAGE_MSK (0x3)

This is the TYPE mask.

> +#define ARM_SHORT_F_PGD_TYPE_SECTION (0x2)
> +#define ARM_SHORT_F_PGD_TYPE_SUPERSECTION (0x2 | (1 << 18))

Are you sure this is correct? afaict, bit 0 is PXN, so you should actually
be using bit 18 to distinguish sections and supersections.

> +#define ARM_SHORT_F_PGD_TYPE_SECTION_MSK (0x3 | (1 << 18))
> +#define ARM_SHORT_F_PGD_TYPE_IS_PAGE(pgd) (((pgd) & 0x3) == 1)

Use your TYPE mask here.

> +#define ARM_SHORT_F_PGD_TYPE_IS_SECTION(pgd) \
> + (((pgd) & ARM_SHORT_F_PGD_TYPE_SECTION_MSK) \
> + == ARM_SHORT_F_PGD_TYPE_SECTION)
> +#define ARM_SHORT_F_PGD_TYPE_IS_SUPERSECTION(pgd) \
> + (((pgd) & ARM_SHORT_F_PGD_TYPE_SECTION_MSK) \
> + == ARM_SHORT_F_PGD_TYPE_SUPERSECTION)
> +
> +#define ARM_SHORT_F_PGD_B_BIT BIT(2)
> +#define ARM_SHORT_F_PGD_C_BIT BIT(3)
> +#define ARM_SHORT_F_PGD_IMPLE_BIT BIT(9)
> +#define ARM_SHORT_F_PGD_S_BIT BIT(16)
> +#define ARM_SHORT_F_PGD_NG_BIT BIT(17)
> +#define ARM_SHORT_F_PGD_NS_BIT_PAGE BIT(3)
> +#define ARM_SHORT_F_PGD_NS_BIT_SECTION BIT(19)
> +
> +#define ARM_SHORT_F_PGD_PA_PAGETABLE_MSK 0xfffffc00
> +#define ARM_SHORT_F_PGD_PA_SECTION_MSK 0xfff00000
> +#define ARM_SHORT_F_PGD_PA_SUPERSECTION_MSK 0xff000000
> +
> +/* 2 level pagetable */
> +#define ARM_SHORT_F_PTE_TYPE_GET(val) ((val) & 0x3)
> +#define ARM_SHORT_F_PTE_TYPE_LARGE BIT(0)

Careful here, small pages can have bit 0 set for XN.

> +#define ARM_SHORT_F_PTE_TYPE_SMALL BIT(1)
> +#define ARM_SHORT_F_PTE_B_BIT BIT(2)
> +#define ARM_SHORT_F_PTE_C_BIT BIT(3)
> +#define ARM_SHORT_F_PTE_IMPLE_BIT BIT(9)
> +#define ARM_SHORT_F_PTE_S_BIT BIT(10)
> +#define ARM_SHORT_F_PTE_PA_LARGE_MSK 0xffff0000
> +#define ARM_SHORT_F_PTE_PA_SMALL_MSK 0xfffff000

I think you can drop the '_F' parts of all these macros.

> +#define ARM_SHORT_PGD_IDX(a) ((a) >> ARM_SHORT_PGDIR_SHIFT)
> +#define ARM_SHORT_PTE_IDX(a) \
> + (((a) >> ARM_SHORT_PAGE_SHIFT) & 0xff)

The 0xff comes from ARM_SHORT_PTRS_PER_PTE, right? I think you should fix
your definitions so that these are all derived from each other rather than
open-coding the constants.
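
(A sketch of the derived definitions being asked for here, so the 0xff and
1024 fall out of the shifts instead of being open-coded:)

#define ARM_SHORT_PGDIR_SHIFT		20
#define ARM_SHORT_PAGE_SHIFT		12
#define ARM_SHORT_PTRS_PER_PTE		\
	(1 << (ARM_SHORT_PGDIR_SHIFT - ARM_SHORT_PAGE_SHIFT))	/* 256 */
#define ARM_SHORT_BYTES_PER_PTE		\
	(ARM_SHORT_PTRS_PER_PTE * sizeof(arm_short_iopte))	/* 1024 */

#define ARM_SHORT_PGD_IDX(a)		((a) >> ARM_SHORT_PGDIR_SHIFT)
#define ARM_SHORT_PTE_IDX(a)		\
	(((a) >> ARM_SHORT_PAGE_SHIFT) & (ARM_SHORT_PTRS_PER_PTE - 1))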

> +#define ARM_SHORT_GET_PTE_VA(pgd) \
> + (phys_to_virt((unsigned long)pgd & ARM_SHORT_F_PGD_PA_PAGETABLE_MSK))
> +
> +static arm_short_iopte *
> +arm_short_get_pte_in_pgd(arm_short_iopte curpgd, unsigned int iova)
> +{
> + arm_short_iopte *pte;
> +
> + pte = ARM_SHORT_GET_PTE_VA(curpgd);
> + pte += ARM_SHORT_PTE_IDX(iova);
> + return pte;
> +}
> +
> +static arm_short_iopte *
> +arm_short_supersection_start(arm_short_iopte *pgd)
> +{
> + return (arm_short_iopte *)(round_down((unsigned long)pgd, (16 * 4)));
> +}

More magic numbers that should be derived from your global constants.

> +static int _arm_short_check_free_pte(struct arm_short_io_pgtable *data,
> + arm_short_iopte *pgd)
> +{
> + arm_short_iopte *pte;
> + int i;
> +
> + pte = ARM_SHORT_GET_PTE_VA(*pgd);
> +
> + for (i = 0; i < ARM_SHORT_PTRS_PER_PTE; i++) {
> + if (pte[i] != 0)
> + return 1;

-EEXIST?

> + }
> +
> + /* Free PTE */
> + kmem_cache_free(data->ptekmem, pte);
> + *pgd = 0;

I don't think this is safe, as there's a window where the page table
walker can see the freed pte memory.

> +
> + return 0;
> +}
> +
> +static int arm_short_unmap(struct io_pgtable_ops *ops, unsigned long iova,
> + size_t size)
> +{
> + struct arm_short_io_pgtable *data = io_pgtable_short_ops_to_data(ops);
> + arm_short_iopte *pgd;
> + unsigned long iova_start = iova;
> + unsigned long long end_plus_1 = iova + size;
> + const struct iommu_gather_ops *tlb = data->iop.cfg.tlb;
> + void *cookie = data->iop.cookie;
> + int ret;
> +
> + do {
> + pgd = (arm_short_iopte *)data->pgd + ARM_SHORT_PGD_IDX(iova);
> +
> + if (ARM_SHORT_F_PGD_TYPE_IS_PAGE(*pgd)) {
> + arm_short_iopte *pte;
> + unsigned int pte_offset;
> + unsigned int num_to_clean;
> +
> + pte_offset = ARM_SHORT_PTE_IDX(iova);
> + num_to_clean =
> + min((unsigned int)((end_plus_1 - iova) / PAGE_SIZE),
> + (ARM_SHORT_PTRS_PER_PTE - pte_offset));
> +
> + pte = arm_short_get_pte_in_pgd(*pgd, iova);
> +
> + memset(pte, 0, num_to_clean * sizeof(arm_short_iopte));
> +
> + ret = _arm_short_check_free_pte(data, pgd);
> + if (ret == 1)/* pte is not freed, need to flush pte */
> + tlb->flush_pgtable(
> + pte,
> + num_to_clean * sizeof(arm_short_iopte),
> + cookie);
> + else
> + tlb->flush_pgtable(pgd, sizeof(arm_short_iopte),
> + cookie);

Hopefully this can be cleaned up when you remove the outer loop and you
can use the size parameter to figure out which level to unmap.

> + iova += num_to_clean << PAGE_SHIFT;
> + } else if (ARM_SHORT_F_PGD_TYPE_IS_SECTION(*pgd)) {
> + *pgd = 0;
> +
> + tlb->flush_pgtable(pgd, sizeof(arm_short_iopte),
> + cookie);
> + iova += SZ_1M;

Again, these sizes can be derived from other page table properties that
you have.

> + } else if (ARM_SHORT_F_PGD_TYPE_IS_SUPERSECTION(*pgd)) {
> + arm_short_iopte *start;
> +
> + start = arm_short_supersection_start(pgd);
> + if (unlikely(start != pgd))
> + pr_warn("%s:suppersection start isn't aligned.iova=0x%lx,pgd=0x%x\n",
> + __func__, iova, *pgd);
> +
> + memset(start, 0, 16 * sizeof(arm_short_iopte));
> +
> + tlb->flush_pgtable(start, 16 * sizeof(arm_short_iopte),
> + cookie);
> +
> + iova = (iova + SZ_16M) & (~(SZ_16M - 1));

See later, but I think supersections should not be assumed by default.

> + } else {
> + break;
> + }
> + } while (iova < end_plus_1 && iova);

I don't think you need this loop -- unmap will be called in page-sized
chunks (where page-size refers to units as advertised in your IOMMU's
pgsize_bitmap). The tricky part is when somebody unmaps a subset of a
previous mapping that ended up using something like a section. You need
to handle that here by splitting blocks at level 1 into a table and
allocating a level 2.
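
(A rough sketch only of the level-1 split Will describes -- replace a 1MB
section entry with a freshly allocated level-2 table of 256 small pages
covering the same range, after which the sub-region can be unmapped at
level 2. Names come from the posted patch, but the attribute handling is
deliberately simplified:)

static int _arm_short_split_section(struct arm_short_io_pgtable *data,
				    arm_short_iopte *pgd, unsigned long iova)
{
	const struct iommu_gather_ops *tlb = data->iop.cfg.tlb;
	void *cookie = data->iop.cookie;
	phys_addr_t pa = *pgd & ARM_SHORT_F_PGD_PA_SECTION_MSK;
	arm_short_iopte pgdprot = ARM_SHORT_F_PGD_TYPE_PAGE;
	arm_short_iopte *pte;
	int i;

	pte = kmem_cache_zalloc(data->ptekmem, GFP_ATOMIC);
	if (!pte)
		return -ENOMEM;

	/*
	 * Re-express the section as small pages. Only the type and
	 * shareable bits are set here; a real implementation must carry
	 * over the section's memory attributes (C/B, NS, ...).
	 */
	for (i = 0; i < ARM_SHORT_PTRS_PER_PTE; i++)
		pte[i] = (pa + (i << ARM_SHORT_PAGE_SHIFT)) |
			 ARM_SHORT_F_PTE_TYPE_SMALL | ARM_SHORT_F_PTE_S_BIT;
	tlb->flush_pgtable(pte, ARM_SHORT_BYTES_PER_PTE, cookie);

	if (data->iop.cfg.quirks & IO_PGTABLE_QUIRK_ARM_NS)
		pgdprot |= ARM_SHORT_F_PGD_NS_BIT_PAGE;

	/* Install the new table and invalidate the old section mapping */
	*pgd = virt_to_phys(pte) | pgdprot;
	tlb->flush_pgtable(pgd, sizeof(*pgd), cookie);
	tlb->tlb_add_flush(iova & ~(SZ_1M - 1), SZ_1M, true, cookie);

	return 0;
}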

> +
> + tlb->tlb_add_flush(iova_start, size, true, cookie);
> +
> + return 0;

You need to return the size of the region that you managed to unmap, so
0 isn't right here.

> +}
> +
> +static arm_short_iopte __arm_short_pte_port(unsigned int prot, bool large)
> +{
> + arm_short_iopte pteprot;
> +
> + pteprot = ARM_SHORT_F_PTE_S_BIT;
> +
> + pteprot |= large ? ARM_SHORT_F_PTE_TYPE_LARGE :
> + ARM_SHORT_F_PTE_TYPE_SMALL;
> +
> + if (prot & IOMMU_CACHE)
> + pteprot |= ARM_SHORT_F_PTE_B_BIT | ARM_SHORT_F_PTE_C_BIT;

Where do you set TEX[0] for write-allocate?

> + return pteprot;
> +}
> +
> +static arm_short_iopte __arm_short_pgd_port(int prot, bool super)
> +{
> + arm_short_iopte pgdprot;
> +
> + pgdprot = ARM_SHORT_F_PGD_S_BIT;
> + pgdprot |= super ? ARM_SHORT_F_PGD_TYPE_SUPERSECTION :
> + ARM_SHORT_F_PGD_TYPE_SECTION;
> + if (prot & IOMMU_CACHE)
> + pgdprot |= ARM_SHORT_F_PGD_C_BIT | ARM_SHORT_F_PGD_B_BIT;
> +
> + return pgdprot;
> +}
> +
> +static int _arm_short_map_page(struct arm_short_io_pgtable *data,
> + unsigned int iova, phys_addr_t pa,
> + unsigned int prot, bool largepage)
> +{
> + arm_short_iopte *pgd = data->pgd;
> + arm_short_iopte *pte;
> + arm_short_iopte pgdprot, pteprot;
> + arm_short_iopte mask = largepage ? ARM_SHORT_F_PTE_PA_LARGE_MSK :
> + ARM_SHORT_F_PTE_PA_SMALL_MSK;
> + int i, ptenum = largepage ? 16 : 1;
> + bool ptenew = false;
> + void *pte_new_va;
> + void *cookie = data->iop.cookie;
> +
> + if ((iova | pa) & (~mask)) {
> + pr_err("IOVA|PA Not Aligned(iova=0x%x pa=0x%pa type=%s)\n",
> + iova, &pa, largepage ? "large page" : "small page");
> + return -EINVAL;
> + }
> +
> + pgdprot = ARM_SHORT_F_PGD_TYPE_PAGE;
> + if (data->iop.cfg.quirks & IO_PGTABLE_QUIRK_ARM_NS)
> + pgdprot |= ARM_SHORT_F_PGD_NS_BIT_PAGE;
> +
> + pgd += ARM_SHORT_PGD_IDX(iova);
> +
> + if (!(*pgd)) {
> + pte_new_va = kmem_cache_zalloc(data->ptekmem, GFP_KERNEL);
> + if (unlikely(!pte_new_va)) {
> + pr_err("Failed to alloc pte\n");
> + return -ENOMEM;
> + }
> +
> + /* Check pte alignment -- must 1K align */
> + if (unlikely((unsigned long)pte_new_va &
> + (ARM_SHORT_BYTES_PER_PTE - 1))) {
> + pr_err("The new pte is not aligned! (va=0x%p)\n",
> + pte_new_va);
> + kmem_cache_free(data->ptekmem, (void *)pte_new_va);
> + return -ENOMEM;
> + }

How are you enforcing this alignment?

> + ptenew = true;
> + *pgd = virt_to_phys(pte_new_va) | pgdprot;
> + kmemleak_ignore(pte_new_va);

Maybe you should be using alloc_pages instead of your kmem_cache (I mention
this again later on).

> + data->iop.cfg.tlb->flush_pgtable(pgd, sizeof(arm_short_iopte),
> + cookie);
> + } else {
> + /* Someone else may have allocated for this pgd */
> + if (((*pgd) & (~ARM_SHORT_F_PGD_PA_PAGETABLE_MSK)) != pgdprot) {
> + pr_err("The prot of old pgd is not Right!iova=0x%x pgd=0x%x pgprot=0x%x\n",
> + iova, (*pgd), pgdprot);

You can probably just WARN here, as I do in the LPAE code. It shows a bug
in the caller of the API.

> + return -EEXIST;
> + }
> + }
> +
> + pteprot = (arm_short_iopte)pa;
> + pteprot |= __arm_short_pte_port(prot, largepage);
> +
> + pte = arm_short_get_pte_in_pgd(*pgd, iova);
> +
> + pr_debug("iova:0x%x,pte:0x%p(0x%x),prot:0x%x-%s\n",
> + iova, pte, ARM_SHORT_PTE_IDX(iova), pteprot,
> + largepage ? "large page" : "small page");
> +
> + for (i = 0; i < ptenum; i++) {
> + if (pte[i]) {
> + pr_err("The To-Map pte exists!(iova=0x%x pte=0x%x i=%d)\n",
> + iova, pte[i], i);
> + goto err_out;
> + }
> + pte[i] = pteprot;
> + }

I don't think you need this loop; you should only be given a page size,
like with unmap.

> +
> + data->iop.cfg.tlb->flush_pgtable(pte, ptenum * sizeof(arm_short_iopte),
> + cookie);
> + return 0;
> +
> + err_out:
> + for (i--; i >= 0; i--)
> + pte[i] = 0;
> + if (ptenew)
> + kmem_cache_free(data->ptekmem, pte_new_va);
> + return -EEXIST;
> +}
> +
> +static int _arm_short_map_section(struct arm_short_io_pgtable *data,
> + unsigned int iova, phys_addr_t pa,
> + int prot, bool supersection)
> +{
> + arm_short_iopte pgprot;
> + arm_short_iopte mask = supersection ?
> + ARM_SHORT_F_PGD_PA_SUPERSECTION_MSK :
> + ARM_SHORT_F_PGD_PA_SECTION_MSK;
> + arm_short_iopte *pgd = data->pgd;
> + int i;
> + unsigned int pgdnum = supersection ? 16 : 1;
> +
> + if ((iova | pa) & (~mask)) {
> + pr_err("IOVA|PA Not Aligned(iova=0x%x pa=0x%pa type=%s)\n",
> + iova, &pa, supersection ? "supersection" : "section");
> + return -EINVAL;
> + }
> +
> + pgprot = (arm_short_iopte)pa;
> +
> + if (data->iop.cfg.quirks & IO_PGTABLE_QUIRK_ARM_NS)
> + pgprot |= ARM_SHORT_F_PGD_NS_BIT_SECTION;
> +
> + pgprot |= __arm_short_pgd_port(prot, supersection);
> +
> + pgd += ARM_SHORT_PGD_IDX(iova);
> +
> + pr_debug("iova:0x%x,pgd:0x%p(0x%p+0x%x),value:0x%x-%s\n",
> + iova, pgd, data->pgd, ARM_SHORT_PGD_IDX(iova),
> + pgprot, supersection ? "supersection" : "section");
> +
> + for (i = 0; i < pgdnum; i++) {
> + if (unlikely(*pgd)) {
> + pr_err("The To-Map pdg exists!(iova=0x%x pgd=0x%x i=%d)\n",
> + iova, pgd[i], i);
> + goto err_out;
> + }
> + pgd[i] = pgprot;
> + }

Similar comments here.

> + data->iop.cfg.tlb->flush_pgtable(pgd,
> + pgdnum * sizeof(arm_short_iopte),
> + data->iop.cookie);
> + return 0;
> +
> + err_out:
> + for (i--; i >= 0; i--)
> + pgd[i] = 0;
> + return -EEXIST;
> +}
> +
> +static int arm_short_map(struct io_pgtable_ops *ops, unsigned long iova,
> + phys_addr_t paddr, size_t size, int prot)
> +{
> + struct arm_short_io_pgtable *data = io_pgtable_short_ops_to_data(ops);
> + const struct iommu_gather_ops *tlb = data->iop.cfg.tlb;
> + int ret;
> +
> + if (!(prot & (IOMMU_READ | IOMMU_WRITE)))
> + return -EINVAL;

Why? You could have (another) quirk to select the access model and you
should be able to implement read+write, read-only no-exec and no-access.

> + if (size == SZ_4K) {/* most case */
> + ret = _arm_short_map_page(data, iova, paddr, prot, false);
> + } else if (size == SZ_64K) {
> + ret = _arm_short_map_page(data, iova, paddr, prot, true);
> + } else if (size == SZ_1M) {
> + ret = _arm_short_map_section(data, iova, paddr, prot, false);
> + } else if (size == SZ_16M) {
> + ret = _arm_short_map_section(data, iova, paddr, prot, true);
> + } else {
> + ret = -EINVAL;
> + }

Use a switch statement here?
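
For illustration, the if/else chain above restructured as a switch (just a
sketch of the same logic, untested):

	switch (size) {
	case SZ_4K:	/* most common case */
		ret = _arm_short_map_page(data, iova, paddr, prot, false);
		break;
	case SZ_64K:
		ret = _arm_short_map_page(data, iova, paddr, prot, true);
		break;
	case SZ_1M:
		ret = _arm_short_map_section(data, iova, paddr, prot, false);
		break;
	case SZ_16M:
		ret = _arm_short_map_section(data, iova, paddr, prot, true);
		break;
	default:
		ret = -EINVAL;
	}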

> + tlb->tlb_add_flush(iova, size, true, data->iop.cookie);
> + return ret;
> +}
> +
> +static struct io_pgtable *
> +arm_short_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
> +{
> + struct arm_short_io_pgtable *data;
> +
> + if (cfg->ias != 32)
> + return NULL;

I think you just need to check '>'; VAs smaller than 32-bit can still
be translated.

> + if (cfg->oas > ARM_SHORT_MAX_ADDR_BITS)
> + return NULL;

What benefit does ARM_SHORT_MAX_ADDR_BITS offer? Why not just '32'?

> +
> + cfg->pgsize_bitmap &= SZ_4K | SZ_64K | SZ_1M | SZ_16M;

We can't support supersections unconditionally. Please add a quirk for
this, as it relies on IOMMU support.

> + data = kzalloc(sizeof(*data), GFP_KERNEL);
> + if (!data)
> + return NULL;
> +
> + data->pgd_size = SZ_16K;
> +
> + data->pgd = alloc_pages_exact(data->pgd_size, GFP_KERNEL | __GFP_ZERO);
> + if (!data->pgd)
> + goto out_free_data;
> +
> + cfg->tlb->flush_pgtable(data->pgd, data->pgd_size, cookie);

We may as well postpone this flush to the end of the function, given that
we can still fail at this point.

> + /* kmem for pte */
> + data->ptekmem = kmem_cache_create("short-descriptor-pte",

A better name would be "io-pgtable-arm-short", however, why can't you
just use GFP_ATOMIC in your pte allocations and do away with the cache
altogether? Also, what happens if you try to allocate multiple caches
with the same name?

> + ARM_SHORT_BYTES_PER_PTE,
> + ARM_SHORT_BYTES_PER_PTE,
> + 0, NULL);
> +
> + if (IS_ERR_OR_NULL(data->ptekmem)) {

I think you just need a NULL check here.

> + pr_err("Failed to Create cached mem for PTE %ld\n",
> + PTR_ERR(data->ptekmem));

I don't think this error is particularly useful.

> + goto out_free_pte;
> + }
> +
> + /* TTBRs */
> + cfg->arm_short_cfg.ttbr[0] = virt_to_phys(data->pgd);
> + cfg->arm_short_cfg.ttbr[1] = 0;
> +
> + cfg->arm_short_cfg.tcr = 0;
> +
> + data->iop.ops = (struct io_pgtable_ops) {
> + .map = arm_short_map,
> + .unmap = arm_short_unmap,
> + .iova_to_phys = arm_short_iova_to_phys,
> + };
> +
> + return &data->iop;
> +
> +out_free_pte:
> + free_pages_exact(data->pgd, data->pgd_size);
> +out_free_data:
> + kfree(data);
> + return NULL;
> +}
> +
> +static void arm_short_free_pgtable(struct io_pgtable *iop)
> +{
> + struct arm_short_io_pgtable *data = io_pgtable_short_to_data(iop);
> +
> + kmem_cache_destroy(data->ptekmem);
> + free_pages_exact(data->pgd, data->pgd_size);
> + kfree(data);
> +}
> +
> +struct io_pgtable_init_fns io_pgtable_arm_short_init_fns = {
> + .alloc = arm_short_alloc_pgtable,
> + .free = arm_short_free_pgtable,
> +};
> +
> diff --git a/drivers/iommu/io-pgtable.c b/drivers/iommu/io-pgtable.c
> index 6436fe2..14a9b3a 100644
> --- a/drivers/iommu/io-pgtable.c
> +++ b/drivers/iommu/io-pgtable.c
> @@ -28,6 +28,7 @@ extern struct io_pgtable_init_fns io_pgtable_arm_32_lpae_s1_init_fns;
> extern struct io_pgtable_init_fns io_pgtable_arm_32_lpae_s2_init_fns;
> extern struct io_pgtable_init_fns io_pgtable_arm_64_lpae_s1_init_fns;
> extern struct io_pgtable_init_fns io_pgtable_arm_64_lpae_s2_init_fns;
> +extern struct io_pgtable_init_fns io_pgtable_arm_short_init_fns;
>
> static const struct io_pgtable_init_fns *
> io_pgtable_init_table[IO_PGTABLE_NUM_FMTS] =
> @@ -38,6 +39,9 @@ io_pgtable_init_table[IO_PGTABLE_NUM_FMTS] =
> [ARM_64_LPAE_S1] = &io_pgtable_arm_64_lpae_s1_init_fns,
> [ARM_64_LPAE_S2] = &io_pgtable_arm_64_lpae_s2_init_fns,
> #endif
> +#ifdef CONFIG_IOMMU_IO_PGTABLE_SHORT
> + [ARM_SHORT_DESC] = &io_pgtable_arm_short_init_fns,
> +#endif
> };
>
> struct io_pgtable_ops *alloc_io_pgtable_ops(enum io_pgtable_fmt fmt,
> diff --git a/drivers/iommu/io-pgtable.h b/drivers/iommu/io-pgtable.h
> index 10e32f6..47efaab 100644
> --- a/drivers/iommu/io-pgtable.h
> +++ b/drivers/iommu/io-pgtable.h
> @@ -9,6 +9,7 @@ enum io_pgtable_fmt {
> ARM_32_LPAE_S2,
> ARM_64_LPAE_S1,
> ARM_64_LPAE_S2,
> + ARM_SHORT_DESC,
> IO_PGTABLE_NUM_FMTS,
> };
>
> @@ -62,6 +63,11 @@ struct io_pgtable_cfg {
> u64 vttbr;
> u64 vtcr;
> } arm_lpae_s2_cfg;
> +
> + struct {
> + u64 ttbr[2];
> + u64 tcr;
> + } arm_short_cfg;

I appreciate that you're not using TEX remapping, but could we include
the NMRR and PRRR registers here (we can just zero them) too, please?
That makes it easier to support a TEX_REMAP quirk later on and also sets
them to a known value.

Also, any chance of some self-tests?
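
A minimal self-test in the spirit of the LPAE one might look roughly like
the following (dummy_tlb_ops would be a set of no-op gather ops as in the
LPAE self-test; everything here is only a sketch):

static int __init arm_short_do_selftests(void)
{
	struct io_pgtable_ops *ops;
	struct io_pgtable_cfg cfg = {
		.tlb = &dummy_tlb_ops,
		.ias = 32,
		.oas = 32,
		.pgsize_bitmap = SZ_4K | SZ_64K | SZ_1M,
	};

	ops = alloc_io_pgtable_ops(ARM_SHORT_DESC, &cfg, &cfg);
	if (!ops)
		return -EINVAL;

	/* Map one small page and check the round trip */
	if (ops->map(ops, SZ_1G, SZ_2M, SZ_4K, IOMMU_READ | IOMMU_WRITE))
		return -EFAULT;
	if (ops->iova_to_phys(ops, SZ_1G) != SZ_2M)
		return -EFAULT;
	if (ops->unmap(ops, SZ_1G, SZ_4K) != SZ_4K)
		return -EFAULT;

	free_io_pgtable_ops(ops);
	return 0;
}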

Will

2015-06-05 13:30:49

by Will Deacon

[permalink] [raw]
Subject: Re: [PATCH v2 5/6] iommu/mediatek: Add mt8173 IOMMU driver

On Fri, May 15, 2015 at 10:43:28AM +0100, Yong Wu wrote:
> This patch adds support for mediatek m4u (MultiMedia Memory Management Unit).

After looking at the page table code, I thought I'd come and check your
TLB invalidate code here.

> +static void mtk_iommu_tlb_flush_all(void *cookie)
> +{
> + struct mtk_iommu_domain *domain = cookie;
> + u32 val;
> +
> + val = F_INVLD_EN1 | F_INVLD_EN0;
> + writel(val, domain->imuinfo->base + REG_MMU_INV_SEL);
> + writel(F_ALL_INVLD, domain->imuinfo->base + REG_MMU_INVALIDATE);
> +}
> +
> +static void mtk_iommu_tlb_add_flush(unsigned long iova, size_t size,
> + bool leaf, void *cookie)
> +{
> + struct mtk_iommu_domain *domain = cookie;
> + void __iomem *m4u_base = domain->imuinfo->base;
> + unsigned int iova_start = iova, iova_end = iova + size - 1;
> + int ret;
> + u32 val;
> +
> + val = F_INVLD_EN1 | F_INVLD_EN0;
> + writel(val, m4u_base + REG_MMU_INV_SEL);
> +
> + writel(iova_start, m4u_base + REG_MMU_INVLD_START_A);
> + writel(iova_end, m4u_base + REG_MMU_INVLD_END_A);
> + writel(F_MMU_INV_RANGE, m4u_base + REG_MMU_INVALIDATE);
> +
> + ret = readl_poll_timeout_atomic(m4u_base + REG_MMU_CPE_DONE, val,
> + val != 0, 10, 1000000);
> + if (ret) {
> + dev_warn(domain->imuinfo->dev, "Invalid tlb don't done\n");
> + mtk_iommu_tlb_flush_all(cookie);
> + }
> + writel(0, m4u_base + REG_MMU_CPE_DONE);
> +}

You don't need to wait for completion here if you can implement a proper
->tlb_sync callback.

> +static void mtk_iommu_flush_pgtable(void *ptr, size_t size, void *cookie)
> +{
> + /*
> + * After delete arch_setup_dma_ops,
> + * This will be replaced with dma_map_page
> + */
> + __dma_flush_range(ptr, ptr + size);
> +}

This should give you the necessary barriers to ensure visibility of the
updated page tables, so you can use the _relaxed io accessors for the
other TLB functions.

> +static struct iommu_gather_ops mtk_iommu_gather_ops = {
> + .tlb_flush_all = mtk_iommu_tlb_flush_all,
> + .tlb_add_flush = mtk_iommu_tlb_add_flush,
> + .tlb_sync = mtk_iommu_tlb_flush_all,

sync isn't required to flush anything; it's just supposed to wait for
any outstanding invalidation (i.e. from tlb_add_flush) to complete.
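
To make that concrete, one possible split of the code quoted above (sketch
only, untested): post the range invalidation in tlb_add_flush() and only
poll for completion in tlb_sync(), using the relaxed accessors mentioned
earlier:

static void mtk_iommu_tlb_add_flush(unsigned long iova, size_t size,
				    bool leaf, void *cookie)
{
	struct mtk_iommu_domain *domain = cookie;
	void __iomem *m4u_base = domain->imuinfo->base;

	writel_relaxed(F_INVLD_EN1 | F_INVLD_EN0, m4u_base + REG_MMU_INV_SEL);
	writel_relaxed(iova, m4u_base + REG_MMU_INVLD_START_A);
	writel_relaxed(iova + size - 1, m4u_base + REG_MMU_INVLD_END_A);
	writel_relaxed(F_MMU_INV_RANGE, m4u_base + REG_MMU_INVALIDATE);
}

static void mtk_iommu_tlb_sync(void *cookie)
{
	struct mtk_iommu_domain *domain = cookie;
	void __iomem *m4u_base = domain->imuinfo->base;
	u32 val;
	int ret;

	ret = readl_poll_timeout_atomic(m4u_base + REG_MMU_CPE_DONE, val,
					val != 0, 10, 1000000);
	if (ret) {
		dev_warn(domain->imuinfo->dev,
			 "TLB range invalidation timed out, falling back to flush_all\n");
		mtk_iommu_tlb_flush_all(cookie);
	}
	writel_relaxed(0, m4u_base + REG_MMU_CPE_DONE);
}

with .tlb_sync = mtk_iommu_tlb_sync in mtk_iommu_gather_ops.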

Will

2015-06-26 07:30:24

by Yong Wu (吴勇)

[permalink] [raw]
Subject: Re: [PATCH v2 3/6] iommu: add ARM short descriptor page table allocator.

Hi Will,
Thanks very much for your review.
Sorry for replying so late; I needed some time to test the split.
I will improve the next version following your suggestions.
There are some places below; please help check my comments.

On Fri, 2015-06-05 at 14:12 +0100, Will Deacon wrote:
> Hello,
>
> Thanks for the patch, it's good to see another user of the generic
> IO page-table code. However, I have quite a lot of comments on the code.
>
> On Fri, May 15, 2015 at 10:43:26AM +0100, Yong Wu wrote:
> > This patch is for ARM Short Descriptor Format.It has 2-levels
> > pagetable and the allocator supports 4K/64K/1M/16M.
> >
> > Signed-off-by: Yong Wu <[email protected]>
> > ---
> > drivers/iommu/Kconfig | 7 +
> > drivers/iommu/Makefile | 1 +
> > drivers/iommu/io-pgtable-arm-short.c | 490 +++++++++++++++++++++++++++++++++++
> > drivers/iommu/io-pgtable.c | 4 +
> > drivers/iommu/io-pgtable.h | 6 +
> > 5 files changed, 508 insertions(+)
> > create mode 100644 drivers/iommu/io-pgtable-arm-short.c
>
> For some reason, I ended up reviewing this back-to-front (i.e. starting
> with the init code), so apologies if the comments feel like they were
> written in reverse.
>
[snip]
> > +typedef u32 arm_short_iopte;
> > +
> > +struct arm_short_io_pgtable {
> > + struct io_pgtable iop;
> > + struct kmem_cache *ptekmem;
> > + size_t pgd_size;
> > + void *pgd;
> > +};
> > +
> > +#define io_pgtable_short_to_data(x) \
> > + container_of((x), struct arm_short_io_pgtable, iop)
> > +
> > +#define io_pgtable_ops_to_pgtable(x) \
> > + container_of((x), struct io_pgtable, ops)
> > +
> > +#define io_pgtable_short_ops_to_data(x) \
> > + io_pgtable_short_to_data(io_pgtable_ops_to_pgtable(x))
> > +
>
> These are private macros, so I think you can drop the "short" part to,
> err, keep them short.

I will delete "short" in the definitions.

And io_pgtable_ops_to_pgtable is the same as the one in LPAE.
How about moving it alongside the definition of struct io_pgtable in
io-pgtable.h and deleting it from io-pgtable-arm.c?
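i.e. something like this in io-pgtable.h (sketch):
//====
/* shared by io-pgtable-arm.c and io-pgtable-arm-short.c */
#define io_pgtable_ops_to_pgtable(x) \
	container_of((x), struct io_pgtable, ops)
//====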


> > +#define ARM_SHORT_MAX_ADDR_BITS 32
> > +
> > +#define ARM_SHORT_PGDIR_SHIFT 20
> > +#define ARM_SHORT_PAGE_SHIFT 12
> > +#define ARM_SHORT_PTRS_PER_PTE 256
> > +#define ARM_SHORT_BYTES_PER_PTE 1024
>
> Isn't that ARM_SHORT_PTRS_PER_PTE * sizeof(arm_short_iopte)?
>
> > +/* 1 level pagetable */
> > +#define ARM_SHORT_F_PGD_TYPE_PAGE (0x1)
>
> I think you're using PAGE and PGTABLE interchangeably, which is really
> confusing to read.
>
> > +#define ARM_SHORT_F_PGD_TYPE_PAGE_MSK (0x3)
>
> This is the TYPE mask.
>
> > +#define ARM_SHORT_F_PGD_TYPE_SECTION (0x2)
> > +#define ARM_SHORT_F_PGD_TYPE_SUPERSECTION (0x2 | (1 << 18))
>
> Are you sure this is correct? afaict, bit 0 is PXN, so you should actually
> be using bit 18 to distinguihs sections and supersections.
Thanks.
I will change them all like this; is it OK?
//===
#define ARM_SHORT_PGD_TYPE_PGTABLE BIT(0)
#define ARM_SHORT_PGD_TYPE_SECTION BIT(1)
#define ARM_SHORT_PGD_B_BIT BIT(2)
#define ARM_SHORT_PGD_C_BIT BIT(3)
#define ARM_SHORT_PGD_NS_PGTABLE_BIT BIT(3)
#define ARM_SHORT_PGD_IMPLE_BIT BIT(9)
#define ARM_SHORT_PGD_TEX0_BIT BIT(12)
#define ARM_SHORT_PGD_S_BIT BIT(16)
#define ARM_SHORT_PGD_SUPERSECTION_BIT BIT(18)
#define ARM_SHORT_PGD_NS_SECTION_BIT BIT(19)

#define ARM_SHORT_PGD_TYPE_SUPERSECTION \
(ARM_SHORT_PGD_TYPE_SECTION | ARM_SHORT_PGD_SUPERSECTION_BIT)
#define ARM_SHORT_PGD_PGTABLE_MSK (0x3)
#define ARM_SHORT_PGD_SECTION_MSK \
(ARM_SHORT_PGD_PGTABLE_MSK | ARM_SHORT_PGD_SUPERSECTION_BIT)
//=====

> > +#define ARM_SHORT_F_PGD_TYPE_SECTION_MSK (0x3 | (1 << 18))
> > +#define ARM_SHORT_F_PGD_TYPE_IS_PAGE(pgd) (((pgd) & 0x3) == 1)
>
> Use your TYPE mask here.
>
> > +#define ARM_SHORT_F_PGD_TYPE_IS_SECTION(pgd) \
> > + (((pgd) & ARM_SHORT_F_PGD_TYPE_SECTION_MSK) \
> > + == ARM_SHORT_F_PGD_TYPE_SECTION)
> > +#define ARM_SHORT_F_PGD_TYPE_IS_SUPERSECTION(pgd) \
> > + (((pgd) & ARM_SHORT_F_PGD_TYPE_SECTION_MSK) \
> > + == ARM_SHORT_F_PGD_TYPE_SUPERSECTION)
> > +
> > +#define ARM_SHORT_F_PGD_B_BIT BIT(2)
> > +#define ARM_SHORT_F_PGD_C_BIT BIT(3)
> > +#define ARM_SHORT_F_PGD_IMPLE_BIT BIT(9)
> > +#define ARM_SHORT_F_PGD_S_BIT BIT(16)
> > +#define ARM_SHORT_F_PGD_NG_BIT BIT(17)
> > +#define ARM_SHORT_F_PGD_NS_BIT_PAGE BIT(3)
> > +#define ARM_SHORT_F_PGD_NS_BIT_SECTION BIT(19)
> > +
> > +#define ARM_SHORT_F_PGD_PA_PAGETABLE_MSK 0xfffffc00
> > +#define ARM_SHORT_F_PGD_PA_SECTION_MSK 0xfff00000
> > +#define ARM_SHORT_F_PGD_PA_SUPERSECTION_MSK 0xff000000
> > +
[snip]
> > +static int _arm_short_check_free_pte(struct arm_short_io_pgtable *data,
> > + arm_short_iopte *pgd)
> > +{
> > + arm_short_iopte *pte;
> > + int i;
> > +
> > + pte = ARM_SHORT_GET_PTE_VA(*pgd);
> > +
> > + for (i = 0; i < ARM_SHORT_PTRS_PER_PTE; i++) {
> > + if (pte[i] != 0)
> > + return 1;
>
> -EEXIST?
>
> > + }
> > +
> > + /* Free PTE */
> > + kmem_cache_free(data->ptekmem, pte);
> > + *pgd = 0;
>
> I don't think this is safe, as there's a window where the page table
> walker can see the freed pte memory.

Sorry, this function reads badly. Originally I expected this function
to check all the PTEs in the level-2 pagetable.
I plan to change the function name and the return type as below:
if all the PTEs are 0, it frees the whole level-2 pagetable and returns
true; if some PTEs that haven't been unmapped remain, it returns
false.
static bool arm_short_free_wholepte(struct arm_short_io_pgtable *data,
arm_short_iopte *pgd)
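
A rough sketch of the body (untested; the pgd entry is cleared and flushed
before the level-2 table is freed, so the walker never sees freed memory --
the caller would still need to invalidate the TLB for the covered range
before that memory can be reused):
//====
static bool arm_short_free_wholepte(struct arm_short_io_pgtable *data,
				    arm_short_iopte *pgd)
{
	const struct iommu_gather_ops *tlb = data->iop.cfg.tlb;
	arm_short_iopte *pte = ARM_SHORT_GET_PTE_VA(*pgd);
	int i;

	for (i = 0; i < ARM_SHORT_PTRS_PER_PTE; i++)
		if (pte[i])
			return false;	/* some entries are still mapped */

	/* Clear and flush the pgd entry first, only then free the table */
	*pgd = 0;
	tlb->flush_pgtable(pgd, sizeof(*pgd), data->iop.cookie);
	kmem_cache_free(data->ptekmem, pte);

	return true;
}
//====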

> > +
> > + return 0;
> > +}
> > +
> > +static int arm_short_unmap(struct io_pgtable_ops *ops, unsigned long iova,
> > + size_t size)
> > +{
> > + struct arm_short_io_pgtable *data = io_pgtable_short_ops_to_data(ops);
> > + arm_short_iopte *pgd;
> > + unsigned long iova_start = iova;
> > + unsigned long long end_plus_1 = iova + size;
> > + const struct iommu_gather_ops *tlb = data->iop.cfg.tlb;
> > + void *cookie = data->iop.cookie;
> > + int ret;
> > +
> > + do {
> > + pgd = (arm_short_iopte *)data->pgd + ARM_SHORT_PGD_IDX(iova);
> > +
> > + if (ARM_SHORT_F_PGD_TYPE_IS_PAGE(*pgd)) {
> > + arm_short_iopte *pte;
> > + unsigned int pte_offset;
> > + unsigned int num_to_clean;
> > +
> > + pte_offset = ARM_SHORT_PTE_IDX(iova);
> > + num_to_clean =
> > + min((unsigned int)((end_plus_1 - iova) / PAGE_SIZE),
> > + (ARM_SHORT_PTRS_PER_PTE - pte_offset));
> > +
> > + pte = arm_short_get_pte_in_pgd(*pgd, iova);
> > +
> > + memset(pte, 0, num_to_clean * sizeof(arm_short_iopte));
> > +
> > + ret = _arm_short_check_free_pte(data, pgd);
> > + if (ret == 1)/* pte is not freed, need to flush pte */
> > + tlb->flush_pgtable(
> > + pte,
> > + num_to_clean * sizeof(arm_short_iopte),
> > + cookie);
> > + else
> > + tlb->flush_pgtable(pgd, sizeof(arm_short_iopte),
> > + cookie);
>
> Hopefully this can be cleaned up when you remove the outer loop and you
> can use the size parameter to figure out which level to unmap.
>
> > + iova += num_to_clean << PAGE_SHIFT;
> > + } else if (ARM_SHORT_F_PGD_TYPE_IS_SECTION(*pgd)) {
> > + *pgd = 0;
> > +
> > + tlb->flush_pgtable(pgd, sizeof(arm_short_iopte),
> > + cookie);
> > + iova += SZ_1M;
>
> Again, these sizes can be derived from other page table properties that
> you have.
>
> > + } else if (ARM_SHORT_F_PGD_TYPE_IS_SUPERSECTION(*pgd)) {
> > + arm_short_iopte *start;
> > +
> > + start = arm_short_supersection_start(pgd);
> > + if (unlikely(start != pgd))
> > + pr_warn("%s:suppersection start isn't aligned.iova=0x%lx,pgd=0x%x\n",
> > + __func__, iova, *pgd);
> > +
> > + memset(start, 0, 16 * sizeof(arm_short_iopte));
> > +
> > + tlb->flush_pgtable(start, 16 * sizeof(arm_short_iopte),
> > + cookie);
> > +
> > + iova = (iova + SZ_16M) & (~(SZ_16M - 1));
>
> See later, but I think supersections should not be assumed by default.
>
> > + } else {
> > + break;
> > + }
> > + } while (iova < end_plus_1 && iova);
>
> I don't think you need this loop -- unmap will be called in page-sized
> chunks (where page-size refers to units as advertised in your IOMMU's
> pgsize_bitmap). The tricky part is when somebody unmaps a subset of a
> previous mapping that ended up using something like a section. You need
> to handle that here by splitting blocks at level 1 into a table and
> allocating a level 2.

I will delete the loop and get the size from the pagetable properties.
About the split, I have a question.
There are some lines in the LPAE self-test:
//====
/* Partial unmap */
size = 1UL << __ffs(cfg->pgsize_bitmap);
if (ops->unmap(ops, SZ_1G + size, size) != size)
return __FAIL(ops, i);
//====
If it is changed to:
if (ops->unmap(ops, SZ_1G + 3*size, size) != size)
or
if (ops->unmap(ops, SZ_1G + size, 3*size) != size)
It doesn't seem to work. I think this may never happen if the map and
unmap come from iommu_map and iommu_unmap; I don't know whether somebody
will unmap a subset of a previous mapping at a random offset like above,
so I am not sure whether I should cover these two cases in short-descriptor.
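
For reference, a very rough idea of how a section could be split before
unmapping a 4KB subrange (just a sketch; arm_short_sect_prot_to_pte_prot()
is a hypothetical helper that would translate section attribute bits into
the corresponding small-page attribute bits, and the NS/quirk handling is
omitted):
//====
static int arm_short_split_section(struct arm_short_io_pgtable *data,
				   arm_short_iopte *pgd)
{
	const struct iommu_gather_ops *tlb = data->iop.cfg.tlb;
	phys_addr_t pa = *pgd & ARM_SHORT_F_PGD_PA_SECTION_MSK;
	arm_short_iopte pteprot = arm_short_sect_prot_to_pte_prot(*pgd);
	arm_short_iopte *pte;
	int i;

	pte = kmem_cache_zalloc(data->ptekmem, GFP_ATOMIC);
	if (!pte)
		return -ENOMEM;

	/* Re-create the 1MB mapping as 256 small pages */
	for (i = 0; i < ARM_SHORT_PTRS_PER_PTE; i++)
		pte[i] = (pa + i * SZ_4K) | pteprot;
	tlb->flush_pgtable(pte, ARM_SHORT_BYTES_PER_PTE, data->iop.cookie);

	/* Switch the pgd entry over to a level-2 table */
	*pgd = virt_to_phys(pte) | ARM_SHORT_F_PGD_TYPE_PAGE;
	tlb->flush_pgtable(pgd, sizeof(*pgd), data->iop.cookie);

	return 0;
}
//====
After that, the normal level-2 unmap path can clear the requested pages.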

>
> > +
> > + tlb->tlb_add_flush(iova_start, size, true, cookie);
> > +
> > + return 0;
>
> You need to return the size of the region that you managed to unmap, so
> 0 isn't right here.
>
> > +}
> > +
> > +static arm_short_iopte __arm_short_pte_port(unsigned int prot, bool large)
> > +{
> > + arm_short_iopte pteprot;
> > +
> > + pteprot = ARM_SHORT_F_PTE_S_BIT;
> > +
> > + pteprot |= large ? ARM_SHORT_F_PTE_TYPE_LARGE :
> > + ARM_SHORT_F_PTE_TYPE_SMALL;
> > +
> > + if (prot & IOMMU_CACHE)
> > + pteprot |= ARM_SHORT_F_PTE_B_BIT | ARM_SHORT_F_PTE_C_BIT;
>
> Where do you set TEX[0] for write-allocate?
I will add:
if (prot & IOMMU_WRITE)
pteprot |= ARM_SHORT_PTE_TEX0_BIT;
>
> > + return pteprot;
> > +}
> > +
> > +static arm_short_iopte __arm_short_pgd_port(int prot, bool super)
> > +{
> > + arm_short_iopte pgdprot;
> > +
> > + pgdprot = ARM_SHORT_F_PGD_S_BIT;
> > + pgdprot |= super ? ARM_SHORT_F_PGD_TYPE_SUPERSECTION :
> > + ARM_SHORT_F_PGD_TYPE_SECTION;
> > + if (prot & IOMMU_CACHE)
> > + pgdprot |= ARM_SHORT_F_PGD_C_BIT | ARM_SHORT_F_PGD_B_BIT;
> > +
> > + return pgdprot;
> > +}
> > +
> > +static int _arm_short_map_page(struct arm_short_io_pgtable *data,
> > + unsigned int iova, phys_addr_t pa,
> > + unsigned int prot, bool largepage)
> > +{
> > + arm_short_iopte *pgd = data->pgd;
> > + arm_short_iopte *pte;
> > + arm_short_iopte pgdprot, pteprot;
> > + arm_short_iopte mask = largepage ? ARM_SHORT_F_PTE_PA_LARGE_MSK :
> > + ARM_SHORT_F_PTE_PA_SMALL_MSK;
> > + int i, ptenum = largepage ? 16 : 1;
> > + bool ptenew = false;
> > + void *pte_new_va;
> > + void *cookie = data->iop.cookie;
> > +
> > + if ((iova | pa) & (~mask)) {
> > + pr_err("IOVA|PA Not Aligned(iova=0x%x pa=0x%pa type=%s)\n",
> > + iova, &pa, largepage ? "large page" : "small page");
> > + return -EINVAL;
> > + }
> > +
> > + pgdprot = ARM_SHORT_F_PGD_TYPE_PAGE;
> > + if (data->iop.cfg.quirks & IO_PGTABLE_QUIRK_ARM_NS)
> > + pgdprot |= ARM_SHORT_F_PGD_NS_BIT_PAGE;
> > +
> > + pgd += ARM_SHORT_PGD_IDX(iova);
> > +
> > + if (!(*pgd)) {
> > + pte_new_va = kmem_cache_zalloc(data->ptekmem, GFP_KERNEL);
> > + if (unlikely(!pte_new_va)) {
> > + pr_err("Failed to alloc pte\n");
> > + return -ENOMEM;
> > + }
> > +
> > + /* Check pte alignment -- must 1K align */
> > + if (unlikely((unsigned long)pte_new_va &
> > + (ARM_SHORT_BYTES_PER_PTE - 1))) {
> > + pr_err("The new pte is not aligned! (va=0x%p)\n",
> > + pte_new_va);
> > + kmem_cache_free(data->ptekmem, (void *)pte_new_va);
> > + return -ENOMEM;
> > + }
>
> How are you enforcing this alignment?

I will delete this.

>
> > + ptenew = true;
> > + *pgd = virt_to_phys(pte_new_va) | pgdprot;
> > + kmemleak_ignore(pte_new_va);
>
> Maybe you should be using alloc_pages instead of your kmem_cache (I mention
> this again later on).
>
> > + data->iop.cfg.tlb->flush_pgtable(pgd, sizeof(arm_short_iopte),
> > + cookie);
> > + } else {
> > + /* Someone else may have allocated for this pgd */
> > + if (((*pgd) & (~ARM_SHORT_F_PGD_PA_PAGETABLE_MSK)) != pgdprot) {
> > + pr_err("The prot of old pgd is not Right!iova=0x%x pgd=0x%x pgprot=0x%x\n",
> > + iova, (*pgd), pgdprot);
>
> You can probably just WARN here, as I do in the LPAE code. It shows a bug
> in the caller of the API.
Sorry, I don't see it in LPAE. Do you mean these lines?
//====
/* We require an unmap first */
if (iopte_leaf(*ptep, lvl)) {
WARN_ON(!selftest_running);
return -EEXIST;
}
//====
It may not be the same. Here we only check whether the prot of the old
pgd is the same as the current pgd's.
I will change it to WARN_ON(1) too.
>
> > + return -EEXIST;
> > + }
> > + }
> > +
> > + pteprot = (arm_short_iopte)pa;
> > + pteprot |= __arm_short_pte_port(prot, largepage);
> > +
> > + pte = arm_short_get_pte_in_pgd(*pgd, iova);
> > +
> > + pr_debug("iova:0x%x,pte:0x%p(0x%x),prot:0x%x-%s\n",
> > + iova, pte, ARM_SHORT_PTE_IDX(iova), pteprot,
> > + largepage ? "large page" : "small page");
> > +
> > + for (i = 0; i < ptenum; i++) {
> > + if (pte[i]) {
> > + pr_err("The To-Map pte exists!(iova=0x%x pte=0x%x i=%d)\n",
> > + iova, pte[i], i);
> > + goto err_out;
I will change to WARN_ON(1) here too.
> > + }
> > + pte[i] = pteprot;
> > + }
>
> I don't think you need this loop; you should only be given a page size,
> like with unmap.

I am not sure I follow your meaning. The ptenum here is only 1 or 16:
it is 1 when the current mapping is a small page or a section, and 16
when it is a large page or a supersection.
Because the descriptor must be repeated in 16 consecutive entries, I use
a loop here.
>
> > +
> > + data->iop.cfg.tlb->flush_pgtable(pte, ptenum * sizeof(arm_short_iopte),
> > + cookie);
> > + return 0;
> > +
> > + err_out:
> > + for (i--; i >= 0; i--)
> > + pte[i] = 0;
> > + if (ptenew)
> > + kmem_cache_free(data->ptekmem, pte_new_va);
> > + return -EEXIST;
> > +}
> > +
> > +static int _arm_short_map_section(struct arm_short_io_pgtable *data,
> > + unsigned int iova, phys_addr_t pa,
> > + int prot, bool supersection)
> > +{
> > + arm_short_iopte pgprot;
> > + arm_short_iopte mask = supersection ?
> > + ARM_SHORT_F_PGD_PA_SUPERSECTION_MSK :
> > + ARM_SHORT_F_PGD_PA_SECTION_MSK;
> > + arm_short_iopte *pgd = data->pgd;
> > + int i;
> > + unsigned int pgdnum = supersection ? 16 : 1;
> > +
> > + if ((iova | pa) & (~mask)) {
> > + pr_err("IOVA|PA Not Aligned(iova=0x%x pa=0x%pa type=%s)\n",
> > + iova, &pa, supersection ? "supersection" : "section");
> > + return -EINVAL;
> > + }
> > +
> > + pgprot = (arm_short_iopte)pa;
> > +
> > + if (data->iop.cfg.quirks & IO_PGTABLE_QUIRK_ARM_NS)
> > + pgprot |= ARM_SHORT_F_PGD_NS_BIT_SECTION;
> > +
> > + pgprot |= __arm_short_pgd_port(prot, supersection);
> > +
> > + pgd += ARM_SHORT_PGD_IDX(iova);
> > +
> > + pr_debug("iova:0x%x,pgd:0x%p(0x%p+0x%x),value:0x%x-%s\n",
> > + iova, pgd, data->pgd, ARM_SHORT_PGD_IDX(iova),
> > + pgprot, supersection ? "supersection" : "section");
> > +
> > + for (i = 0; i < pgdnum; i++) {
> > + if (unlikely(*pgd)) {
> > + pr_err("The To-Map pdg exists!(iova=0x%x pgd=0x%x i=%d)\n",
> > + iova, pgd[i], i);
> > + goto err_out;
> > + }
> > + pgd[i] = pgprot;
> > + }
>
> Similar comments here.
I will merge _arm_short_map_page and _arm_short_map_section into one
function named _arm_short_map.
>
> > + data->iop.cfg.tlb->flush_pgtable(pgd,
> > + pgdnum * sizeof(arm_short_iopte),
> > + data->iop.cookie);
> > + return 0;
> > +
> > + err_out:
> > + for (i--; i >= 0; i--)
> > + pgd[i] = 0;
> > + return -EEXIST;
> > +}
> > +
> > +static int arm_short_map(struct io_pgtable_ops *ops, unsigned long iova,
> > + phys_addr_t paddr, size_t size, int prot)
> > +{
> > + struct arm_short_io_pgtable *data = io_pgtable_short_ops_to_data(ops);
> > + const struct iommu_gather_ops *tlb = data->iop.cfg.tlb;
> > + int ret;
> > +
> > + if (!(prot & (IOMMU_READ | IOMMU_WRITE)))
> > + return -EINVAL;
>
> Why? You could have (another) quirk to select the access model and you
> should be able to implement read+write, read-only no-exec and no-access.

If I follow what LPAE does, like below, is it OK?
//======
/* If no access, then nothing to do */
if (!(iommu_prot & (IOMMU_READ | IOMMU_WRITE)))
return 0;
//=====
>
> > + if (size == SZ_4K) {/* most case */
> > + ret = _arm_short_map_page(data, iova, paddr, prot, false);
> > + } else if (size == SZ_64K) {
> > + ret = _arm_short_map_page(data, iova, paddr, prot, true);
> > + } else if (size == SZ_1M) {
> > + ret = _arm_short_map_section(data, iova, paddr, prot, false);
> > + } else if (size == SZ_16M) {
> > + ret = _arm_short_map_section(data, iova, paddr, prot, true);
> > + } else {
> > + ret = -EINVAL;
> > + }
>
> Use a switch statement here?
>
> > + tlb->tlb_add_flush(iova, size, true, data->iop.cookie);
> > + return ret;
> > +}
> > +
> > +static struct io_pgtable *
> > +arm_short_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
> > +{
> > + struct arm_short_io_pgtable *data;
> > +
> > + if (cfg->ias != 32)
> > + return NULL;
>
> I think you just need to check '>'; VAs smaller than 32-bit can still
> be translated.
>
> > + if (cfg->oas > ARM_SHORT_MAX_ADDR_BITS)
> > + return NULL;
>
> What benefit does ARM_SHORT_MAX_ADDR_BITS offer? Why not just '32'?
>
> > +
> > + cfg->pgsize_bitmap &= SZ_4K | SZ_64K | SZ_1M | SZ_16M;
>
> We can't support supersections unconditionally. Please add a quirk for
> this, as it relies on IOMMU support.
>
> > + data = kzalloc(sizeof(*data), GFP_KERNEL);
> > + if (!data)
> > + return NULL;
> > +
> > + data->pgd_size = SZ_16K;
> > +
> > + data->pgd = alloc_pages_exact(data->pgd_size, GFP_KERNEL | __GFP_ZERO);
> > + if (!data->pgd)
> > + goto out_free_data;
> > +
> > + cfg->tlb->flush_pgtable(data->pgd, data->pgd_size, cookie);
>
> We may as well postpone this flush to the end of the function, given that
> we can still fail at this point.
>
> > + /* kmem for pte */
> > + data->ptekmem = kmem_cache_create("short-descriptor-pte",
>
> A better name would be "io-pgtable-arm-short", however, why can't you
> just use GFP_ATOMIC in your pte allocations and do away with the cache
> altogether? Also, what happens if you try to allocate multiple caches
> with the same name?
I will add GFP_ATOMIC to the pte allocation; it is a bug Daniel has
helped to fix.
And I am sorry, I don't see what is wrong with using kmem_cache here.
The main reason is the size: a level-2 pgtable is 1KB, while
alloc_pages_exact would allocate 4KB, so I use kmem_cache here.
>
> > + ARM_SHORT_BYTES_PER_PTE,
> > + ARM_SHORT_BYTES_PER_PTE,
> > + 0, NULL);
> > +
> > + if (IS_ERR_OR_NULL(data->ptekmem)) {
>
> I think you just need a NULL check here.
>
> > + pr_err("Failed to Create cached mem for PTE %ld\n",
> > + PTR_ERR(data->ptekmem));
>
> I don't think this error is particularly useful.
>
> > + goto out_free_pte;
> > + }
> > +
> > + /* TTBRs */
> > + cfg->arm_short_cfg.ttbr[0] = virt_to_phys(data->pgd);
> > + cfg->arm_short_cfg.ttbr[1] = 0;
> > +
> > + cfg->arm_short_cfg.tcr = 0;
> > +
> > + data->iop.ops = (struct io_pgtable_ops) {
> > + .map = arm_short_map,
> > + .unmap = arm_short_unmap,
> > + .iova_to_phys = arm_short_iova_to_phys,
> > + };
> > +
> > + return &data->iop;
> > +
> > +out_free_pte:
> > + free_pages_exact(data->pgd, data->pgd_size);
> > +out_free_data:
> > + kfree(data);
> > + return NULL;
> > +}
> > +
> > +static void arm_short_free_pgtable(struct io_pgtable *iop)
> > +{
> > + struct arm_short_io_pgtable *data = io_pgtable_short_to_data(iop);
> > +
> > + kmem_cache_destroy(data->ptekmem);
> > + free_pages_exact(data->pgd, data->pgd_size);
> > + kfree(data);
> > +}
> > +
> > +struct io_pgtable_init_fns io_pgtable_arm_short_init_fns = {
> > + .alloc = arm_short_alloc_pgtable,
> > + .free = arm_short_free_pgtable,
> > +};
> > +
> > diff --git a/drivers/iommu/io-pgtable.c b/drivers/iommu/io-pgtable.c
> > index 6436fe2..14a9b3a 100644
> > --- a/drivers/iommu/io-pgtable.c
> > +++ b/drivers/iommu/io-pgtable.c
> > @@ -28,6 +28,7 @@ extern struct io_pgtable_init_fns io_pgtable_arm_32_lpae_s1_init_fns;
> > extern struct io_pgtable_init_fns io_pgtable_arm_32_lpae_s2_init_fns;
> > extern struct io_pgtable_init_fns io_pgtable_arm_64_lpae_s1_init_fns;
> > extern struct io_pgtable_init_fns io_pgtable_arm_64_lpae_s2_init_fns;
> > +extern struct io_pgtable_init_fns io_pgtable_arm_short_init_fns;
> >
> > static const struct io_pgtable_init_fns *
> > io_pgtable_init_table[IO_PGTABLE_NUM_FMTS] =
> > @@ -38,6 +39,9 @@ io_pgtable_init_table[IO_PGTABLE_NUM_FMTS] =
> > [ARM_64_LPAE_S1] = &io_pgtable_arm_64_lpae_s1_init_fns,
> > [ARM_64_LPAE_S2] = &io_pgtable_arm_64_lpae_s2_init_fns,
> > #endif
> > +#ifdef CONFIG_IOMMU_IO_PGTABLE_SHORT
> > + [ARM_SHORT_DESC] = &io_pgtable_arm_short_init_fns,
> > +#endif
> > };
> >
> > struct io_pgtable_ops *alloc_io_pgtable_ops(enum io_pgtable_fmt fmt,
> > diff --git a/drivers/iommu/io-pgtable.h b/drivers/iommu/io-pgtable.h
> > index 10e32f6..47efaab 100644
> > --- a/drivers/iommu/io-pgtable.h
> > +++ b/drivers/iommu/io-pgtable.h
> > @@ -9,6 +9,7 @@ enum io_pgtable_fmt {
> > ARM_32_LPAE_S2,
> > ARM_64_LPAE_S1,
> > ARM_64_LPAE_S2,
> > + ARM_SHORT_DESC,
> > IO_PGTABLE_NUM_FMTS,
> > };
> >
> > @@ -62,6 +63,11 @@ struct io_pgtable_cfg {
> > u64 vttbr;
> > u64 vtcr;
> > } arm_lpae_s2_cfg;
> > +
> > + struct {
> > + u64 ttbr[2];
> > + u64 tcr;
> > + } arm_short_cfg;
>
> I appreciate that you're not using TEX remapping, but could we include
> the NMRR and PRRR registers here (we can just zero them) too, please?
> That makes it easier to support a TEX_REMAP quirk later on and also sets
> them to a known value.
I will add them and set them to 0:
u32 ttbr[2];
u32 tcr;
+ u32 nmrr;
+ u32 prrr;
And according to Robin's suggestion, I will change them to u32 in
short-descriptor.
>
> Also, any chance of some self-tests?
>
> Will