2013-03-28 20:02:05

by Varun Sethi

Subject: [PATCH 0/5 v11] iommu/fsl: Freescale PAMU driver and IOMMU API implementation.

This patchset provides the Freescale PAMU (Peripheral Access Management Unit) driver
and the corresponding IOMMU API implementation. PAMU is the IOMMU present on Freescale
QorIQ platforms. PAMU can authorize memory access, remap the memory address, and remap
the I/O transaction type.
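
For illustration only (not code from this series), a consumer of the window-based
IOMMU API that these patches implement for PAMU could program a single DSA window
roughly as follows; the helper name is made up, and the PAMU-specific domain
attributes added in patch 4 as well as most error handling are elided:

#include <linux/device.h>
#include <linux/errno.h>
#include <linux/iommu.h>

/* Hypothetical consumer sketch; not part of this patch set. */
static int example_map_window(struct device *dev, phys_addr_t paddr, u64 size)
{
	struct iommu_domain *domain;
	int ret;

	domain = iommu_domain_alloc(dev->bus);
	if (!domain)
		return -ENOMEM;

	/* Window 0: allow both reads and writes (prot flag from patch 3) */
	ret = iommu_domain_window_enable(domain, 0, paddr, size,
					 IOMMU_READ | IOMMU_WRITE);
	if (ret)
		goto out_free;

	/* Attaching programs the device's LIODN and records the domain (patch 2) */
	ret = iommu_attach_device(domain, dev);
	if (ret)
		goto out_free;

	return 0;

out_free:
	iommu_domain_free(domain);
	return ret;
}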

This set consists of the following patches:
1. Make iova dma_addr_t in the iommu_iova_to_phys API.
2. Addition of a new field to the device (powerpc) archdata structure for storing the iommu domain information
pointer.
3. Add window permission flags in the iommu_domain_window_enable API.
4. Add domain attributes for FSL PAMU driver.
5. PAMU driver and IOMMU API implementation.

This patch set is based on the next branch of Joerg's iommu git tree.

Varun Sethi (5):
Make iova dma_addr_t in the iommu_iova_to_phys API.
Add iommu domain pointer to device archdata
Add the window permission flag as a parameter to iommu_window_enable
API.
Add additional iommu attributes required by the PAMU driver.
FSL PAMU driver.

arch/powerpc/include/asm/device.h | 6 +
arch/powerpc/sysdev/fsl_pci.h | 5 +
drivers/iommu/Kconfig | 8 +
drivers/iommu/Makefile | 1 +
drivers/iommu/amd_iommu.c | 2 +-
drivers/iommu/exynos-iommu.c | 2 +-
drivers/iommu/fsl_pamu.c | 1269 +++++++++++++++++++++++++++++++++++++
drivers/iommu/fsl_pamu.h | 405 ++++++++++++
drivers/iommu/fsl_pamu_domain.c | 1137 +++++++++++++++++++++++++++++++++
drivers/iommu/fsl_pamu_domain.h | 85 +++
drivers/iommu/intel-iommu.c | 2 +-
drivers/iommu/iommu.c | 8 +-
drivers/iommu/msm_iommu.c | 2 +-
drivers/iommu/omap-iommu.c | 2 +-
drivers/iommu/shmobile-iommu.c | 2 +-
drivers/iommu/tegra-gart.c | 2 +-
drivers/iommu/tegra-smmu.c | 2 +-
include/linux/iommu.h | 51 ++-
18 files changed, 2970 insertions(+), 21 deletions(-)
create mode 100644 drivers/iommu/fsl_pamu.c
create mode 100644 drivers/iommu/fsl_pamu.h
create mode 100644 drivers/iommu/fsl_pamu_domain.c
create mode 100644 drivers/iommu/fsl_pamu_domain.h

--
1.7.4.1


2013-03-28 20:02:18

by Varun Sethi

Subject: [PATCH 1/5 v11] iommu/fsl: Make iova dma_addr_t in the iommu_iova_to_phys API.

This is required for PAMU, which can support a window size of up
to 64G (even on a 32-bit kernel).
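
As a hypothetical example of the difference (the helper and the value below are
invented for illustration), a 64-bit dma_addr_t lets a caller on a 32-bit kernel
pass an iova beyond 4G without truncation:

#include <linux/iommu.h>

/* Assumes a PAMU domain whose DSA window is larger than 4G. */
static phys_addr_t example_lookup(struct iommu_domain *domain)
{
	/* 8G offset: fits in dma_addr_t, would be truncated by a 32-bit unsigned long */
	dma_addr_t iova = 0x200000000ULL;

	return iommu_iova_to_phys(domain, iova);
}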

Signed-off-by: Varun Sethi <[email protected]>
---
changes in v11:
- Made iova dma_addr_t instead of u64.
- no change in v10.
drivers/iommu/amd_iommu.c | 2 +-
drivers/iommu/exynos-iommu.c | 2 +-
drivers/iommu/intel-iommu.c | 2 +-
drivers/iommu/iommu.c | 3 +--
drivers/iommu/msm_iommu.c | 2 +-
drivers/iommu/omap-iommu.c | 2 +-
drivers/iommu/shmobile-iommu.c | 2 +-
drivers/iommu/tegra-gart.c | 2 +-
drivers/iommu/tegra-smmu.c | 2 +-
include/linux/iommu.h | 9 +++------
10 files changed, 12 insertions(+), 16 deletions(-)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 98f555d..ee5431d 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -3412,7 +3412,7 @@ static size_t amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova,
}

static phys_addr_t amd_iommu_iova_to_phys(struct iommu_domain *dom,
- unsigned long iova)
+ dma_addr_t iova)
{
struct protection_domain *domain = dom->priv;
unsigned long offset_mask;
diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu/exynos-iommu.c
index 238a3ca..3f32d64 100644
--- a/drivers/iommu/exynos-iommu.c
+++ b/drivers/iommu/exynos-iommu.c
@@ -1027,7 +1027,7 @@ done:
}

static phys_addr_t exynos_iommu_iova_to_phys(struct iommu_domain *domain,
- unsigned long iova)
+ dma_addr_t iova)
{
struct exynos_iommu_domain *priv = domain->priv;
unsigned long *entry;
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 0099667..6e0b9ff 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -4111,7 +4111,7 @@ static size_t intel_iommu_unmap(struct iommu_domain *domain,
}

static phys_addr_t intel_iommu_iova_to_phys(struct iommu_domain *domain,
- unsigned long iova)
+ dma_addr_t iova)
{
struct dmar_domain *dmar_domain = domain->priv;
struct dma_pte *pte;
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index b972d43..f730ed9 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -706,8 +706,7 @@ void iommu_detach_group(struct iommu_domain *domain, struct iommu_group *group)
}
EXPORT_SYMBOL_GPL(iommu_detach_group);

-phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain,
- unsigned long iova)
+phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
{
if (unlikely(domain->ops->iova_to_phys == NULL))
return 0;
diff --git a/drivers/iommu/msm_iommu.c b/drivers/iommu/msm_iommu.c
index 6a8870a..8ab4f41 100644
--- a/drivers/iommu/msm_iommu.c
+++ b/drivers/iommu/msm_iommu.c
@@ -554,7 +554,7 @@ fail:
}

static phys_addr_t msm_iommu_iova_to_phys(struct iommu_domain *domain,
- unsigned long va)
+ dma_addr_t va)
{
struct msm_priv *priv;
struct msm_iommu_drvdata *iommu_drvdata;
diff --git a/drivers/iommu/omap-iommu.c b/drivers/iommu/omap-iommu.c
index 6ac02fa..e02e5d7 100644
--- a/drivers/iommu/omap-iommu.c
+++ b/drivers/iommu/omap-iommu.c
@@ -1219,7 +1219,7 @@ static void omap_iommu_domain_destroy(struct iommu_domain *domain)
}

static phys_addr_t omap_iommu_iova_to_phys(struct iommu_domain *domain,
- unsigned long da)
+ dma_addr_t da)
{
struct omap_iommu_domain *omap_domain = domain->priv;
struct omap_iommu *oiommu = omap_domain->iommu_dev;
diff --git a/drivers/iommu/shmobile-iommu.c b/drivers/iommu/shmobile-iommu.c
index b6e8b57..d572863 100644
--- a/drivers/iommu/shmobile-iommu.c
+++ b/drivers/iommu/shmobile-iommu.c
@@ -296,7 +296,7 @@ done:
}

static phys_addr_t shmobile_iommu_iova_to_phys(struct iommu_domain *domain,
- unsigned long iova)
+ dma_addr_t iova)
{
struct shmobile_iommu_domain *sh_domain = domain->priv;
uint32_t l1entry = 0, l2entry = 0;
diff --git a/drivers/iommu/tegra-gart.c b/drivers/iommu/tegra-gart.c
index 8643757..4aec8be 100644
--- a/drivers/iommu/tegra-gart.c
+++ b/drivers/iommu/tegra-gart.c
@@ -279,7 +279,7 @@ static size_t gart_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
}

static phys_addr_t gart_iommu_iova_to_phys(struct iommu_domain *domain,
- unsigned long iova)
+ dma_addr_t iova)
{
struct gart_device *gart = domain->priv;
unsigned long pte;
diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
index b34e5fd..bc9b599 100644
--- a/drivers/iommu/tegra-smmu.c
+++ b/drivers/iommu/tegra-smmu.c
@@ -757,7 +757,7 @@ static size_t smmu_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
}

static phys_addr_t smmu_iommu_iova_to_phys(struct iommu_domain *domain,
- unsigned long iova)
+ dma_addr_t iova)
{
struct smmu_as *as = domain->priv;
unsigned long *pte;
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index ba3b8a9..bb0a0fc 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -91,8 +91,7 @@ struct iommu_ops {
phys_addr_t paddr, size_t size, int prot);
size_t (*unmap)(struct iommu_domain *domain, unsigned long iova,
size_t size);
- phys_addr_t (*iova_to_phys)(struct iommu_domain *domain,
- unsigned long iova);
+ phys_addr_t (*iova_to_phys)(struct iommu_domain *domain, dma_addr_t iova);
int (*domain_has_cap)(struct iommu_domain *domain,
unsigned long cap);
int (*add_device)(struct device *dev);
@@ -134,8 +133,7 @@ extern int iommu_map(struct iommu_domain *domain, unsigned long iova,
phys_addr_t paddr, size_t size, int prot);
extern size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova,
size_t size);
-extern phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain,
- unsigned long iova);
+extern phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova);
extern int iommu_domain_has_cap(struct iommu_domain *domain,
unsigned long cap);
extern void iommu_set_fault_handler(struct iommu_domain *domain,
@@ -267,8 +265,7 @@ static inline void iommu_domain_window_disable(struct iommu_domain *domain,
{
}

-static inline phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain,
- unsigned long iova)
+static inline phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
{
return 0;
}
--
1.7.4.1

2013-03-28 20:02:23

by Varun Sethi

Subject: [PATCH 2/5 v11] powerpc: Add iommu domain pointer to device archdata

Add an iommu domain pointer to device (powerpc) archdata. Devices
are attached to iommu domains, and this pointer provides a mechanism
to correlate a device with its associated iommu domain. This
field is set when a device is attached to a domain.
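
A minimal sketch of the intended usage on powerpc (helper names are invented;
this is not the driver code itself): the IOMMU driver stores the domain in
archdata at attach time and reads it back when it needs to go from a device to
its domain:

#include <linux/device.h>

static void example_record_domain(struct device *dev, void *domain_info)
{
#ifdef CONFIG_IOMMU_API
	/* set when the device is attached to an iommu_domain */
	dev->archdata.iommu_domain = domain_info;
#endif
}

static void *example_lookup_domain(struct device *dev)
{
#ifdef CONFIG_IOMMU_API
	return dev->archdata.iommu_domain;
#else
	return NULL;
#endif
}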

Signed-off-by: Varun Sethi <[email protected]>
---
- no change in v11.
- no change in v10.
- Added CONFIG_IOMMU_API in v9.
arch/powerpc/include/asm/device.h | 6 ++++++
1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/device.h b/arch/powerpc/include/asm/device.h
index 77e97dd..2d5c1c5 100644
--- a/arch/powerpc/include/asm/device.h
+++ b/arch/powerpc/include/asm/device.h
@@ -28,6 +28,12 @@ struct dev_archdata {
void *iommu_table_base;
} dma_data;

+ /* IOMMU domain information pointer. This would be set
+ * when this device is attached to an iommu_domain.
+ */
+#ifdef CONFIG_IOMMU_API
+ void *iommu_domain;
+#endif
#ifdef CONFIG_SWIOTLB
dma_addr_t max_direct_dma_addr;
#endif
--
1.7.4.1

2013-03-28 20:02:33

by Varun Sethi

Subject: [PATCH 3/5 v11] iommu/fsl: Add the window permission flag as a parameter to iommu_window_enable API.

Each iommu window can have access permissions associated with it. Extend the
window_enable API to incorporate window access permissions.

In the case of PAMU, each window can have its own set of permissions.
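
A rough usage sketch (the helper name, window numbers and sizes are invented for
illustration): with the prot argument, one window in a domain can be read-only
while another is read-write:

#include <linux/iommu.h>
#include <linux/sizes.h>

static int example_setup_windows(struct iommu_domain *domain,
				 phys_addr_t desc_base, phys_addr_t buf_base)
{
	int ret;

	/* Window 0: the device may only read descriptors */
	ret = iommu_domain_window_enable(domain, 0, desc_base, SZ_1M,
					 IOMMU_READ);
	if (ret)
		return ret;

	/* Window 1: the device may read and write data buffers */
	return iommu_domain_window_enable(domain, 1, buf_base, SZ_16M,
					  IOMMU_READ | IOMMU_WRITE);
}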

Signed-off-by: Varun Sethi <[email protected]>
---
- no change in v11.
- no change in v10.
drivers/iommu/iommu.c | 5 +++--
include/linux/iommu.h | 7 ++++---
2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index f730ed9..1d72b4f 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -853,12 +853,13 @@ EXPORT_SYMBOL_GPL(iommu_unmap);


int iommu_domain_window_enable(struct iommu_domain *domain, u32 wnd_nr,
- phys_addr_t paddr, u64 size)
+ phys_addr_t paddr, u64 size, int prot)
{
if (unlikely(domain->ops->domain_window_enable == NULL))
return -ENODEV;

- return domain->ops->domain_window_enable(domain, wnd_nr, paddr, size);
+ return domain->ops->domain_window_enable(domain, wnd_nr, paddr, size,
+ prot);
}
EXPORT_SYMBOL_GPL(iommu_domain_window_enable);

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index bb0a0fc..2727810 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -104,7 +104,7 @@ struct iommu_ops {

/* Window handling functions */
int (*domain_window_enable)(struct iommu_domain *domain, u32 wnd_nr,
- phys_addr_t paddr, u64 size);
+ phys_addr_t paddr, u64 size, int prot);
void (*domain_window_disable)(struct iommu_domain *domain, u32 wnd_nr);
/* Set the numer of window per domain */
int (*domain_set_windows)(struct iommu_domain *domain, u32 w_count);
@@ -169,7 +169,8 @@ extern int iommu_domain_set_attr(struct iommu_domain *domain, enum iommu_attr,

/* Window handling function prototypes */
extern int iommu_domain_window_enable(struct iommu_domain *domain, u32 wnd_nr,
- phys_addr_t offset, u64 size);
+ phys_addr_t offset, u64 size,
+ int prot);
extern void iommu_domain_window_disable(struct iommu_domain *domain, u32 wnd_nr);
/**
* report_iommu_fault() - report about an IOMMU fault to the IOMMU framework
@@ -255,7 +256,7 @@ static inline int iommu_unmap(struct iommu_domain *domain, unsigned long iova,

static inline int iommu_domain_window_enable(struct iommu_domain *domain,
u32 wnd_nr, phys_addr_t paddr,
- u64 size)
+ u64 size, int prot)
{
return -ENODEV;
}
--
1.7.4.1

2013-03-28 20:02:44

by Varun Sethi

Subject: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu implementation.

Following is a brief description of the PAMU hardware:
PAMU determines what action to take and whether to authorize the action on
the basis of the memory address, a Logical IO Device Number (LIODN), and
PAACT table (logically) indexed by LIODN and address. Hardware devices which
need to access memory must provide an LIODN in addition to the memory address.

Peripheral Access Authorization and Control Tables (PAACTs) are the primary
data structures used by PAMU. A PAACT is a table of peripheral access
authorization and control entries (PAACE). Each PAACE defines the range of
I/O bus address space that is accessible by the LIOD and the associated access
capabilities.

There are two types of PAACTs: primary PAACT (PPAACT) and secondary PAACT
(SPAACT). A given physical I/O device may be able to act as one or more
independent logical I/O devices (LIODs). Each such logical I/O device is
assigned an identifier called logical I/O device number (LIODN). A LIODN is
allocated a contiguous portion of the I/O bus address space called the DSA window
for performing DSA operations. The DSA window may optionally be divided into
multiple sub-windows, each of which may be used to map to a region in system
storage space. The first sub-window is referred to as the primary sub-window
and the remaining are called secondary sub-windows.
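
As a worked example of the encodings used by the driver below (these helpers
merely restate map_addrspace_size_to_wse()/map_subwindow_cnt_to_wce() from
fsl_pamu.c; the 64G and 16-subwindow figures are an assumed configuration, not a
requirement): a DSA window of size 2^(WSE+1) bytes can be split into 2^(WCE+1)
subwindows, the first of which lives in the primary PAACE:

#include <linux/log2.h>
#include <linux/types.h>

static unsigned int example_wse(u64 win_size)
{
	/* win_size = 2^(WSE+1); e.g. a 64G window gives WSE = 35 */
	return ilog2(win_size) - 1;
}

static unsigned int example_wce(u32 subwin_cnt)
{
	/* subwin_cnt = 2^(WCE+1); e.g. 16 subwindows give WCE = 3 */
	return ilog2(subwin_cnt) - 1;
}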

This patch provides the PAMU driver (fsl_pamu.c) and the corresponding IOMMU
API implementation (fsl_pamu_domain.c). The PAMU hardware driver (fsl_pamu.c)
has been derived from the work done by Ashish Kalra and Timur Tabi.

Signed-off-by: Timur Tabi <[email protected]>
Signed-off-by: Varun Sethi <[email protected]>
---
changes in v11:
- changed iova to dma_addr_t in iova_to_phys API.
changes in v10:
- Support for the new guts compatible string for T4 & B4 devices.
- Modified comment about port ID and mentioned the errata number.
- Fixed the issue where the data pointer was not freed in case of an error.
- Pass the data pointer while freeing the irq.
- While initializing the SPAACE entry, clear the valid bit.
changes in v9:
- Merged and created a single function to delete
a device from the domain list.
- Refactored the add_device API code.
- Renamed the paace and spaace init functions.
- Renamed functions for mapping windows and subwindows.
- Changed the MAX LIODN value to the MAX value u-boot can
program.
- Hard coded maximum number of subwindows.
changes in v8:
- implemented the new API for window based IOMMUs.
changes in v7:
- Set max_subwindows in the geometry attribute.
- Add checking for the maximum supported LIODN value.
- Use upper_32_bits and lower_32_bits macros while
initializing PAMU data structures.
changes in v6:
- Simplified complex conditional statements.
- Fixed indentation issues.
- Added comments for IOMMU API implementation.
changes in v5:
- Addressed comments from Timur.
changes in v4:
- Addressed comments from Timur and Scott.
changes in v3:
- Addressed comments by Kumar Gala
- dynamic fspi allocation
- fixed alignment check in map and unmap
arch/powerpc/sysdev/fsl_pci.h | 5 +
drivers/iommu/Kconfig | 8 +
drivers/iommu/Makefile | 1 +
drivers/iommu/fsl_pamu.c | 1269 +++++++++++++++++++++++++++++++++++++++
drivers/iommu/fsl_pamu.h | 405 +++++++++++++
drivers/iommu/fsl_pamu_domain.c | 1137 +++++++++++++++++++++++++++++++++++
drivers/iommu/fsl_pamu_domain.h | 85 +++
7 files changed, 2910 insertions(+), 0 deletions(-)
create mode 100644 drivers/iommu/fsl_pamu.c
create mode 100644 drivers/iommu/fsl_pamu.h
create mode 100644 drivers/iommu/fsl_pamu_domain.c
create mode 100644 drivers/iommu/fsl_pamu_domain.h

diff --git a/arch/powerpc/sysdev/fsl_pci.h b/arch/powerpc/sysdev/fsl_pci.h
index c495c00..feb34f6 100644
--- a/arch/powerpc/sysdev/fsl_pci.h
+++ b/arch/powerpc/sysdev/fsl_pci.h
@@ -14,6 +14,11 @@
#ifndef __POWERPC_FSL_PCI_H
#define __POWERPC_FSL_PCI_H

+
+/* FSL PCI controller BRR1 register */
+#define PCI_FSL_BRR1 0xbf8
+#define PCI_FSL_BRR1_VER 0xffff
+
#define PCIE_LTSSM 0x0404 /* PCIE Link Training and Status */
#define PCIE_LTSSM_L0 0x16 /* L0 state */
#define PCIE_IP_REV_2_2 0x02080202 /* PCIE IP block version Rev2.2 */
diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index c332fb9..3274333 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -17,6 +17,14 @@ config OF_IOMMU
def_bool y
depends on OF

+config FSL_PAMU
+ bool "Freescale IOMMU support"
+ depends on PPC_E500MC
+ select IOMMU_API
+ select GENERIC_ALLOCATOR
+ help
+ Freescale PAMU support.
+
# MSM IOMMU support
config MSM_IOMMU
bool "MSM IOMMU Support"
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index ef0e520..027d1af 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -15,3 +15,4 @@ obj-$(CONFIG_TEGRA_IOMMU_SMMU) += tegra-smmu.o
obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
obj-$(CONFIG_SHMOBILE_IOMMU) += shmobile-iommu.o
obj-$(CONFIG_SHMOBILE_IPMMU) += shmobile-ipmmu.o
+obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
diff --git a/drivers/iommu/fsl_pamu.c b/drivers/iommu/fsl_pamu.c
new file mode 100644
index 0000000..e0354dc
--- /dev/null
+++ b/drivers/iommu/fsl_pamu.c
@@ -0,0 +1,1269 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ *
+ * Copyright (C) 2013 Freescale Semiconductor, Inc.
+ *
+ */
+
+#define pr_fmt(fmt) "fsl-pamu: %s: " fmt, __func__
+
+#include <linux/init.h>
+#include <linux/iommu.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/mm.h>
+#include <linux/interrupt.h>
+#include <linux/device.h>
+#include <linux/of_platform.h>
+#include <linux/bootmem.h>
+#include <linux/genalloc.h>
+#include <asm/io.h>
+#include <asm/bitops.h>
+#include <asm/fsl_guts.h>
+
+#include "fsl_pamu.h"
+
+/* define indexes for each operation mapping scenario */
+#define OMI_QMAN 0x00
+#define OMI_FMAN 0x01
+#define OMI_QMAN_PRIV 0x02
+#define OMI_CAAM 0x03
+
+/* Handling access violations */
+#define make64(high, low) (((u64)(high) << 32) | (low))
+
+struct pamu_isr_data {
+ void __iomem *pamu_reg_base; /* Base address of PAMU regs*/
+ unsigned int count; /* The number of PAMUs */
+};
+
+static struct paace *ppaact;
+static struct paace *spaact;
+static struct ome *omt;
+
+/*
+ * Table for matching compatible strings, for device tree
+ * guts node, for QorIQ SOCs.
+ * "fsl,qoriq-device-config-2.0" corresponds to T4 & B4
+ * SOCs. For the older SOCs "fsl,qoriq-device-config-1.0"
+ * string would be used.
+*/
+static const struct of_device_id guts_device_ids[] = {
+ { .compatible = "fsl,qoriq-device-config-1.0", },
+ { .compatible = "fsl,qoriq-device-config-2.0", },
+ {}
+};
+
+/* maximum subwindows permitted per liodn */
+static u32 max_subwindow_count;
+
+/* Pool for fspi allocation */
+struct gen_pool *spaace_pool;
+
+/**
+ * pamu_get_max_subwin_cnt() - Return the maximum supported
+ * subwindow count per liodn.
+ *
+ */
+u32 pamu_get_max_subwin_cnt(void)
+{
+ return max_subwindow_count;
+}
+
+/**
+ * pamu_get_ppaace() - Return the primary PAACE
+ * @liodn: liodn PAACT index for desired PAACE
+ *
+ * Returns the ppaace pointer upon success, else returns
+ * null.
+ */
+static struct paace *pamu_get_ppaace(int liodn)
+{
+ if (!ppaact || liodn >= PAACE_NUMBER_ENTRIES) {
+ pr_err("PPAACT doesn't exist\n");
+ return NULL;
+ }
+
+ return &ppaact[liodn];
+}
+
+/**
+ * pamu_enable_liodn() - Set valid bit of PAACE
+ * @liodn: liodn PAACT index for desired PAACE
+ *
+ * Returns 0 upon success else error code < 0 returned
+ */
+int pamu_enable_liodn(int liodn)
+{
+ struct paace *ppaace;
+
+ ppaace = pamu_get_ppaace(liodn);
+ if (!ppaace) {
+ pr_err("Invalid primary paace entry\n");
+ return -ENOENT;
+ }
+
+ if (!get_bf(ppaace->addr_bitfields, PPAACE_AF_WSE)) {
+ pr_err("liodn %d not configured\n", liodn);
+ return -EINVAL;
+ }
+
+ /* Ensure that all other stores to the ppaace complete first */
+ mb();
+
+ ppaace->addr_bitfields |= PAACE_V_VALID;
+ mb();
+
+ return 0;
+}
+
+/**
+ * pamu_disable_liodn() - Clears valid bit of PAACE
+ * @liodn: liodn PAACT index for desired PAACE
+ *
+ * Returns 0 upon success else error code < 0 returned
+ */
+int pamu_disable_liodn(int liodn)
+{
+ struct paace *ppaace;
+
+ ppaace = pamu_get_ppaace(liodn);
+ if (!ppaace) {
+ pr_err("Invalid primary paace entry\n");
+ return -ENOENT;
+ }
+
+ set_bf(ppaace->addr_bitfields, PAACE_AF_V, PAACE_V_INVALID);
+ mb();
+
+ return 0;
+}
+
+/* Derive the window size encoding for a particular PAACE entry */
+static unsigned int map_addrspace_size_to_wse(phys_addr_t addrspace_size)
+{
+ /* Bug if not a power of 2 */
+ BUG_ON((addrspace_size & (addrspace_size - 1)));
+
+ /* window size is 2^(WSE+1) bytes */
+ return __ffs(addrspace_size >> PAMU_PAGE_SHIFT) + PAMU_PAGE_SHIFT - 1;
+}
+
+/* Derive the PAACE window count encoding for the subwindow count */
+static unsigned int map_subwindow_cnt_to_wce(u32 subwindow_cnt)
+{
+ /* subwindow count is 2^(WCE+1) */
+ return __ffs(subwindow_cnt) - 1;
+}
+
+/*
+ * Set the PAACE type as primary and set the coherency required domain
+ * attribute
+ */
+static void pamu_init_ppaace(struct paace *ppaace)
+{
+ set_bf(ppaace->addr_bitfields, PAACE_AF_PT, PAACE_PT_PRIMARY);
+
+ set_bf(ppaace->domain_attr.to_host.coherency_required, PAACE_DA_HOST_CR,
+ PAACE_M_COHERENCE_REQ);
+}
+
+/*
+ * Set the PAACE type as secondary and set the coherency required domain
+ * attribute.
+ */
+static void pamu_init_spaace(struct paace *spaace)
+{
+ set_bf(spaace->addr_bitfields, PAACE_AF_PT, PAACE_PT_SECONDARY);
+ set_bf(spaace->domain_attr.to_host.coherency_required, PAACE_DA_HOST_CR,
+ PAACE_M_COHERENCE_REQ);
+}
+
+/*
+ * Return the spaace (corresponding to the secondary window index)
+ * for a particular ppaace.
+ */
+static struct paace *pamu_get_spaace(struct paace *paace, u32 wnum)
+{
+ u32 subwin_cnt;
+ struct paace *spaace = NULL;
+
+ subwin_cnt = 1UL << (get_bf(paace->impl_attr, PAACE_IA_WCE) + 1);
+
+ if (wnum < subwin_cnt)
+ spaace = &spaact[paace->fspi + wnum];
+ else
+ pr_err("secondary paace out of bounds\n");
+
+ return spaace;
+}
+
+/**
+ * pamu_get_fspi_and_allocate() - Allocates fspi index and reserves subwindows
+ * required for primary PAACE in the secondary
+ * PAACE table.
+ * @subwin_cnt: Number of subwindows to be reserved.
+ *
+ * A PPAACE entry may have a number of associated subwindows. A subwindow
+ * corresponds to a SPAACE entry in the SPAACT table. Each PAACE entry stores
+ * the index (fspi) of the first SPAACE entry in the SPAACT table. This
+ * function returns the index of the first SPAACE entry. The remaining
+ * SPAACE entries are reserved contiguously from that index.
+ *
+ * Returns a valid fspi index in the range of 0 - SPAACE_NUMBER_ENTRIES on success.
+ * If no SPAACE entry is available or the allocator cannot reserve the required
+ * number of contiguous entries, the function returns ULONG_MAX, indicating a failure.
+ *
+*/
+static unsigned long pamu_get_fspi_and_allocate(u32 subwin_cnt)
+{
+ unsigned long spaace_addr;
+
+ spaace_addr = gen_pool_alloc(spaace_pool, subwin_cnt * sizeof(struct paace));
+ if (!spaace_addr)
+ return ULONG_MAX;
+
+ return (spaace_addr - (unsigned long)spaact) / (sizeof(struct paace));
+}
+
+/* Release the subwindows reserved for a particular LIODN */
+void pamu_free_subwins(int liodn)
+{
+ struct paace *ppaace;
+ u32 subwin_cnt, size;
+
+ ppaace = pamu_get_ppaace(liodn);
+ if (!ppaace) {
+ pr_err("Invalid liodn entry\n");
+ return;
+ }
+
+ if (get_bf(ppaace->addr_bitfields, PPAACE_AF_MW)) {
+ subwin_cnt = 1UL << (get_bf(ppaace->impl_attr, PAACE_IA_WCE) + 1);
+ size = (subwin_cnt - 1) * sizeof(struct paace);
+ gen_pool_free(spaace_pool, (unsigned long)&spaact[ppaace->fspi], size);
+ set_bf(ppaace->addr_bitfields, PPAACE_AF_MW, 0);
+ }
+}
+
+/*
+ * Function used for updating stash destination for the corresponding
+ * LIODN.
+ */
+int pamu_update_paace_stash(int liodn, u32 subwin, u32 value)
+{
+ struct paace *paace;
+
+ paace = pamu_get_ppaace(liodn);
+ if (!paace) {
+ pr_err("Invalid liodn entry\n");
+ return -ENOENT;
+ }
+ if (subwin) {
+ paace = pamu_get_spaace(paace, subwin - 1);
+ if (!paace) {
+ return -ENOENT;
+ }
+ }
+ set_bf(paace->impl_attr, PAACE_IA_CID, value);
+
+ mb();
+
+ return 0;
+}
+
+/* Disable a subwindow corresponding to the LIODN */
+int pamu_disable_spaace(int liodn, u32 subwin)
+{
+ struct paace *paace;
+
+ paace = pamu_get_ppaace(liodn);
+ if (!paace) {
+ pr_err("Invalid liodn entry\n");
+ return -ENOENT;
+ }
+ if (subwin) {
+ paace = pamu_get_spaace(paace, subwin - 1);
+ if (!paace) {
+ return -ENOENT;
+ }
+ set_bf(paace->addr_bitfields, PAACE_AF_V,
+ PAACE_V_INVALID);
+ } else {
+ set_bf(paace->addr_bitfields, PAACE_AF_AP,
+ PAACE_AP_PERMS_DENIED);
+ }
+
+ mb();
+
+ return 0;
+}
+
+
+/**
+ * pamu_config_ppaace() - Sets up PPAACE entry for specified liodn
+ *
+ * @liodn: Logical IO device number
+ * @win_addr: starting address of DSA window
+ * @win_size: size of DSA window
+ * @omi: Operation mapping index -- if ~omi == 0 then omi not defined
+ * @rpn: real (true physical) page number
+ * @stashid: cache stash id for associated cpu -- if ~stashid == 0 then
+ * stashid not defined
+ * @snoopid: snoop id for hardware coherency -- if ~snoopid == 0 then
+ * snoopid not defined
+ * @subwin_cnt: number of sub-windows
+ * @prot: window permissions
+ *
+ * Returns 0 upon success else error code < 0 returned
+ */
+int pamu_config_ppaace(int liodn, phys_addr_t win_addr, phys_addr_t win_size,
+ u32 omi, unsigned long rpn, u32 snoopid, u32 stashid,
+ u32 subwin_cnt, int prot)
+{
+ struct paace *ppaace;
+ unsigned long fspi;
+
+ if ((win_size & (win_size - 1)) || win_size < PAMU_PAGE_SIZE) {
+ pr_err("window size too small or not a power of two %llx\n", win_size);
+ return -EINVAL;
+ }
+
+ if (win_addr & (win_size - 1)) {
+ pr_err("window address is not aligned with window size\n");
+ return -EINVAL;
+ }
+
+ ppaace = pamu_get_ppaace(liodn);
+ if (!ppaace) {
+ return -ENOENT;
+ }
+
+ /* window size is 2^(WSE+1) bytes */
+ set_bf(ppaace->addr_bitfields, PPAACE_AF_WSE,
+ map_addrspace_size_to_wse(win_size));
+
+ pamu_init_ppaace(ppaace);
+
+ ppaace->wbah = win_addr >> (PAMU_PAGE_SHIFT + 20);
+ set_bf(ppaace->addr_bitfields, PPAACE_AF_WBAL,
+ (win_addr >> PAMU_PAGE_SHIFT));
+
+ /* set up operation mapping if it's configured */
+ if (omi < OME_NUMBER_ENTRIES) {
+ set_bf(ppaace->impl_attr, PAACE_IA_OTM, PAACE_OTM_INDEXED);
+ ppaace->op_encode.index_ot.omi = omi;
+ } else if (~omi != 0) {
+ pr_err("bad operation mapping index: %d\n", omi);
+ return -EINVAL;
+ }
+
+ /* configure stash id */
+ if (~stashid != 0)
+ set_bf(ppaace->impl_attr, PAACE_IA_CID, stashid);
+
+ /* configure snoop id */
+ if (~snoopid != 0)
+ ppaace->domain_attr.to_host.snpid = snoopid;
+
+ if (subwin_cnt) {
+ /* The first entry is in the primary PAACE instead */
+ fspi = pamu_get_fspi_and_allocate(subwin_cnt - 1);
+ if (fspi == ULONG_MAX) {
+ pr_err("spaace indexes exhausted\n");
+ return -EINVAL;
+ }
+
+ /* subwindow count is 2^(WCE+1) */
+ set_bf(ppaace->impl_attr, PAACE_IA_WCE,
+ map_subwindow_cnt_to_wce(subwin_cnt));
+ set_bf(ppaace->addr_bitfields, PPAACE_AF_MW, 0x1);
+ ppaace->fspi = fspi;
+ } else {
+ set_bf(ppaace->impl_attr, PAACE_IA_ATM, PAACE_ATM_WINDOW_XLATE);
+ ppaace->twbah = rpn >> 20;
+ set_bf(ppaace->win_bitfields, PAACE_WIN_TWBAL, rpn);
+ set_bf(ppaace->addr_bitfields, PAACE_AF_AP, prot);
+ set_bf(ppaace->impl_attr, PAACE_IA_WCE, 0);
+ set_bf(ppaace->addr_bitfields, PPAACE_AF_MW, 0);
+ }
+ mb();
+
+ return 0;
+}
+
+/**
+ * pamu_config_spaace() - Sets up SPAACE entry for specified subwindow
+ *
+ * @liodn: Logical IO device number
+ * @subwin_cnt: number of sub-windows associated with dma-window
+ * @subwin: subwindow index
+ * @subwin_size: size of subwindow
+ * @omi: Operation mapping index
+ * @rpn: real (true physical) page number
+ * @snoopid: snoop id for hardware coherency -- if ~snoopid == 0 then
+ * snoopid not defined
+ * @stashid: cache stash id for associated cpu
+ * @enable: enable/disable subwindow after reconfiguration
+ * @prot: sub window permissions
+ *
+ * Returns 0 upon success else error code < 0 returned
+ */
+int pamu_config_spaace(int liodn, u32 subwin_cnt, u32 subwin,
+ phys_addr_t subwin_size, u32 omi, unsigned long rpn,
+ u32 snoopid, u32 stashid, int enable, int prot)
+{
+ struct paace *paace;
+
+
+ /* setup sub-windows */
+ if (!subwin_cnt) {
+ pr_err("Invalid subwindow count\n");
+ return -EINVAL;
+ }
+
+ paace = pamu_get_ppaace(liodn);
+ if (subwin > 0 && subwin < subwin_cnt && paace) {
+ paace = pamu_get_spaace(paace, subwin - 1);
+
+ if (paace && !(paace->addr_bitfields & PAACE_V_VALID)) {
+ pamu_init_spaace(paace);
+ set_bf(paace->addr_bitfields, SPAACE_AF_LIODN, liodn);
+ }
+ }
+
+ if (!paace) {
+ pr_err("Invalid liodn entry\n");
+ return -ENOENT;
+ }
+
+ if (subwin_size & (subwin_size - 1) || subwin_size < PAMU_PAGE_SIZE) {
+ pr_err("subwindow size out of range, or not a power of 2\n");
+ return -EINVAL;
+ }
+
+ if (rpn == ULONG_MAX) {
+ pr_err("real page number out of range\n");
+ return -EINVAL;
+ }
+
+ /* window size is 2^(WSE+1) bytes */
+ set_bf(paace->win_bitfields, PAACE_WIN_SWSE,
+ map_addrspace_size_to_wse(subwin_size));
+
+ set_bf(paace->impl_attr, PAACE_IA_ATM, PAACE_ATM_WINDOW_XLATE);
+ paace->twbah = rpn >> 20;
+ set_bf(paace->win_bitfields, PAACE_WIN_TWBAL, rpn);
+ set_bf(paace->addr_bitfields, PAACE_AF_AP, prot);
+
+ /* configure snoop id */
+ if (~snoopid != 0)
+ paace->domain_attr.to_host.snpid = snoopid;
+
+ /* set up operation mapping if it's configured */
+ if (omi < OME_NUMBER_ENTRIES) {
+ set_bf(paace->impl_attr, PAACE_IA_OTM, PAACE_OTM_INDEXED);
+ paace->op_encode.index_ot.omi = omi;
+ } else if (~omi != 0) {
+ pr_err("bad operation mapping index: %d\n", omi);
+ return -EINVAL;
+ }
+
+ if (~stashid != 0)
+ set_bf(paace->impl_attr, PAACE_IA_CID, stashid);
+
+ smp_wmb();
+
+ if (enable)
+ paace->addr_bitfields |= PAACE_V_VALID;
+
+ mb();
+
+ return 0;
+}
+
+/**
+ * get_ome_index() - Returns the index in the operation mapping table
+ * for the device.
+ * @omi_index: pointer for storing the index value
+ * @dev: device for which the OMI index is required
+ *
+ */
+void get_ome_index(u32 *omi_index, struct device *dev)
+{
+ if (of_device_is_compatible(dev->of_node, "fsl,qman-portal"))
+ *omi_index = OMI_QMAN;
+ if (of_device_is_compatible(dev->of_node, "fsl,qman"))
+ *omi_index = OMI_QMAN_PRIV;
+}
+
+/**
+ * get_stash_id - Returns stash destination id corresponding to a
+ * cache type and vcpu.
+ * @stash_dest_hint: L1, L2 or L3
+ * @vcpu: vcpu target for a particular cache type.
+ *
+ * Returns stash id on success or ~(u32)0 on failure.
+ *
+ */
+u32 get_stash_id(u32 stash_dest_hint, u32 vcpu)
+{
+ const u32 *prop;
+ struct device_node *node;
+ u32 cache_level;
+ int len;
+
+ /* Fastpath, exit early if L3/CPC cache is target for stashing */
+ if (stash_dest_hint == IOMMU_ATTR_CACHE_L3) {
+ node = of_find_compatible_node(NULL, NULL,
+ "fsl,p4080-l3-cache-controller");
+ if (node) {
+ prop = of_get_property(node, "cache-stash-id", 0);
+ if (!prop) {
+ pr_err("missing cache-stash-id at %s\n", node->full_name);
+ of_node_put(node);
+ return ~(u32)0;
+ }
+ of_node_put(node);
+ return be32_to_cpup(prop);
+ }
+ return ~(u32)0;
+ }
+
+ for_each_node_by_type(node, "cpu") {
+ prop = of_get_property(node, "reg", &len);
+ if (be32_to_cpup(prop) == vcpu)
+ break;
+ }
+
+ /* find the hwnode that represents the cache */
+ for (cache_level = IOMMU_ATTR_CACHE_L1; cache_level < IOMMU_ATTR_CACHE_L3; cache_level++) {
+ if (stash_dest_hint == cache_level) {
+ prop = of_get_property(node, "cache-stash-id", 0);
+ if (!prop) {
+ pr_err("missing cache-stash-id at %s\n", node->full_name);
+ of_node_put(node);
+ return ~(u32)0;
+ }
+ of_node_put(node);
+ return be32_to_cpup(prop);
+ }
+
+ prop = of_get_property(node, "next-level-cache", 0);
+ if (!prop) {
+ pr_err("can't find next-level-cache at %s\n",
+ node->full_name);
+ of_node_put(node);
+ return ~(u32)0; /* can't traverse any further */
+ }
+ of_node_put(node);
+
+ /* advance to next node in cache hierarchy */
+ node = of_find_node_by_phandle(*prop);
+ if (!node) {
+ pr_err("Invalid node for cache hierarchy\n");
+ return ~(u32)0;
+ }
+ }
+
+ pr_err("stash dest not found for %d on vcpu %d\n",
+ stash_dest_hint, vcpu);
+ return ~(u32)0;
+}
+
+/* Identify if the PAACT table entry belongs to QMAN, BMAN or QMAN Portal */
+#define QMAN_PAACE 1
+#define QMAN_PORTAL_PAACE 2
+#define BMAN_PAACE 3
+
+/**
+ * Setup operation mapping and stash destinations for QMAN and QMAN portal.
+ * Memory accesses to QMAN and BMAN private memory need not be coherent, so
+ * clear the PAACE entry coherency attribute for them.
+ */
+static void setup_qbman_paace(struct paace *ppaace, int paace_type)
+{
+ switch (paace_type) {
+ case QMAN_PAACE:
+ set_bf(ppaace->impl_attr, PAACE_IA_OTM, PAACE_OTM_INDEXED);
+ ppaace->op_encode.index_ot.omi = OMI_QMAN_PRIV;
+ /* setup QMAN Private data stashing for the L3 cache */
+ set_bf(ppaace->impl_attr, PAACE_IA_CID, get_stash_id(IOMMU_ATTR_CACHE_L3, 0));
+ set_bf(ppaace->domain_attr.to_host.coherency_required, PAACE_DA_HOST_CR,
+ 0);
+ break;
+ case QMAN_PORTAL_PAACE:
+ set_bf(ppaace->impl_attr, PAACE_IA_OTM, PAACE_OTM_INDEXED);
+ ppaace->op_encode.index_ot.omi = OMI_QMAN;
+ /* Set DQRR and Frame stashing for the L3 cache */
+ set_bf(ppaace->impl_attr, PAACE_IA_CID, get_stash_id(IOMMU_ATTR_CACHE_L3, 0));
+ break;
+ case BMAN_PAACE:
+ set_bf(ppaace->domain_attr.to_host.coherency_required, PAACE_DA_HOST_CR,
+ 0);
+ break;
+ }
+}
+
+/**
+ * Setup the operation mapping table for various devices. This is a static
+ * table where each table index corresponds to a particular device. PAMU uses
+ * this table to translate device transactions to the appropriate corenet
+ * transactions.
+ */
+static void __init setup_omt(struct ome *omt)
+{
+ struct ome *ome;
+
+ /* Configure OMI_QMAN */
+ ome = &omt[OMI_QMAN];
+
+ ome->moe[IOE_READ_IDX] = EOE_VALID | EOE_READ;
+ ome->moe[IOE_EREAD0_IDX] = EOE_VALID | EOE_RSA;
+ ome->moe[IOE_WRITE_IDX] = EOE_VALID | EOE_WRITE;
+ ome->moe[IOE_EWRITE0_IDX] = EOE_VALID | EOE_WWSAO;
+
+ ome->moe[IOE_DIRECT0_IDX] = EOE_VALID | EOE_LDEC;
+ ome->moe[IOE_DIRECT1_IDX] = EOE_VALID | EOE_LDECPE;
+
+ /* Configure OMI_FMAN */
+ ome = &omt[OMI_FMAN];
+ ome->moe[IOE_READ_IDX] = EOE_VALID | EOE_READI;
+ ome->moe[IOE_WRITE_IDX] = EOE_VALID | EOE_WRITE;
+
+ /* Configure OMI_QMAN private */
+ ome = &omt[OMI_QMAN_PRIV];
+ ome->moe[IOE_READ_IDX] = EOE_VALID | EOE_READ;
+ ome->moe[IOE_WRITE_IDX] = EOE_VALID | EOE_WRITE;
+ ome->moe[IOE_EREAD0_IDX] = EOE_VALID | EOE_RSA;
+ ome->moe[IOE_EWRITE0_IDX] = EOE_VALID | EOE_WWSA;
+
+ /* Configure OMI_CAAM */
+ ome = &omt[OMI_CAAM];
+ ome->moe[IOE_READ_IDX] = EOE_VALID | EOE_READI;
+ ome->moe[IOE_WRITE_IDX] = EOE_VALID | EOE_WRITE;
+}
+
+/*
+ * Get the maximum number of PAACT table entries
+ * and subwindows supported by PAMU
+ */
+static void get_pamu_cap_values(unsigned long pamu_reg_base)
+{
+ u32 pc_val;
+
+ pc_val = in_be32((u32 *)(pamu_reg_base + PAMU_PC3));
+ /* Maximum number of subwindows per liodn */
+ max_subwindow_count = 1 << (1 + PAMU_PC3_MWCE(pc_val));
+}
+
+/* Setup PAMU registers pointing to PAACT, SPAACT and OMT */
+int setup_one_pamu(unsigned long pamu_reg_base, unsigned long pamu_reg_size,
+ phys_addr_t ppaact_phys, phys_addr_t spaact_phys,
+ phys_addr_t omt_phys)
+{
+ u32 *pc;
+ struct pamu_mmap_regs *pamu_regs;
+
+ pc = (u32 *) (pamu_reg_base + PAMU_PC);
+ pamu_regs = (struct pamu_mmap_regs *)
+ (pamu_reg_base + PAMU_MMAP_REGS_BASE);
+
+ /* set up pointers to corenet control blocks */
+
+ out_be32(&pamu_regs->ppbah, upper_32_bits(ppaact_phys));
+ out_be32(&pamu_regs->ppbal, lower_32_bits(ppaact_phys));
+ ppaact_phys = ppaact_phys + PAACT_SIZE;
+ out_be32(&pamu_regs->pplah, upper_32_bits(ppaact_phys));
+ out_be32(&pamu_regs->pplal, lower_32_bits(ppaact_phys));
+
+ out_be32(&pamu_regs->spbah, upper_32_bits(spaact_phys));
+ out_be32(&pamu_regs->spbal, lower_32_bits(spaact_phys));
+ spaact_phys = spaact_phys + SPAACT_SIZE;
+ out_be32(&pamu_regs->splah, upper_32_bits(spaact_phys));
+ out_be32(&pamu_regs->splal, lower_32_bits(spaact_phys));
+
+ out_be32(&pamu_regs->obah, upper_32_bits(omt_phys));
+ out_be32(&pamu_regs->obal, lower_32_bits(omt_phys));
+ omt_phys = omt_phys + OMT_SIZE;
+ out_be32(&pamu_regs->olah, upper_32_bits(omt_phys));
+ out_be32(&pamu_regs->olal, lower_32_bits(omt_phys));
+
+ /*
+ * set PAMU enable bit,
+ * allow ppaact & omt to be cached
+ * & enable PAMU access violation interrupts.
+ */
+
+ out_be32((u32 *)(pamu_reg_base + PAMU_PICS),
+ PAMU_ACCESS_VIOLATION_ENABLE);
+ out_be32(pc, PAMU_PC_PE | PAMU_PC_OCE | PAMU_PC_SPCC | PAMU_PC_PPCC);
+ return 0;
+}
+
+/* Enable all device LIODNS */
+static void __init setup_liodns(void)
+{
+ int i, len;
+ struct paace *ppaace;
+ struct device_node *node = NULL;
+ const u32 *prop;
+
+ for_each_node_with_property(node, "fsl,liodn") {
+ prop = of_get_property(node, "fsl,liodn", &len);
+ for (i = 0; i < len / sizeof(u32); i++) {
+ int liodn;
+
+ liodn = be32_to_cpup(&prop[i]);
+ if (liodn >= PAACE_NUMBER_ENTRIES) {
+ pr_err("Invalid LIODN value %d\n", liodn);
+ continue;
+ }
+ ppaace = pamu_get_ppaace(liodn);
+ pamu_init_ppaace(ppaace);
+ /* window size is 2^(WSE+1) bytes */
+ set_bf(ppaace->addr_bitfields, PPAACE_AF_WSE, 35);
+ ppaace->wbah = 0;
+ set_bf(ppaace->addr_bitfields, PPAACE_AF_WBAL, 0);
+ set_bf(ppaace->impl_attr, PAACE_IA_ATM,
+ PAACE_ATM_NO_XLATE);
+ set_bf(ppaace->addr_bitfields, PAACE_AF_AP,
+ PAACE_AP_PERMS_ALL);
+ if (of_device_is_compatible(node, "fsl,qman-portal"))
+ setup_qbman_paace(ppaace, QMAN_PORTAL_PAACE);
+ if (of_device_is_compatible(node, "fsl,qman"))
+ setup_qbman_paace(ppaace, QMAN_PAACE);
+ if (of_device_is_compatible(node, "fsl,bman"))
+ setup_qbman_paace(ppaace, BMAN_PAACE);
+ mb();
+ pamu_enable_liodn(liodn);
+ }
+ }
+}
+
+irqreturn_t pamu_av_isr(int irq, void *arg)
+{
+ struct pamu_isr_data *data = arg;
+ phys_addr_t phys;
+ unsigned int i, j;
+
+ pr_emerg("fsl-pamu: access violation interrupt\n");
+
+ for (i = 0; i < data->count; i++) {
+ void __iomem *p = data->pamu_reg_base + i * PAMU_OFFSET;
+ u32 pics = in_be32(p + PAMU_PICS);
+
+ if (pics & PAMU_ACCESS_VIOLATION_STAT) {
+ pr_emerg("POES1=%08x\n", in_be32(p + PAMU_POES1));
+ pr_emerg("POES2=%08x\n", in_be32(p + PAMU_POES2));
+ pr_emerg("AVS1=%08x\n", in_be32(p + PAMU_AVS1));
+ pr_emerg("AVS2=%08x\n", in_be32(p + PAMU_AVS2));
+ pr_emerg("AVA=%016llx\n", make64(in_be32(p + PAMU_AVAH),
+ in_be32(p + PAMU_AVAL)));
+ pr_emerg("UDAD=%08x\n", in_be32(p + PAMU_UDAD));
+ pr_emerg("POEA=%016llx\n", make64(in_be32(p + PAMU_POEAH),
+ in_be32(p + PAMU_POEAL)));
+
+ phys = make64(in_be32(p + PAMU_POEAH),
+ in_be32(p + PAMU_POEAL));
+
+ /* Assume that POEA points to a PAACE */
+ if (phys) {
+ u32 *paace = phys_to_virt(phys);
+
+ /* Only the first four words are relevant */
+ for (j = 0; j < 4; j++)
+ pr_emerg("PAACE[%u]=%08x\n", j, in_be32(paace + j));
+ }
+ }
+ }
+
+ panic("\n");
+ /* NOTREACHED */
+
+ return IRQ_HANDLED;
+}
+
+#define LAWAR_EN 0x80000000
+#define LAWAR_TARGET_MASK 0x0FF00000
+#define LAWAR_TARGET_SHIFT 20
+#define LAWAR_SIZE_MASK 0x0000003F
+#define LAWAR_CSDID_MASK 0x000FF000
+#define LAWAR_CSDID_SHIFT 12
+
+#define LAW_SIZE_4K 0xb
+
+struct ccsr_law {
+ u32 lawbarh; /* LAWn base address high */
+ u32 lawbarl; /* LAWn base address low */
+ u32 lawar; /* LAWn attributes */
+ u32 reserved;
+};
+
+#define make64(high, low) (((u64)(high) << 32) | (low))
+
+/*
+ * Create a coherence subdomain for a given memory block.
+ */
+static int __init create_csd(phys_addr_t phys, size_t size, u32 csd_port_id)
+{
+ struct device_node *np;
+ const __be32 *iprop;
+ void __iomem *lac = NULL; /* Local Access Control registers */
+ struct ccsr_law __iomem *law;
+ void __iomem *ccm = NULL;
+ u32 __iomem *csdids;
+ unsigned int i, num_laws, num_csds;
+ u32 law_target = 0;
+ u32 csd_id = 0;
+ int ret = 0;
+
+ np = of_find_compatible_node(NULL, NULL, "fsl,corenet-law");
+ if (!np)
+ return -ENODEV;
+
+ iprop = of_get_property(np, "fsl,num-laws", NULL);
+ if (!iprop) {
+ ret = -ENODEV;
+ goto error;
+ }
+
+ num_laws = be32_to_cpup(iprop);
+ if (!num_laws) {
+ ret = -ENODEV;
+ goto error;
+ }
+
+ lac = of_iomap(np, 0);
+ if (!lac) {
+ ret = -ENODEV;
+ goto error;
+ }
+
+ /* LAW registers are at offset 0xC00 */
+ law = lac + 0xC00;
+
+ of_node_put(np);
+
+ np = of_find_compatible_node(NULL, NULL, "fsl,corenet-cf");
+ if (!np) {
+ ret = -ENODEV;
+ goto error;
+ }
+
+ iprop = of_get_property(np, "fsl,ccf-num-csdids", NULL);
+ if (!iprop) {
+ ret = -ENODEV;
+ goto error;
+ }
+
+ num_csds = be32_to_cpup(iprop);
+ if (!num_csds) {
+ ret = -ENODEV;
+ goto error;
+ }
+
+ ccm = of_iomap(np, 0);
+ if (!ccm) {
+ ret = -ENOMEM;
+ goto error;
+ }
+
+ /* The undocumented CSDID registers are at offset 0x600 */
+ csdids = ccm + 0x600;
+
+ of_node_put(np);
+ np = NULL;
+
+ /* Find an unused coherence subdomain ID */
+ for (csd_id = 0; csd_id < num_csds; csd_id++) {
+ if (!csdids[csd_id])
+ break;
+ }
+
+ /* Store the Port ID in the (undocumented) proper CIDMRxx register */
+ csdids[csd_id] = csd_port_id;
+
+ /* Find the DDR LAW that maps to our buffer. */
+ for (i = 0; i < num_laws; i++) {
+ if (law[i].lawar & LAWAR_EN) {
+ phys_addr_t law_start, law_end;
+
+ law_start = make64(law[i].lawbarh, law[i].lawbarl);
+ law_end = law_start +
+ (2ULL << (law[i].lawar & LAWAR_SIZE_MASK));
+
+ if (law_start <= phys && phys < law_end) {
+ law_target = law[i].lawar & LAWAR_TARGET_MASK;
+ break;
+ }
+ }
+ }
+
+ if (i == 0 || i == num_laws) {
+ /* This should never happen */
+ ret = -ENOENT;
+ goto error;
+ }
+
+ /* Find a free LAW entry */
+ while (law[--i].lawar & LAWAR_EN) {
+ if (i == 0) {
+ /* No higher priority LAW slots available */
+ ret = -ENOENT;
+ goto error;
+ }
+ }
+
+ law[i].lawbarh = upper_32_bits(phys);
+ law[i].lawbarl = lower_32_bits(phys);
+ wmb();
+ law[i].lawar = LAWAR_EN | law_target | (csd_id << LAWAR_CSDID_SHIFT) |
+ (LAW_SIZE_4K + get_order(size));
+ wmb();
+
+error:
+ if (ccm)
+ iounmap(ccm);
+
+ if (lac)
+ iounmap(lac);
+
+ if (np)
+ of_node_put(np);
+
+ return ret;
+}
+
+/*
+ * Table of SVRs and the corresponding PORT_ID values. Port ID corresponds to a
+ * bit map of snoopers for a given range of memory mapped by a LAW.
+ *
+ * All future CoreNet-enabled SOCs will have this erratum (A-004510) fixed, so this
+ * table should never need to be updated. SVRs are guaranteed to be unique, so
+ * there is no worry that a future SOC will inadvertently have one of these
+ * values.
+ */
+static const struct {
+ u32 svr;
+ u32 port_id;
+} port_id_map[] = {
+ {0x82100010, 0xFF000000}, /* P2040 1.0 */
+ {0x82100011, 0xFF000000}, /* P2040 1.1 */
+ {0x82100110, 0xFF000000}, /* P2041 1.0 */
+ {0x82100111, 0xFF000000}, /* P2041 1.1 */
+ {0x82110310, 0xFF000000}, /* P3041 1.0 */
+ {0x82110311, 0xFF000000}, /* P3041 1.1 */
+ {0x82010020, 0xFFF80000}, /* P4040 2.0 */
+ {0x82000020, 0xFFF80000}, /* P4080 2.0 */
+ {0x82210010, 0xFC000000}, /* P5010 1.0 */
+ {0x82210020, 0xFC000000}, /* P5010 2.0 */
+ {0x82200010, 0xFC000000}, /* P5020 1.0 */
+ {0x82050010, 0xFF800000}, /* P5021 1.0 */
+ {0x82040010, 0xFF800000}, /* P5040 1.0 */
+};
+
+#define SVR_SECURITY 0x80000 /* The Security (E) bit */
+
+static int __init fsl_pamu_probe(struct platform_device *pdev)
+{
+ void __iomem *pamu_regs = NULL;
+ struct ccsr_guts __iomem *guts_regs = NULL;
+ u32 pamubypenr, pamu_counter;
+ unsigned long pamu_reg_off;
+ unsigned long pamu_reg_base;
+ struct pamu_isr_data *data = NULL;
+ struct device_node *guts_node;
+ u64 size;
+ struct page *p;
+ int ret = 0;
+ int irq;
+ phys_addr_t ppaact_phys;
+ phys_addr_t spaact_phys;
+ phys_addr_t omt_phys;
+ size_t mem_size = 0;
+ unsigned int order = 0;
+ u32 csd_port_id = 0;
+ unsigned i;
+ /*
+ * enumerate all PAMUs and allocate and setup PAMU tables
+ * for each of them,
+ * NOTE : All PAMUs share the same LIODN tables.
+ */
+
+ pamu_regs = of_iomap(pdev->dev.of_node, 0);
+ if (!pamu_regs) {
+ dev_err(&pdev->dev, "ioremap of PAMU node failed\n");
+ return -ENOMEM;
+ }
+ of_get_address(pdev->dev.of_node, 0, &size, NULL);
+
+ irq = irq_of_parse_and_map(pdev->dev.of_node, 0);
+ if (irq == NO_IRQ) {
+ dev_warn(&pdev->dev, "no interrupts listed in PAMU node\n");
+ goto error;
+ }
+
+ data = kzalloc(sizeof(struct pamu_isr_data), GFP_KERNEL);
+ if (!data) {
+ dev_err(&pdev->dev, "PAMU isr data memory allocation failed\n");
+ ret = -ENOMEM;
+ goto error;
+ }
+ data->pamu_reg_base = pamu_regs;
+ data->count = size / PAMU_OFFSET;
+
+ /* The ISR needs access to the regs, so we won't iounmap them */
+ ret = request_irq(irq, pamu_av_isr, 0, "pamu", data);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "error %i installing ISR for irq %i\n",
+ ret, irq);
+ goto error;
+ }
+
+ guts_node = of_find_matching_node(NULL, guts_device_ids);
+ if (!guts_node) {
+ dev_err(&pdev->dev, "could not find GUTS node %s\n",
+ pdev->dev.of_node->full_name);
+ ret = -ENODEV;
+ goto error;
+ }
+
+ guts_regs = of_iomap(guts_node, 0);
+ of_node_put(guts_node);
+ if (!guts_regs) {
+ dev_err(&pdev->dev, "ioremap of GUTS node failed\n");
+ ret = -ENODEV;
+ goto error;
+ }
+
+ /* read in the PAMU capability registers */
+ get_pamu_cap_values((unsigned long)pamu_regs);
+ /*
+ * To simplify the allocation of a coherency domain, we allocate the
+ * PAACT and the OMT in the same memory buffer. Unfortunately, this
+ * wastes more memory compared to allocating the buffers separately.
+ */
+ /* Determine how much memory we need */
+ mem_size = (PAGE_SIZE << get_order(PAACT_SIZE)) +
+ (PAGE_SIZE << get_order(SPAACT_SIZE)) +
+ (PAGE_SIZE << get_order(OMT_SIZE));
+ order = get_order(mem_size);
+
+ p = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
+ if (!p) {
+ dev_err(&pdev->dev, "unable to allocate PAACT/SPAACT/OMT block\n");
+ ret = -ENOMEM;
+ goto error;
+ }
+
+ ppaact = page_address(p);
+ ppaact_phys = page_to_phys(p);
+
+ /* Make sure the memory is naturally aligned */
+ if (ppaact_phys & ((PAGE_SIZE << order) - 1)) {
+ dev_err(&pdev->dev, "PAACT/OMT block is unaligned\n");
+ ret = -ENOMEM;
+ goto error;
+ }
+
+ spaact = (void *)ppaact + (PAGE_SIZE << get_order(PAACT_SIZE));
+ omt = (void *)spaact + (PAGE_SIZE << get_order(SPAACT_SIZE));
+
+ dev_dbg(&pdev->dev, "ppaact virt=%p phys=0x%llx\n", ppaact,
+ (unsigned long long) ppaact_phys);
+
+ /* Check to see if we need to implement the work-around on this SOC */
+
+ /* Determine the Port ID for our coherence subdomain */
+ for (i = 0; i < ARRAY_SIZE(port_id_map); i++) {
+ if (port_id_map[i].svr == (mfspr(SPRN_SVR) & ~SVR_SECURITY)) {
+ csd_port_id = port_id_map[i].port_id;
+ dev_dbg(&pdev->dev, "found matching SVR %08x\n",
+ port_id_map[i].svr);
+ break;
+ }
+ }
+
+ if (csd_port_id) {
+ dev_dbg(&pdev->dev, "creating coherency subdomain at address "
+ "0x%llx, size %zu, port id 0x%08x", ppaact_phys,
+ mem_size, csd_port_id);
+
+ ret = create_csd(ppaact_phys, mem_size, csd_port_id);
+ if (ret) {
+ dev_err(&pdev->dev, "could not create coherence "
+ "subdomain\n");
+ return ret;
+ }
+ }
+
+ spaact_phys = virt_to_phys(spaact);
+ omt_phys = virt_to_phys(omt);
+
+ spaace_pool = gen_pool_create(ilog2(sizeof(struct paace)), -1);
+ if (!spaace_pool) {
+ ret = -ENOMEM;
+ dev_err(&pdev->dev, "PAMU : failed to allocate spaace gen pool\n");
+ goto error;
+ }
+
+ ret = gen_pool_add(spaace_pool, (unsigned long)spaact, SPAACT_SIZE, -1);
+ if (ret)
+ goto error_genpool;
+
+ pamubypenr = in_be32(&guts_regs->pamubypenr);
+
+ for (pamu_reg_off = 0, pamu_counter = 0x80000000; pamu_reg_off < size;
+ pamu_reg_off += PAMU_OFFSET, pamu_counter >>= 1) {
+
+ pamu_reg_base = (unsigned long) pamu_regs + pamu_reg_off;
+ setup_one_pamu(pamu_reg_base, pamu_reg_off, ppaact_phys,
+ spaact_phys, omt_phys);
+ /* Disable PAMU bypass for this PAMU */
+ pamubypenr &= ~pamu_counter;
+ }
+
+ setup_omt(omt);
+
+ /* Enable all relevant PAMU(s) */
+ out_be32(&guts_regs->pamubypenr, pamubypenr);
+
+ iounmap(guts_regs);
+
+ /* Enable DMA for the LIODNs in the device tree*/
+
+ setup_liodns();
+
+ return 0;
+
+error_genpool:
+ gen_pool_destroy(spaace_pool);
+
+error:
+ if (irq != NO_IRQ)
+ free_irq(irq, data);
+
+ if (data) {
+ memset(data, 0, sizeof(struct pamu_isr_data));
+ kfree(data);
+ }
+
+ if (pamu_regs)
+ iounmap(pamu_regs);
+
+ if (guts_regs)
+ iounmap(guts_regs);
+
+ if (ppaact)
+ free_pages((unsigned long)ppaact, order);
+
+ ppaact = NULL;
+
+ return ret;
+}
+
+static const struct of_device_id fsl_of_pamu_ids[] = {
+ {
+ .compatible = "fsl,p4080-pamu",
+ },
+ {
+ .compatible = "fsl,pamu",
+ },
+ {},
+};
+
+static struct platform_driver fsl_of_pamu_driver = {
+ .driver = {
+ .name = "fsl-of-pamu",
+ .owner = THIS_MODULE,
+ },
+ .probe = fsl_pamu_probe,
+};
+
+static __init int fsl_pamu_init(void)
+{
+ struct platform_device *pdev = NULL;
+ struct device_node *np;
+ int ret;
+
+ /*
+ * The normal OF process calls the probe function at some
+ * indeterminate later time, after most drivers have loaded. This is
+ * too late for us, because PAMU clients (like the Qman driver)
+ * depend on PAMU being initialized early.
+ *
+ * So instead, we "manually" call our probe function by creating the
+ * platform devices ourselves.
+ */
+
+ /*
+ * We assume that there is only one PAMU node in the device tree. A
+ * single PAMU node represents all of the PAMU devices in the SOC
+ * already. Everything else already makes that assumption, and the
+ * binding for the PAMU nodes doesn't allow for any parent-child
+ * relationships anyway. In other words, support for more than one
+ * PAMU node would require significant changes to a lot of code.
+ */
+
+ np = of_find_compatible_node(NULL, NULL, "fsl,pamu");
+ if (!np) {
+ pr_err("fsl-pamu: could not find a PAMU node\n");
+ return -ENODEV;
+ }
+
+ ret = platform_driver_register(&fsl_of_pamu_driver);
+ if (ret) {
+ pr_err("fsl-pamu: could not register driver (err=%i)\n", ret);
+ goto error_driver_register;
+ }
+
+ pdev = platform_device_alloc("fsl-of-pamu", 0);
+ if (!pdev) {
+ pr_err("fsl-pamu: could not allocate device %s\n",
+ np->full_name);
+ ret = -ENOMEM;
+ goto error_device_alloc;
+ }
+ pdev->dev.of_node = of_node_get(np);
+
+ ret = pamu_domain_init();
+ if (ret)
+ goto error_device_add;
+
+ ret = platform_device_add(pdev);
+ if (ret) {
+ pr_err("fsl-pamu: could not add device %s (err=%i)\n",
+ np->full_name, ret);
+ goto error_device_add;
+ }
+
+ return 0;
+
+error_device_add:
+ of_node_put(pdev->dev.of_node);
+ pdev->dev.of_node = NULL;
+
+ platform_device_put(pdev);
+
+error_device_alloc:
+ platform_driver_unregister(&fsl_of_pamu_driver);
+
+error_driver_register:
+ of_node_put(np);
+
+ return ret;
+}
+arch_initcall(fsl_pamu_init);
diff --git a/drivers/iommu/fsl_pamu.h b/drivers/iommu/fsl_pamu.h
new file mode 100644
index 0000000..83cbd26
--- /dev/null
+++ b/drivers/iommu/fsl_pamu.h
@@ -0,0 +1,405 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ *
+ * Copyright (C) 2013 Freescale Semiconductor, Inc.
+ *
+ */
+
+#ifndef __FSL_PAMU_H
+#define __FSL_PAMU_H
+
+/* Bit Field macros
+ * v = bit field variable; m = mask, m##_SHIFT = shift, x = value to load
+ */
+#define set_bf(v, m, x) (v = ((v) & ~(m)) | (((x) << (m##_SHIFT)) & (m)))
+#define get_bf(v, m) (((v) & (m)) >> (m##_SHIFT))
+
+/* PAMU CCSR space */
+#define PAMU_PGC 0x00000000 /* Allows all peripheral accesses */
+#define PAMU_PE 0x40000000 /* enable PAMU */
+
+/* PAMU_OFFSET to the next pamu space in ccsr */
+#define PAMU_OFFSET 0x1000
+
+#define PAMU_MMAP_REGS_BASE 0
+
+struct pamu_mmap_regs {
+ u32 ppbah;
+ u32 ppbal;
+ u32 pplah;
+ u32 pplal;
+ u32 spbah;
+ u32 spbal;
+ u32 splah;
+ u32 splal;
+ u32 obah;
+ u32 obal;
+ u32 olah;
+ u32 olal;
+};
+
+/* PAMU Error Registers */
+#define PAMU_POES1 0x0040
+#define PAMU_POES2 0x0044
+#define PAMU_POEAH 0x0048
+#define PAMU_POEAL 0x004C
+#define PAMU_AVS1 0x0050
+#define PAMU_AVS1_AV 0x1
+#define PAMU_AVS1_OTV 0x6
+#define PAMU_AVS1_APV 0x78
+#define PAMU_AVS1_WAV 0x380
+#define PAMU_AVS1_LAV 0x1c00
+#define PAMU_AVS1_GCV 0x2000
+#define PAMU_AVS1_PDV 0x4000
+#define PAMU_AV_MASK (PAMU_AVS1_AV | PAMU_AVS1_OTV | PAMU_AVS1_APV | PAMU_AVS1_WAV \
+ | PAMU_AVS1_LAV | PAMU_AVS1_GCV | PAMU_AVS1_PDV)
+#define PAMU_AVS1_LIODN_SHIFT 16
+#define PAMU_LAV_LIODN_NOT_IN_PPAACT 0x400
+
+#define PAMU_AVS2 0x0054
+#define PAMU_AVAH 0x0058
+#define PAMU_AVAL 0x005C
+#define PAMU_EECTL 0x0060
+#define PAMU_EEDIS 0x0064
+#define PAMU_EEINTEN 0x0068
+#define PAMU_EEDET 0x006C
+#define PAMU_EEATTR 0x0070
+#define PAMU_EEAHI 0x0074
+#define PAMU_EEALO 0x0078
+#define PAMU_EEDHI 0X007C
+#define PAMU_EEDLO 0x0080
+#define PAMU_EECC 0x0084
+#define PAMU_UDAD 0x0090
+
+/* PAMU Revision Registers */
+#define PAMU_PR1 0x0BF8
+#define PAMU_PR2 0x0BFC
+
+/* PAMU Capabilities Registers */
+#define PAMU_PC1 0x0C00
+#define PAMU_PC2 0x0C04
+#define PAMU_PC3 0x0C08
+#define PAMU_PC4 0x0C0C
+
+/* PAMU Control Register */
+#define PAMU_PC 0x0C10
+
+/* PAMU control defs */
+#define PAMU_CONTROL 0x0C10
+#define PAMU_PC_PGC 0x80000000 /* PAMU gate closed bit */
+#define PAMU_PC_PE 0x40000000 /* PAMU enable bit */
+#define PAMU_PC_SPCC 0x00000010 /* sPAACE cache enable */
+#define PAMU_PC_PPCC 0x00000001 /* pPAACE cache enable */
+#define PAMU_PC_OCE 0x00001000 /* OMT cache enable */
+
+#define PAMU_PFA1 0x0C14
+#define PAMU_PFA2 0x0C18
+
+#define PAMU_PC2_MLIODN(X) ((X) >> 16)
+#define PAMU_PC3_MWCE(X) (((X) >> 21) & 0xf)
+
+/* PAMU Interrupt control and Status Register */
+#define PAMU_PICS 0x0C1C
+#define PAMU_ACCESS_VIOLATION_STAT 0x8
+#define PAMU_ACCESS_VIOLATION_ENABLE 0x4
+
+/* PAMU Debug Registers */
+#define PAMU_PD1 0x0F00
+#define PAMU_PD2 0x0F04
+#define PAMU_PD3 0x0F08
+#define PAMU_PD4 0x0F0C
+
+#define PAACE_AP_PERMS_DENIED 0x0
+#define PAACE_AP_PERMS_QUERY 0x1
+#define PAACE_AP_PERMS_UPDATE 0x2
+#define PAACE_AP_PERMS_ALL 0x3
+
+#define PAACE_DD_TO_HOST 0x0
+#define PAACE_DD_TO_IO 0x1
+#define PAACE_PT_PRIMARY 0x0
+#define PAACE_PT_SECONDARY 0x1
+#define PAACE_V_INVALID 0x0
+#define PAACE_V_VALID 0x1
+#define PAACE_MW_SUBWINDOWS 0x1
+
+#define PAACE_WSE_4K 0xB
+#define PAACE_WSE_8K 0xC
+#define PAACE_WSE_16K 0xD
+#define PAACE_WSE_32K 0xE
+#define PAACE_WSE_64K 0xF
+#define PAACE_WSE_128K 0x10
+#define PAACE_WSE_256K 0x11
+#define PAACE_WSE_512K 0x12
+#define PAACE_WSE_1M 0x13
+#define PAACE_WSE_2M 0x14
+#define PAACE_WSE_4M 0x15
+#define PAACE_WSE_8M 0x16
+#define PAACE_WSE_16M 0x17
+#define PAACE_WSE_32M 0x18
+#define PAACE_WSE_64M 0x19
+#define PAACE_WSE_128M 0x1A
+#define PAACE_WSE_256M 0x1B
+#define PAACE_WSE_512M 0x1C
+#define PAACE_WSE_1G 0x1D
+#define PAACE_WSE_2G 0x1E
+#define PAACE_WSE_4G 0x1F
+
+#define PAACE_DID_PCI_EXPRESS_1 0x00
+#define PAACE_DID_PCI_EXPRESS_2 0x01
+#define PAACE_DID_PCI_EXPRESS_3 0x02
+#define PAACE_DID_PCI_EXPRESS_4 0x03
+#define PAACE_DID_LOCAL_BUS 0x04
+#define PAACE_DID_SRIO 0x0C
+#define PAACE_DID_MEM_1 0x10
+#define PAACE_DID_MEM_2 0x11
+#define PAACE_DID_MEM_3 0x12
+#define PAACE_DID_MEM_4 0x13
+#define PAACE_DID_MEM_1_2 0x14
+#define PAACE_DID_MEM_3_4 0x15
+#define PAACE_DID_MEM_1_4 0x16
+#define PAACE_DID_BM_SW_PORTAL 0x18
+#define PAACE_DID_PAMU 0x1C
+#define PAACE_DID_CAAM 0x21
+#define PAACE_DID_QM_SW_PORTAL 0x3C
+#define PAACE_DID_CORE0_INST 0x80
+#define PAACE_DID_CORE0_DATA 0x81
+#define PAACE_DID_CORE1_INST 0x82
+#define PAACE_DID_CORE1_DATA 0x83
+#define PAACE_DID_CORE2_INST 0x84
+#define PAACE_DID_CORE2_DATA 0x85
+#define PAACE_DID_CORE3_INST 0x86
+#define PAACE_DID_CORE3_DATA 0x87
+#define PAACE_DID_CORE4_INST 0x88
+#define PAACE_DID_CORE4_DATA 0x89
+#define PAACE_DID_CORE5_INST 0x8A
+#define PAACE_DID_CORE5_DATA 0x8B
+#define PAACE_DID_CORE6_INST 0x8C
+#define PAACE_DID_CORE6_DATA 0x8D
+#define PAACE_DID_CORE7_INST 0x8E
+#define PAACE_DID_CORE7_DATA 0x8F
+#define PAACE_DID_BROADCAST 0xFF
+
+#define PAACE_ATM_NO_XLATE 0x00
+#define PAACE_ATM_WINDOW_XLATE 0x01
+#define PAACE_ATM_PAGE_XLATE 0x02
+#define PAACE_ATM_WIN_PG_XLATE \
+ (PAACE_ATM_WINDOW_XLATE | PAACE_ATM_PAGE_XLATE)
+#define PAACE_OTM_NO_XLATE 0x00
+#define PAACE_OTM_IMMEDIATE 0x01
+#define PAACE_OTM_INDEXED 0x02
+#define PAACE_OTM_RESERVED 0x03
+
+#define PAACE_M_COHERENCE_REQ 0x01
+
+#define PAACE_PID_0 0x0
+#define PAACE_PID_1 0x1
+#define PAACE_PID_2 0x2
+#define PAACE_PID_3 0x3
+#define PAACE_PID_4 0x4
+#define PAACE_PID_5 0x5
+#define PAACE_PID_6 0x6
+#define PAACE_PID_7 0x7
+
+#define PAACE_TCEF_FORMAT0_8B 0x00
+#define PAACE_TCEF_FORMAT1_RSVD 0x01
+/*
+ * Hard coded value for the PAACT size to accommodate
+ * maximum LIODN value generated by u-boot.
+ */
+#define PAACE_NUMBER_ENTRIES 0x500
+/* Hard coded value for the SPAACT size */
+#define SPAACE_NUMBER_ENTRIES 0x800
+
+#define OME_NUMBER_ENTRIES 16
+
+/* PAACE Bit Field Defines */
+#define PPAACE_AF_WBAL 0xfffff000
+#define PPAACE_AF_WBAL_SHIFT 12
+#define PPAACE_AF_WSE 0x00000fc0
+#define PPAACE_AF_WSE_SHIFT 6
+#define PPAACE_AF_MW 0x00000020
+#define PPAACE_AF_MW_SHIFT 5
+
+#define SPAACE_AF_LIODN 0xffff0000
+#define SPAACE_AF_LIODN_SHIFT 16
+
+#define PAACE_AF_AP 0x00000018
+#define PAACE_AF_AP_SHIFT 3
+#define PAACE_AF_DD 0x00000004
+#define PAACE_AF_DD_SHIFT 2
+#define PAACE_AF_PT 0x00000002
+#define PAACE_AF_PT_SHIFT 1
+#define PAACE_AF_V 0x00000001
+#define PAACE_AF_V_SHIFT 0
+
+#define PAACE_DA_HOST_CR 0x80
+#define PAACE_DA_HOST_CR_SHIFT 7
+
+#define PAACE_IA_CID 0x00FF0000
+#define PAACE_IA_CID_SHIFT 16
+#define PAACE_IA_WCE 0x000000F0
+#define PAACE_IA_WCE_SHIFT 4
+#define PAACE_IA_ATM 0x0000000C
+#define PAACE_IA_ATM_SHIFT 2
+#define PAACE_IA_OTM 0x00000003
+#define PAACE_IA_OTM_SHIFT 0
+
+#define PAACE_WIN_TWBAL 0xfffff000
+#define PAACE_WIN_TWBAL_SHIFT 12
+#define PAACE_WIN_SWSE 0x00000fc0
+#define PAACE_WIN_SWSE_SHIFT 6
+
+/* PAMU Data Structures */
+/* primary / secondary PAACT entry (PAACE) */
+struct paace {
+ /* PAACE Offset 0x00 */
+ u32 wbah; /* only valid for Primary PAACE */
+ u32 addr_bitfields; /* See P/S PAACE_AF_* */
+
+ /* PAACE Offset 0x08 */
+ /* Interpretation of first 32 bits dependent on DD above */
+ union {
+ struct {
+ /* Destination ID, see PAACE_DID_* defines */
+ u8 did;
+ /* Partition ID */
+ u8 pid;
+ /* Snoop ID */
+ u8 snpid;
+ /* coherency_required : 1 reserved : 7 */
+ u8 coherency_required; /* See PAACE_DA_* */
+ } to_host;
+ struct {
+ /* Destination ID, see PAACE_DID_* defines */
+ u8 did;
+ u8 reserved1;
+ u16 reserved2;
+ } to_io;
+ } domain_attr;
+
+ /* Implementation attributes + window count + address & operation translation modes */
+ u32 impl_attr; /* See PAACE_IA_* */
+
+ /* PAACE Offset 0x10 */
+ /* Translated window base address */
+ u32 twbah;
+ u32 win_bitfields; /* See PAACE_WIN_* */
+
+ /* PAACE Offset 0x18 */
+ /* first secondary paace entry */
+ u32 fspi; /* only valid for Primary PAACE */
+ union {
+ struct {
+ u8 ioea;
+ u8 moea;
+ u8 ioeb;
+ u8 moeb;
+ } immed_ot;
+ struct {
+ u16 reserved;
+ u16 omi;
+ } index_ot;
+ } op_encode;
+
+ /* PAACE Offsets 0x20-0x38 */
+ u32 reserved[8]; /* not currently implemented */
+};
+
+/* OME : Operation Mapping Entry
+ * MOE : Mapped Operation Encoding
+ * The operation mapping table is a table containing operation mapping entries (OMEs).
+ * The index of a particular OME is programmed in the PAACE entry for translation
+ * of inbound I/O operations corresponding to an LIODN. The OMT is used for translation
+ * only in the indexed translation mode. Each OME contains 128 bytes of mapped
+ * operation encodings, where each byte represents one MOE.
+ */
+#define NUM_MOE 128
+struct ome {
+ u8 moe[NUM_MOE];
+} __attribute__((packed));
+
+#define PAACT_SIZE (sizeof(struct paace) * PAACE_NUMBER_ENTRIES)
+#define SPAACT_SIZE (sizeof(struct paace) * SPAACE_NUMBER_ENTRIES)
+#define OMT_SIZE (sizeof(struct ome) * OME_NUMBER_ENTRIES)
+
+#define PAMU_PAGE_SHIFT 12
+#define PAMU_PAGE_SIZE 4096ULL
+
+#define IOE_READ 0x00
+#define IOE_READ_IDX 0x00
+#define IOE_WRITE 0x81
+#define IOE_WRITE_IDX 0x01
+#define IOE_EREAD0 0x82 /* Enhanced read type 0 */
+#define IOE_EREAD0_IDX 0x02 /* Enhanced read type 0 */
+#define IOE_EWRITE0 0x83 /* Enhanced write type 0 */
+#define IOE_EWRITE0_IDX 0x03 /* Enhanced write type 0 */
+#define IOE_DIRECT0 0x84 /* Directive type 0 */
+#define IOE_DIRECT0_IDX 0x04 /* Directive type 0 */
+#define IOE_EREAD1 0x85 /* Enhanced read type 1 */
+#define IOE_EREAD1_IDX 0x05 /* Enhanced read type 1 */
+#define IOE_EWRITE1 0x86 /* Enhanced write type 1 */
+#define IOE_EWRITE1_IDX 0x06 /* Enhanced write type 1 */
+#define IOE_DIRECT1 0x87 /* Directive type 1 */
+#define IOE_DIRECT1_IDX 0x07 /* Directive type 1 */
+#define IOE_RAC 0x8c /* Read with Atomic clear */
+#define IOE_RAC_IDX 0x0c /* Read with Atomic clear */
+#define IOE_RAS 0x8d /* Read with Atomic set */
+#define IOE_RAS_IDX 0x0d /* Read with Atomic set */
+#define IOE_RAD 0x8e /* Read with Atomic decrement */
+#define IOE_RAD_IDX 0x0e /* Read with Atomic decrement */
+#define IOE_RAI 0x8f /* Read with Atomic increment */
+#define IOE_RAI_IDX 0x0f /* Read with Atomic increment */
+
+#define EOE_READ 0x00
+#define EOE_WRITE 0x01
+#define EOE_RAC 0x0c /* Read with Atomic clear */
+#define EOE_RAS 0x0d /* Read with Atomic set */
+#define EOE_RAD 0x0e /* Read with Atomic decrement */
+#define EOE_RAI 0x0f /* Read with Atomic increment */
+#define EOE_LDEC 0x10 /* Load external cache */
+#define EOE_LDECL 0x11 /* Load external cache with stash lock */
+#define EOE_LDECPE 0x12 /* Load external cache with preferred exclusive */
+#define EOE_LDECPEL 0x13 /* Load external cache with preferred exclusive and lock */
+#define EOE_LDECFE 0x14 /* Load external cache with forced exclusive */
+#define EOE_LDECFEL 0x15 /* Load external cache with forced exclusive and lock */
+#define EOE_RSA 0x16 /* Read with stash allocate */
+#define EOE_RSAU 0x17 /* Read with stash allocate and unlock */
+#define EOE_READI 0x18 /* Read with invalidate */
+#define EOE_RWNITC 0x19 /* Read with no intention to cache */
+#define EOE_WCI 0x1a /* Write cache inhibited */
+#define EOE_WWSA 0x1b /* Write with stash allocate */
+#define EOE_WWSAL 0x1c /* Write with stash allocate and lock */
+#define EOE_WWSAO 0x1d /* Write with stash allocate only */
+#define EOE_WWSAOL 0x1e /* Write with stash allocate only and lock */
+#define EOE_VALID 0x80
+
+/* Function prototypes */
+int pamu_domain_init(void);
+int pamu_enable_liodn(int liodn);
+int pamu_disable_liodn(int liodn);
+void pamu_free_subwins(int liodn);
+int pamu_config_ppaace(int liodn, phys_addr_t win_addr, phys_addr_t win_size,
+ u32 omi, unsigned long rpn, u32 snoopid, uint32_t stashid,
+ u32 subwin_cnt, int prot);
+int pamu_config_spaace(int liodn, u32 subwin_cnt, u32 subwin_addr,
+ phys_addr_t subwin_size, u32 omi, unsigned long rpn,
+ uint32_t snoopid, u32 stashid, int enable, int prot);
+
+u32 get_stash_id(u32 stash_dest_hint, u32 vcpu);
+void get_ome_index(u32 *omi_index, struct device *dev);
+int pamu_update_paace_stash(int liodn, u32 subwin, u32 value);
+int pamu_disable_spaace(int liodn, u32 subwin);
+u32 pamu_get_max_subwin_cnt(void);
+
+#endif /* __FSL_PAMU_H */
diff --git a/drivers/iommu/fsl_pamu_domain.c b/drivers/iommu/fsl_pamu_domain.c
new file mode 100644
index 0000000..16b728f
--- /dev/null
+++ b/drivers/iommu/fsl_pamu_domain.c
@@ -0,0 +1,1137 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ *
+ * Copyright (C) 2013 Freescale Semiconductor, Inc.
+ * Author: Varun Sethi <[email protected]>
+ *
+ */
+
+#define pr_fmt(fmt) "fsl-pamu-domain: %s: " fmt, __func__
+
+#include <linux/init.h>
+#include <linux/iommu.h>
+#include <linux/notifier.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/mm.h>
+#include <linux/interrupt.h>
+#include <linux/device.h>
+#include <linux/of_platform.h>
+#include <linux/bootmem.h>
+#include <linux/err.h>
+#include <asm/io.h>
+#include <asm/bitops.h>
+
+#include <asm/pci-bridge.h>
+#include <sysdev/fsl_pci.h>
+
+#include "fsl_pamu_domain.h"
+
+/*
+ * Global spinlock that needs to be held while
+ * configuring PAMU.
+ */
+static DEFINE_SPINLOCK(iommu_lock);
+
+static struct kmem_cache *fsl_pamu_domain_cache;
+static struct kmem_cache *iommu_devinfo_cache;
+static DEFINE_SPINLOCK(device_domain_lock);
+
+static int __init iommu_init_mempool(void)
+{
+
+ fsl_pamu_domain_cache = kmem_cache_create("fsl_pamu_domain",
+ sizeof(struct fsl_dma_domain),
+ 0,
+ SLAB_HWCACHE_ALIGN,
+
+ NULL);
+ if (!fsl_pamu_domain_cache) {
+ pr_err("Couldn't create fsl iommu_domain cache\n");
+ return -ENOMEM;
+ }
+
+ iommu_devinfo_cache = kmem_cache_create("iommu_devinfo",
+ sizeof(struct device_domain_info),
+ 0,
+ SLAB_HWCACHE_ALIGN,
+ NULL);
+ if (!iommu_devinfo_cache) {
+ pr_err("Couldn't create devinfo cache\n");
+ kmem_cache_destroy(fsl_pamu_domain_cache);
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static phys_addr_t get_phys_addr(struct fsl_dma_domain *dma_domain, dma_addr_t iova)
+{
+ u32 win_cnt = dma_domain->win_cnt;
+ struct dma_window *win_ptr =
+ &dma_domain->win_arr[0];
+ struct iommu_domain_geometry *geom;
+
+ geom = &dma_domain->iommu_domain->geometry;
+
+ if (!win_cnt || !dma_domain->geom_size) {
+ pr_err("Number of windows/geometry not configured for the domain\n");
+ return 0;
+ }
+
+ if (win_cnt > 1) {
+ u64 subwin_size;
+ dma_addr_t subwin_iova;
+ u32 wnd;
+
+ subwin_size = dma_domain->geom_size >> ilog2(win_cnt);
+ subwin_iova = iova & ~(subwin_size - 1);
+ wnd = (subwin_iova - geom->aperture_start) >> ilog2(subwin_size);
+ win_ptr = &dma_domain->win_arr[wnd];
+ }
+
+ if (win_ptr->valid)
+ return (win_ptr->paddr + (iova & (win_ptr->size - 1)));
+
+ return 0;
+}
+
+static int map_subwins(int liodn, struct fsl_dma_domain *dma_domain)
+{
+ struct dma_window *sub_win_ptr =
+ &dma_domain->win_arr[0];
+ int i, ret;
+ unsigned long rpn;
+
+ for (i = 0; i < dma_domain->win_cnt; i++) {
+ if (sub_win_ptr[i].valid) {
+ rpn = sub_win_ptr[i].paddr >>
+ PAMU_PAGE_SHIFT;
+ spin_lock(&iommu_lock);
+ ret = pamu_config_spaace(liodn, dma_domain->win_cnt, i,
+ sub_win_ptr[i].size,
+ ~(u32)0,
+ rpn,
+ dma_domain->snoop_id,
+ dma_domain->stash_id,
+ (i > 0) ? 1 : 0,
+ sub_win_ptr[i].prot);
+ spin_unlock(&iommu_lock);
+ if (ret) {
+ pr_err("PAMU SPAACE configuration failed for liodn %d\n",
+ liodn);
+ return ret;
+ }
+ }
+ }
+
+ return ret;
+}
+
+static int map_win(int liodn, struct fsl_dma_domain *dma_domain)
+{
+ int ret;
+ struct dma_window *wnd = &dma_domain->win_arr[0];
+ phys_addr_t wnd_addr = dma_domain->iommu_domain->geometry.aperture_start;
+
+ spin_lock(&iommu_lock);
+ ret = pamu_config_ppaace(liodn, wnd_addr,
+ wnd->size,
+ ~(u32)0,
+ wnd->paddr >> PAMU_PAGE_SHIFT,
+ dma_domain->snoop_id, dma_domain->stash_id,
+ 0, wnd->prot);
+ spin_unlock(&iommu_lock);
+ if (ret)
+ pr_err("PAMU PAACE configuration failed for liodn %d\n",
+ liodn);
+
+ return ret;
+}
+
+/* Map the DMA window corresponding to the LIODN */
+static int map_liodn(int liodn, struct fsl_dma_domain *dma_domain)
+{
+ if (dma_domain->win_cnt > 1)
+ return map_subwins(liodn, dma_domain);
+ else
+ return map_win(liodn, dma_domain);
+
+}
+
+/* Update window/subwindow mapping for the LIODN */
+static int update_liodn(int liodn, struct fsl_dma_domain *dma_domain, u32 wnd_nr)
+{
+ int ret;
+ struct dma_window *wnd = &dma_domain->win_arr[wnd_nr];
+
+ spin_lock(&iommu_lock);
+ if (dma_domain->win_cnt > 1) {
+ ret = pamu_config_spaace(liodn, dma_domain->win_cnt, wnd_nr,
+ wnd->size,
+ ~(u32)0,
+ wnd->paddr >> PAMU_PAGE_SHIFT,
+ dma_domain->snoop_id,
+ dma_domain->stash_id,
+ (wnd_nr > 0) ? 1 : 0,
+ wnd->prot);
+ if (ret)
+ pr_err("Subwindow reconfiguration failed for liodn %d\n", liodn);
+ } else {
+ phys_addr_t wnd_addr;
+
+ wnd_addr = dma_domain->iommu_domain->geometry.aperture_start;
+
+ ret = pamu_config_ppaace(liodn, wnd_addr,
+ wnd->size,
+ ~(u32)0,
+ wnd->paddr >> PAMU_PAGE_SHIFT,
+ dma_domain->snoop_id, dma_domain->stash_id,
+ 0, wnd->prot);
+ if (ret)
+ pr_err("Window reconfiguration failed for liodn %d\n", liodn);
+ }
+
+ spin_unlock(&iommu_lock);
+
+ return ret;
+}
+
+static int update_liodn_stash(int liodn, struct fsl_dma_domain *dma_domain,
+ u32 val)
+{
+ int ret = 0, i;
+
+ spin_lock(&iommu_lock);
+ if (!dma_domain->win_cnt) {
+ ret = pamu_update_paace_stash(liodn, 0, val);
+ if (ret) {
+ pr_err("Failed to update PAACE field for liodn %d\n ", liodn);
+ spin_unlock(&iommu_lock);
+ return ret;
+ }
+ } else {
+ for (i = 0; i < dma_domain->win_cnt; i++) {
+ ret = pamu_update_paace_stash(liodn, i, val);
+ if (ret) {
+ pr_err("Failed to update SPAACE %d field for liodn %d\n ", i, liodn);
+ spin_unlock(&iommu_lock);
+ return ret;
+ }
+ }
+ }
+ spin_unlock(&iommu_lock);
+
+ return ret;
+}
+
+/* Set the geometry parameters for a LIODN */
+static int pamu_set_liodn(int liodn, struct device *dev,
+ struct fsl_dma_domain *dma_domain,
+ struct iommu_domain_geometry *geom_attr,
+ u32 win_cnt)
+{
+ phys_addr_t window_addr, window_size;
+ phys_addr_t subwin_size;
+ int ret = 0, i;
+ u32 omi_index = ~(u32)0;
+
+ /*
+ * Configure the omi_index at the geometry setup time.
+ * This is a static value which depends on the type of
+ * device and would not change thereafter.
+ */
+ get_ome_index(&omi_index, dev);
+
+ window_addr = geom_attr->aperture_start;
+ window_size = dma_domain->geom_size;
+
+ spin_lock(&iommu_lock);
+ ret = pamu_disable_liodn(liodn);
+ if (!ret)
+ ret = pamu_config_ppaace(liodn, window_addr, window_size, omi_index,
+ 0, dma_domain->snoop_id,
+ dma_domain->stash_id, win_cnt, 0);
+ spin_unlock(&iommu_lock);
+ if (ret) {
+ pr_err("PAMU PAACE configuration failed for liodn %d, win_cnt =%d\n", liodn, win_cnt);
+ return ret;
+ }
+
+ if (win_cnt > 1) {
+ subwin_size = window_size >> ilog2(win_cnt);
+ for (i = 0; i < win_cnt; i++) {
+ spin_lock(&iommu_lock);
+ ret = pamu_disable_spaace(liodn, i);
+ if (!ret)
+ ret = pamu_config_spaace(liodn, win_cnt, i,
+ subwin_size, omi_index,
+ 0, dma_domain->snoop_id,
+ dma_domain->stash_id,
+ 0, 0);
+ spin_unlock(&iommu_lock);
+ if (ret) {
+ pr_err("PAMU SPAACE configuration failed for liodn %d\n", liodn);
+ return ret;
+ }
+ }
+ }
+
+ return ret;
+}
+
+static int check_size(u64 size, dma_addr_t iova)
+{
+ /*
+ * Size must be a power of two and at least equal
+ * to the PAMU page size.
+ */
+ if ((size & (size - 1)) || size < PAMU_PAGE_SIZE) {
+ pr_err("%s: size too small or not a power of two\n", __func__);
+ return -EINVAL;
+ }
+
+ /* iova must be aligned to the window size */
+ if (iova & (size - 1)) {
+ pr_err("%s: address is not aligned with window size\n", __func__);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static struct fsl_dma_domain *iommu_alloc_dma_domain(void)
+{
+ struct fsl_dma_domain *domain;
+
+ domain = kmem_cache_zalloc(fsl_pamu_domain_cache, GFP_KERNEL);
+ if (!domain)
+ return NULL;
+
+ domain->stash_id = ~(u32)0;
+ domain->snoop_id = ~(u32)0;
+ domain->win_cnt = pamu_get_max_subwin_cnt();
+ domain->geom_size = 0;
+
+ INIT_LIST_HEAD(&domain->devices);
+
+ spin_lock_init(&domain->domain_lock);
+
+ return domain;
+}
+
+static inline struct device_domain_info *find_domain(struct device *dev)
+{
+ return dev->archdata.iommu_domain;
+}
+
+static void remove_device_ref(struct device_domain_info *info, u32 win_cnt)
+{
+ list_del(&info->link);
+ spin_lock(&iommu_lock);
+ if (win_cnt > 1)
+ pamu_free_subwins(info->liodn);
+ pamu_disable_liodn(info->liodn);
+ spin_unlock(&iommu_lock);
+ spin_lock(&device_domain_lock);
+ info->dev->archdata.iommu_domain = NULL;
+ kmem_cache_free(iommu_devinfo_cache, info);
+ spin_unlock(&device_domain_lock);
+}
+
+static void detach_device(struct device *dev, struct fsl_dma_domain *dma_domain)
+{
+ struct device_domain_info *info;
+ struct list_head *entry, *tmp;
+ unsigned long flags;
+
+ spin_lock_irqsave(&dma_domain->domain_lock, flags);
+ /* Remove the device from the domain device list */
+ if (!list_empty(&dma_domain->devices)) {
+ list_for_each_safe(entry, tmp, &dma_domain->devices) {
+ info = list_entry(entry, struct device_domain_info, link);
+ if (!dev || (info->dev == dev))
+ remove_device_ref(info, dma_domain->win_cnt);
+ }
+ }
+ spin_unlock_irqrestore(&dma_domain->domain_lock, flags);
+}
+
+static void attach_device(struct fsl_dma_domain *dma_domain, int liodn, struct device *dev)
+{
+ struct device_domain_info *info, *old_domain_info;
+
+ spin_lock(&device_domain_lock);
+ /*
+ * Check here whether the device is already attached to a domain.
+ * If the device is already attached to a domain, detach it.
+ */
+ old_domain_info = find_domain(dev);
+ if (old_domain_info && old_domain_info->domain != dma_domain) {
+ spin_unlock(&device_domain_lock);
+ detach_device(dev, old_domain_info->domain);
+ spin_lock(&device_domain_lock);
+ }
+
+ info = kmem_cache_zalloc(iommu_devinfo_cache, GFP_KERNEL);
+
+ info->dev = dev;
+ info->liodn = liodn;
+ info->domain = dma_domain;
+
+ list_add(&info->link, &dma_domain->devices);
+ /*
+ * In case of devices with multiple LIODNs just store
+ * the info for the first LIODN as all
+ * LIODNs share the same domain
+ */
+ if (!old_domain_info)
+ dev->archdata.iommu_domain = info;
+ spin_unlock(&device_domain_lock);
+
+}
+
+static phys_addr_t fsl_pamu_iova_to_phys(struct iommu_domain *domain,
+ dma_addr_t iova)
+{
+ struct fsl_dma_domain *dma_domain = domain->priv;
+
+ if ((iova < domain->geometry.aperture_start) ||
+ iova > (domain->geometry.aperture_end))
+ return 0;
+
+ return get_phys_addr(dma_domain, iova);
+}
+
+static int fsl_pamu_domain_has_cap(struct iommu_domain *domain,
+ unsigned long cap)
+{
+ return cap == IOMMU_CAP_CACHE_COHERENCY;
+}
+
+static void fsl_pamu_domain_destroy(struct iommu_domain *domain)
+{
+ struct fsl_dma_domain *dma_domain = domain->priv;
+
+ domain->priv = NULL;
+
+ /* remove all the devices from the device list */
+ detach_device(NULL, dma_domain);
+
+ dma_domain->enabled = 0;
+ dma_domain->mapped = 0;
+
+ kmem_cache_free(fsl_pamu_domain_cache, dma_domain);
+}
+
+static int fsl_pamu_domain_init(struct iommu_domain *domain)
+{
+ struct fsl_dma_domain *dma_domain;
+
+ dma_domain = iommu_alloc_dma_domain();
+ if (!dma_domain) {
+ pr_err("dma_domain allocation failed\n");
+ return -ENOMEM;
+ }
+ domain->priv = dma_domain;
+ dma_domain->iommu_domain = domain;
+ /* default geometry 64 GB, i.e. maximum system address */
+ domain->geometry.aperture_start = 0;
+ domain->geometry.aperture_end = 1ULL << 36;
+ domain->geometry.force_aperture = true;
+
+ return 0;
+}
+
+/* Configure geometry settings for all LIODNs associated with domain */
+static int pamu_set_domain_geometry(struct fsl_dma_domain *dma_domain,
+ struct iommu_domain_geometry *geom_attr,
+ u32 win_cnt)
+{
+ struct device_domain_info *info;
+ int ret = 0;
+
+ if (!list_empty(&dma_domain->devices)) {
+ list_for_each_entry(info, &dma_domain->devices, link) {
+ ret = pamu_set_liodn(info->liodn, info->dev, dma_domain,
+ geom_attr, win_cnt);
+ if (ret)
+ break;
+ }
+ }
+
+ return ret;
+}
+
+/* Update stash destination for all LIODNs associated with the domain */
+static int update_domain_stash(struct fsl_dma_domain *dma_domain, u32 val)
+{
+ struct device_domain_info *info;
+ int ret = 0;
+
+ if (!list_empty(&dma_domain->devices)) {
+ list_for_each_entry(info, &dma_domain->devices, link) {
+ ret = update_liodn_stash(info->liodn, dma_domain, val);
+ if (ret)
+ break;
+ }
+ }
+
+ return ret;
+}
+
+/* Update domain mappings for all LIODNs associated with the domain */
+static int update_domain_mapping(struct fsl_dma_domain *dma_domain, u32 wnd_nr)
+{
+ struct device_domain_info *info;
+ int ret = 0;
+
+ if (!list_empty(&dma_domain->devices)) {
+ list_for_each_entry(info, &dma_domain->devices, link) {
+ ret = update_liodn(info->liodn, dma_domain, wnd_nr);
+ if (ret)
+ break;
+ }
+ }
+ return ret;
+}
+
+static int disable_domain_win(struct fsl_dma_domain *dma_domain, u32 wnd_nr)
+{
+ struct device_domain_info *info;
+ int ret = 0;
+
+ if (!list_empty(&dma_domain->devices)) {
+ list_for_each_entry(info, &dma_domain->devices, link) {
+ if (dma_domain->win_cnt == 1 && dma_domain->enabled) {
+ ret = pamu_disable_liodn(info->liodn);
+ if (!ret)
+ dma_domain->enabled = 0;
+ } else {
+ ret = pamu_disable_spaace(info->liodn, wnd_nr);
+ }
+ }
+ }
+
+ return ret;
+}
+
+static void fsl_pamu_window_disable(struct iommu_domain *domain, u32 wnd_nr)
+{
+ struct fsl_dma_domain *dma_domain = domain->priv;
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&dma_domain->domain_lock, flags);
+ if (!dma_domain->win_arr) {
+ pr_err("Number of windows not configured\n");
+ spin_unlock_irqrestore(&dma_domain->domain_lock, flags);
+ return;
+ }
+
+ if (wnd_nr >= dma_domain->win_cnt) {
+ pr_err("Invalid window index\n");
+ spin_unlock_irqrestore(&dma_domain->domain_lock, flags);
+ return;
+ }
+
+ if (dma_domain->win_arr[wnd_nr].valid) {
+ ret = disable_domain_win(dma_domain, wnd_nr);
+ if (!ret) {
+ dma_domain->win_arr[wnd_nr].valid = 0;
+ dma_domain->mapped--;
+ }
+ }
+
+ spin_unlock_irqrestore(&dma_domain->domain_lock, flags);
+
+}
+
+static int fsl_pamu_window_enable(struct iommu_domain *domain, u32 wnd_nr,
+ phys_addr_t paddr, u64 size, int prot)
+{
+ struct fsl_dma_domain *dma_domain = domain->priv;
+ struct dma_window *wnd;
+ int pamu_prot = 0;
+ int ret;
+ unsigned long flags;
+ u64 win_size;
+
+ if (prot & IOMMU_READ)
+ pamu_prot |= PAACE_AP_PERMS_QUERY;
+ if (prot & IOMMU_WRITE)
+ pamu_prot |= PAACE_AP_PERMS_UPDATE;
+
+ spin_lock_irqsave(&dma_domain->domain_lock, flags);
+ if (!dma_domain->win_arr) {
+ pr_err("Number of windows not configured\n");
+ spin_unlock_irqrestore(&dma_domain->domain_lock, flags);
+ return -ENODEV;
+ }
+
+ if (wnd_nr >= dma_domain->win_cnt) {
+ pr_err("Invalid window index\n");
+ spin_unlock_irqrestore(&dma_domain->domain_lock, flags);
+ return -EINVAL;
+ }
+
+ win_size = dma_domain->geom_size >> ilog2(dma_domain->win_cnt);
+ if (size > win_size) {
+ pr_err("Invalid window size \n");
+ spin_unlock_irqrestore(&dma_domain->domain_lock, flags);
+ return -EINVAL;
+ }
+
+ if (dma_domain->win_cnt == 1) {
+ if (dma_domain->enabled) {
+ pr_err("Disable the window before updating the mapping\n");
+ spin_unlock_irqrestore(&dma_domain->domain_lock, flags);
+ return -EBUSY;
+ }
+
+ ret = check_size(size, domain->geometry.aperture_start);
+ if (ret) {
+ pr_err("Aperture start not aligned to the size\n");
+ spin_unlock_irqrestore(&dma_domain->domain_lock, flags);
+ return -EINVAL;
+ }
+ }
+
+ wnd = &dma_domain->win_arr[wnd_nr];
+ if (!wnd->valid) {
+ wnd->paddr = paddr;
+ wnd->size = size;
+ wnd->prot = pamu_prot;
+
+ ret = update_domain_mapping(dma_domain, wnd_nr);
+ if (!ret) {
+ wnd->valid = 1;
+ dma_domain->mapped++;
+ }
+ } else {
+ pr_err("Disable the window before updating the mapping\n");
+ ret = -EBUSY;
+ }
+
+ spin_unlock_irqrestore(&dma_domain->domain_lock, flags);
+
+ return ret;
+}
+
+/*
+ * Attach the LIODN to the DMA domain and configure the geometry
+ * and window mappings.
+ */
+static int handle_attach_device(struct fsl_dma_domain *dma_domain,
+ struct device *dev, const u32 *liodn,
+ int num)
+{
+ unsigned long flags;
+ struct iommu_domain *domain = dma_domain->iommu_domain;
+ int ret = 0;
+ int i;
+
+ spin_lock_irqsave(&dma_domain->domain_lock, flags);
+ for (i = 0; i < num; i++) {
+
+ /* Ensure that LIODN value is valid */
+ if (liodn[i] >= PAACE_NUMBER_ENTRIES) {
+ pr_err("Invalid liodn %d, attach device failed for %s\n",
+ liodn[i], dev->of_node->full_name);
+ ret = -EINVAL;
+ break;
+ }
+
+ attach_device(dma_domain, liodn[i], dev);
+ /*
+ * Check if geometry has already been configured
+ * for the domain. If yes, set the geometry for
+ * the LIODN.
+ */
+ if (dma_domain->win_cnt) {
+ u32 win_cnt = dma_domain->win_cnt > 1 ? dma_domain->win_cnt : 0;
+ ret = pamu_set_liodn(liodn[i], dev, dma_domain,
+ &domain->geometry,
+ win_cnt);
+ if (ret)
+ break;
+ if (dma_domain->mapped) {
+ /*
+ * Create window/subwindow mapping for
+ * the LIODN.
+ */
+ ret = map_liodn(liodn[i], dma_domain);
+ if (ret)
+ break;
+ }
+ }
+ }
+ spin_unlock_irqrestore(&dma_domain->domain_lock, flags);
+
+ return ret;
+}
+
+static int fsl_pamu_attach_device(struct iommu_domain *domain,
+ struct device *dev)
+{
+ struct fsl_dma_domain *dma_domain = domain->priv;
+ const u32 *liodn;
+ u32 liodn_cnt;
+ int len, ret = 0;
+ struct pci_dev *pdev = NULL;
+ struct pci_controller *pci_ctl;
+
+ /*
+ * Use LIODN of the PCI controller while attaching a
+ * PCI device.
+ */
+ if (dev->bus == &pci_bus_type) {
+ pdev = to_pci_dev(dev);
+ pci_ctl = pci_bus_to_host(pdev->bus);
+ /*
+ * make dev point to pci controller device
+ * so we can get the LIODN programmed by
+ * u-boot.
+ */
+ dev = pci_ctl->parent;
+ }
+
+ liodn = of_get_property(dev->of_node, "fsl,liodn", &len);
+ if (liodn) {
+ liodn_cnt = len / sizeof(u32);
+ ret = handle_attach_device(dma_domain, dev,
+ liodn, liodn_cnt);
+ } else {
+ pr_err("missing fsl,liodn property at %s\n",
+ dev->of_node->full_name);
+ ret = -EINVAL;
+ }
+
+ return ret;
+}
+
+static void fsl_pamu_detach_device(struct iommu_domain *domain,
+ struct device *dev)
+{
+ struct fsl_dma_domain *dma_domain = domain->priv;
+ const u32 *prop;
+ int len;
+ struct pci_dev *pdev = NULL;
+ struct pci_controller *pci_ctl;
+
+ /*
+ * Use LIODN of the PCI controller while detaching a
+ * PCI device.
+ */
+ if (dev->bus == &pci_bus_type) {
+ pdev = to_pci_dev(dev);
+ pci_ctl = pci_bus_to_host(pdev->bus);
+ /*
+ * make dev point to pci controller device
+ * so we can get the LIODN programmed by
+ * u-boot.
+ */
+ dev = pci_ctl->parent;
+ }
+
+ prop = of_get_property(dev->of_node, "fsl,liodn", &len);
+ if (prop)
+ detach_device(dev, dma_domain);
+ else
+ pr_err("missing fsl,liodn property at %s\n",
+ dev->of_node->full_name);
+}
+
+static int configure_domain_geometry(struct iommu_domain *domain, void *data)
+{
+ struct iommu_domain_geometry *geom_attr = data;
+ struct fsl_dma_domain *dma_domain = domain->priv;
+ dma_addr_t geom_size;
+ unsigned long flags;
+
+ geom_size = geom_attr->aperture_end - geom_attr->aperture_start;
+ /*
+ * Sanity check the geometry size. Also, we do not support
+ * DMA outside of the geometry.
+ */
+ if (check_size(geom_size, geom_attr->aperture_start) ||
+ !geom_attr->force_aperture) {
+ pr_err("Invalid PAMU geometry attributes\n");
+ return -EINVAL;
+ }
+
+ spin_lock_irqsave(&dma_domain->domain_lock, flags);
+ if (dma_domain->enabled) {
+ pr_err("Can't set geometry attributes as domain is active\n");
+ spin_unlock_irqrestore(&dma_domain->domain_lock, flags);
+ return -EBUSY;
+ }
+
+ /* Copy the domain geometry information */
+ memcpy(&domain->geometry, geom_attr,
+ sizeof(struct iommu_domain_geometry));
+ dma_domain->geom_size = geom_size;
+
+ spin_unlock_irqrestore(&dma_domain->domain_lock, flags);
+
+ return 0;
+}
+
+/* Set the domain stash attribute */
+static int configure_domain_stash(struct fsl_dma_domain *dma_domain, void *data)
+{
+ struct iommu_stash_attribute *stash_attr = data;
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&dma_domain->domain_lock, flags);
+
+ memcpy(&dma_domain->dma_stash, stash_attr,
+ sizeof(struct iommu_stash_attribute));
+
+ dma_domain->stash_id = get_stash_id(stash_attr->cache,
+ stash_attr->cpu);
+ if (dma_domain->stash_id == ~(u32)0) {
+ pr_err("Invalid stash attributes\n");
+ spin_unlock_irqrestore(&dma_domain->domain_lock, flags);
+ return -EINVAL;
+ }
+
+ ret = update_domain_stash(dma_domain, dma_domain->stash_id);
+
+ spin_unlock_irqrestore(&dma_domain->domain_lock, flags);
+
+ return ret;
+}
+
+/* Configure domain dma state, i.e. enable/disable DMA */
+static int configure_domain_dma_state(struct fsl_dma_domain *dma_domain, bool enable)
+{
+ struct device_domain_info *info;
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&dma_domain->domain_lock, flags);
+
+ if (enable && !dma_domain->mapped) {
+ pr_err("Can't enable DMA domain without valid mapping\n");
+ spin_unlock_irqrestore(&dma_domain->domain_lock, flags);
+ return -ENODEV;
+ }
+
+ dma_domain->enabled = enable;
+ if (!list_empty(&dma_domain->devices)) {
+ list_for_each_entry(info, &dma_domain->devices,
+ link) {
+ ret = (enable) ? pamu_enable_liodn(info->liodn) :
+ pamu_disable_liodn(info->liodn);
+ if (ret)
+ pr_err("Unable to set dma state for liodn %d",
+ info->liodn);
+ }
+ }
+ spin_unlock_irqrestore(&dma_domain->domain_lock, flags);
+
+ return 0;
+}
+
+static int fsl_pamu_set_domain_attr(struct iommu_domain *domain,
+ enum iommu_attr attr_type, void *data)
+{
+ struct fsl_dma_domain *dma_domain = domain->priv;
+ int ret = 0;
+
+
+ switch (attr_type) {
+ case DOMAIN_ATTR_GEOMETRY:
+ ret = configure_domain_geometry(domain, data);
+ break;
+ case DOMAIN_ATTR_PAMU_STASH:
+ ret = configure_domain_stash(dma_domain, data);
+ break;
+ case DOMAIN_ATTR_PAMU_ENABLE:
+ ret = configure_domain_dma_state(dma_domain, *(int *)data);
+ break;
+ default:
+ pr_err("Unsupported attribute type\n");
+ ret = -EINVAL;
+ break;
+ };
+
+ return ret;
+}
+
+static int fsl_pamu_get_domain_attr(struct iommu_domain *domain,
+ enum iommu_attr attr_type, void *data)
+{
+ struct fsl_dma_domain *dma_domain = domain->priv;
+ int ret = 0;
+
+
+ switch (attr_type) {
+ case DOMAIN_ATTR_PAMU_STASH:
+ memcpy((struct iommu_stash_attribute *) data, &dma_domain->dma_stash,
+ sizeof(struct iommu_stash_attribute));
+ break;
+ case DOMAIN_ATTR_PAMU_ENABLE:
+ *(int *)data = dma_domain->enabled;
+ break;
+ case DOMAIN_ATTR_FSL_PAMUV1:
+ *(int *)data = DOMAIN_ATTR_FSL_PAMUV1;
+ break;
+ default:
+ pr_err("Unsupported attribute type\n");
+ ret = -EINVAL;
+ break;
+ };
+
+ return ret;
+}
+
+static void swap_pci_ref(struct pci_dev **from, struct pci_dev *to)
+{
+ pci_dev_put(*from);
+ *from = to;
+}
+
+static struct iommu_group *get_device_iommu_group(struct device *dev)
+{
+ struct iommu_group *group;
+
+ group = iommu_group_get(dev);
+ if (!group)
+ group = iommu_group_alloc();
+
+ return group;
+}
+
+static bool check_pci_ctl_endpt_part(struct pci_controller *pci_ctl)
+{
+ u32 version;
+
+ /* Check the PCI controller version number by reading the BRR1 register */
+ version = in_be32(pci_ctl->cfg_addr + (PCI_FSL_BRR1 >> 2));
+ version &= PCI_FSL_BRR1_VER;
+ /* If PCI controller version is >= 0x204 we can partition endpoints */
+ if (version >= 0x204)
+ return 1;
+
+ return 0;
+}
+
+static struct iommu_group *get_peer_pci_device_group(struct pci_dev *pdev)
+{
+ struct iommu_group *group = NULL;
+
+ /* check if this is the only device on the bus */
+ if (pdev->bus_list.next == pdev->bus_list.prev) {
+ struct pci_bus *bus = pdev->bus->parent;
+ /* Traverse the parent bus list to get
+ * pdev & dev for the sibling device.
+ */
+ while (bus) {
+ if (!list_empty(&bus->devices)) {
+ pdev = container_of(bus->devices.next,
+ struct pci_dev, bus_list);
+ group = iommu_group_get(&pdev->dev);
+ break;
+ } else
+ bus = bus->parent;
+ }
+ } else {
+ /*
+ * Get the pdev & dev for the sibling device
+ */
+ pdev = container_of(pdev->bus_list.prev,
+ struct pci_dev, bus_list);
+ group = iommu_group_get(&pdev->dev);
+ }
+
+ return group;
+}
+
+static struct iommu_group *get_pci_device_group(struct pci_dev *pdev)
+{
+ struct iommu_group *group = NULL;
+ struct pci_dev *bridge, *dma_pdev = NULL;
+ struct pci_controller *pci_ctl;
+ bool pci_endpt_partioning;
+
+ pci_ctl = pci_bus_to_host(pdev->bus);
+ pci_endpt_partioning = check_pci_ctl_endpt_part(pci_ctl);
+ /* We can partition PCIe devices so assign device group to the device */
+ if (pci_endpt_partioning) {
+ bridge = pci_find_upstream_pcie_bridge(pdev);
+ if (bridge) {
+ if (pci_is_pcie(bridge))
+ dma_pdev = pci_get_domain_bus_and_slot(
+ pci_domain_nr(pdev->bus),
+ bridge->subordinate->number, 0);
+ if (!dma_pdev)
+ dma_pdev = pci_dev_get(bridge);
+ } else
+ dma_pdev = pci_dev_get(pdev);
+ /* Account for quirked devices */
+ swap_pci_ref(&dma_pdev, pci_get_dma_source(dma_pdev));
+ group = get_device_iommu_group(&pdev->dev);
+ pci_dev_put(pdev);
+ /*
+ * PCIe controller is not a partitionable entity;
+ * free the controller device iommu_group.
+ */
+ if (pci_ctl->parent->iommu_group)
+ iommu_group_remove_device(pci_ctl->parent);
+ } else {
+ /*
+ * All devices connected to the controller will share the
+ * PCI controllers device group. If this is the first
+ * device to be probed for the pci controller, copy the
+ * device group information from the PCI controller device
+ * node and remove the PCI controller iommu group.
+ * For subsequent devices, the iommu group information can
+ * be obtained from sibling devices (i.e. from the bus_devices
+ * linked list).
+ */
+ if (pci_ctl->parent->iommu_group) {
+ group = get_device_iommu_group(pci_ctl->parent);
+ iommu_group_remove_device(pci_ctl->parent);
+ } else
+ group = get_peer_pci_device_group(pdev);
+ }
+
+ return group;
+}
+
+static int fsl_pamu_add_device(struct device *dev)
+{
+ struct iommu_group *group = NULL;
+ struct pci_dev *pdev;
+ int ret;
+
+ /*
+ * For platform devices we allocate a separate group for
+ * each of the devices.
+ */
+ if (dev->bus == &pci_bus_type) {
+ pdev = to_pci_dev(dev);
+ /* Don't create device groups for virtual PCI bridges */
+ if (pdev->subordinate)
+ return 0;
+
+ group = get_pci_device_group(pdev);
+
+ } else
+ group = get_device_iommu_group(dev);
+
+ if (!group || IS_ERR(group))
+ return PTR_ERR(group);
+
+ ret = iommu_group_add_device(group, dev);
+
+ iommu_group_put(group);
+ return ret;
+}
+
+static void fsl_pamu_remove_device(struct device *dev)
+{
+ iommu_group_remove_device(dev);
+}
+
+static int fsl_pamu_set_windows(struct iommu_domain *domain, u32 w_count)
+{
+ struct fsl_dma_domain *dma_domain = domain->priv;
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&dma_domain->domain_lock, flags);
+ /* Ensure domain is inactive i.e. DMA should be disabled for the domain */
+ if (dma_domain->enabled) {
+ pr_err("Can't set geometry attributes as domain is active\n");
+ spin_unlock_irqrestore(&dma_domain->domain_lock, flags);
+ return -EBUSY;
+ }
+
+ /* Ensure that the geometry has been set for the domain */
+ if (!dma_domain->geom_size) {
+ pr_err("Please configure geometry before setting the number of windows\n");
+ spin_unlock_irqrestore(&dma_domain->domain_lock, flags);
+ return -EINVAL;
+ }
+
+ /*
+ * Ensure we have valid window count i.e. it should be less than
+ * maximum permissible limit and should be a power of two.
+ */
+ if (w_count > pamu_get_max_subwin_cnt() || (w_count & (w_count - 1))) {
+ pr_err("Invalid window count\n");
+ spin_unlock_irqrestore(&dma_domain->domain_lock, flags);
+ return -EINVAL;
+ }
+
+ ret = pamu_set_domain_geometry(dma_domain, &domain->geometry,
+ ((w_count > 1) ? w_count : 0));
+ if (!ret) {
+ if (dma_domain->win_arr)
+ kfree(dma_domain->win_arr);
+ dma_domain->win_arr = kzalloc(sizeof(struct dma_window) *
+ w_count, GFP_KERNEL);
+ if (!dma_domain->win_arr) {
+ spin_unlock_irqrestore(&dma_domain->domain_lock, flags);
+ return -ENOMEM;
+ }
+ dma_domain->win_cnt = w_count;
+ }
+ spin_unlock_irqrestore(&dma_domain->domain_lock, flags);
+
+ return ret;
+}
+
+static u32 fsl_pamu_get_windows(struct iommu_domain *domain)
+{
+ struct fsl_dma_domain *dma_domain = domain->priv;
+
+ return dma_domain->win_cnt;
+}
+
+static struct iommu_ops fsl_pamu_ops = {
+ .domain_init = fsl_pamu_domain_init,
+ .domain_destroy = fsl_pamu_domain_destroy,
+ .attach_dev = fsl_pamu_attach_device,
+ .detach_dev = fsl_pamu_detach_device,
+ .domain_window_enable = fsl_pamu_window_enable,
+ .domain_window_disable = fsl_pamu_window_disable,
+ .domain_get_windows = fsl_pamu_get_windows,
+ .domain_set_windows = fsl_pamu_set_windows,
+ .iova_to_phys = fsl_pamu_iova_to_phys,
+ .domain_has_cap = fsl_pamu_domain_has_cap,
+ .domain_set_attr = fsl_pamu_set_domain_attr,
+ .domain_get_attr = fsl_pamu_get_domain_attr,
+ .add_device = fsl_pamu_add_device,
+ .remove_device = fsl_pamu_remove_device,
+};
+
+int pamu_domain_init()
+{
+ int ret = 0;
+
+ ret = iommu_init_mempool();
+ if (ret)
+ return ret;
+
+ bus_set_iommu(&platform_bus_type, &fsl_pamu_ops);
+ bus_set_iommu(&pci_bus_type, &fsl_pamu_ops);
+
+ return ret;
+}
diff --git a/drivers/iommu/fsl_pamu_domain.h b/drivers/iommu/fsl_pamu_domain.h
new file mode 100644
index 0000000..52dede7
--- /dev/null
+++ b/drivers/iommu/fsl_pamu_domain.h
@@ -0,0 +1,85 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ *
+ * Copyright (C) 2013 Freescale Semiconductor, Inc.
+ *
+ */
+
+#ifndef __FSL_PAMU_DOMAIN_H
+#define __FSL_PAMU_DOMAIN_H
+
+#include "fsl_pamu.h"
+
+struct dma_window {
+ phys_addr_t paddr;
+ u64 size;
+ int valid;
+ int prot;
+};
+
+struct fsl_dma_domain {
+ /*
+ * Indicates the geometry size for the domain.
+ * This would be set when the geometry is
+ * configured for the domain.
+ */
+ dma_addr_t geom_size;
+ /*
+ * Number of windows associated with this domain.
+ * During domain initialization, it is set to the
+ * maximum number of subwindows allowed for a LIODN.
+ * Minimum value for this is 1 indicating a single PAMU
+ * window, without any sub windows. Value can be set/
+ * queried by set_attr/get_attr API for DOMAIN_ATTR_WINDOWS.
+ * Value can only be set once the geometry has been configured.
+ */
+ u32 win_cnt;
+ /*
+ * win_arr contains information of the configured
+ * windows for a domain. This is allocated only
+ * when the number of windows for the domain are
+ * set.
+ */
+ struct dma_window *win_arr;
+ /* list of devices associated with the domain */
+ struct list_head devices;
+ /* dma_domain states:
+ * mapped - A particular mapping has been created
+ * within the configured geometry.
+ * enabled - DMA has been enabled for the given
+ * domain. This translates to setting of the
+ * valid bit for the primary PAACE in the PAMU
+ * PAACT table. Domain geometry should be set and
+ * it must have a valid mapping before DMA can be
+ * enabled for it.
+ *
+ */
+ int mapped;
+ int enabled;
+ /* stash_id obtained from the stash attribute details */
+ u32 stash_id;
+ struct iommu_stash_attribute dma_stash;
+ u32 snoop_id;
+ struct iommu_domain *iommu_domain;
+ spinlock_t domain_lock;
+};
+
+/* domain-device relationship */
+struct device_domain_info {
+ struct list_head link; /* link to domain siblings */
+ struct device *dev;
+ u32 liodn;
+ struct fsl_dma_domain *domain; /* pointer to domain */
+};
+#endif /* __FSL_PAMU_DOMAIN_H */
--
1.7.4.1
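
For reference, a minimal usage sketch of how a caller is expected to drive
this window-based implementation; the ordering the driver enforces is
geometry -> window count -> attach -> window mapping -> enable. This is
illustrative only and not part of the patch: it assumes the
iommu_domain_window_enable() prototype from patch 3/5 and the PAMU
attributes from patch 4/5, the function name and the 1 GB aperture are
made-up examples, and error handling is omitted.

#include <linux/iommu.h>
#include <linux/pci.h>

static struct iommu_domain *pamu_demo_setup(struct device *dev,
					    phys_addr_t phys_base)
{
	struct iommu_domain_geometry geom = {
		.aperture_start = 0,
		.aperture_end   = 1ULL << 30,	/* 1 GB aperture, power of two */
		.force_aperture = true,
	};
	u32 wins = 1;		/* single DMA window, no subwindows */
	int enable = 1;
	struct iommu_domain *dom;

	dom = iommu_domain_alloc(&pci_bus_type);

	/* 1. geometry can only be set while the domain is inactive */
	iommu_domain_set_attr(dom, DOMAIN_ATTR_GEOMETRY, &geom);
	/* 2. number of (sub)windows; must be a power of two */
	iommu_domain_set_attr(dom, DOMAIN_ATTR_WINDOWS, &wins);
	/* 3. attach the device; this programs the geometry into its LIODN */
	iommu_attach_device(dom, dev);
	/* 4. map window 0 to phys_base with read/write permission */
	iommu_domain_window_enable(dom, 0, phys_base, 1ULL << 30,
				   IOMMU_READ | IOMMU_WRITE);
	/* 5. mark the domain enabled so the PAACE valid bit gets set */
	iommu_domain_set_attr(dom, DOMAIN_ATTR_PAMU_ENABLE, &enable);

	return dom;
}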

2013-03-28 20:02:32

by Varun Sethi

[permalink] [raw]
Subject: [PATCH 4/5 v11] iommu/fsl: Add additional iommu attributes required by the PAMU driver.

Added the following domain attributes for the FSL PAMU driver:
1. Added a new iommu stash attribute, which allows setting of the
LIODN-specific stash id parameter through the IOMMU API (a usage
sketch follows the patch below).
2. Added an attribute for enabling/disabling DMA to a particular
memory window.
3. Added a domain attribute to check for PAMUV1-specific constraints.

Signed-off-by: Varun Sethi <[email protected]>
---
- no change in v11.
- no change in v10.
include/linux/iommu.h | 35 +++++++++++++++++++++++++++++++++++
1 files changed, 35 insertions(+), 0 deletions(-)

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 2727810..af8f996 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -40,6 +40,25 @@ struct notifier_block;
typedef int (*iommu_fault_handler_t)(struct iommu_domain *,
struct device *, unsigned long, int, void *);

+/* cache stash targets */
+enum stash_target {
+ IOMMU_ATTR_CACHE_L1 = 1,
+ IOMMU_ATTR_CACHE_L2,
+ IOMMU_ATTR_CACHE_L3,
+};
+
+/* This attribute corresponds to IOMMUs capable of generating
+ * a stash transaction. A stash transaction is typically a
+ * hardware initiated prefetch of data from memory to cache.
+ * This attribute allows configuring stashing-specific parameters
+ * in the IOMMU hardware.
+ */
+
+struct iommu_stash_attribute {
+ u32 cpu; /* cpu number */
+ u32 cache; /* cache to stash to: L1,L2,L3 */
+};
+
struct iommu_domain_geometry {
dma_addr_t aperture_start; /* First address that can be mapped */
dma_addr_t aperture_end; /* Last address that can be mapped */
@@ -57,10 +76,26 @@ struct iommu_domain {
#define IOMMU_CAP_CACHE_COHERENCY 0x1
#define IOMMU_CAP_INTR_REMAP 0x2 /* isolates device intrs */

+/*
+ * The following constraints are specific to PAMUV1:
+ * -aperture must be power of 2, and naturally aligned
+ * -number of windows must be power of 2, and address space size
+ * of each window is determined by aperture size / # of windows
+ * -the actual size of the mapped region of a window must be power
+ * of 2 starting with 4KB and physical address must be naturally
+ * aligned.
+ * DOMAIN_ATTR_FSL_PAMUV1 corresponds to the above-mentioned constraints.
+ * The caller can invoke iommu_domain_get_attr to check if the underlying
+ * iommu implementation supports these constraints.
+ */
+
enum iommu_attr {
DOMAIN_ATTR_GEOMETRY,
DOMAIN_ATTR_PAGING,
DOMAIN_ATTR_WINDOWS,
+ DOMAIN_ATTR_PAMU_STASH,
+ DOMAIN_ATTR_PAMU_ENABLE,
+ DOMAIN_ATTR_FSL_PAMUV1,
DOMAIN_ATTR_MAX,
};

--
1.7.4.1
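
As a usage sketch for the stash attribute added above (illustrative only,
not part of the patch; the function name and the cpu/cache values are
made-up examples, and "dom" is a domain set up as in patch 5/5):

static void pamu_demo_set_stash(struct iommu_domain *dom)
{
	struct iommu_stash_attribute stash = {
		.cpu   = 0,			/* stash to CPU 0 ... */
		.cache = IOMMU_ATTR_CACHE_L1,	/* ... into its L1 cache */
	};

	iommu_domain_set_attr(dom, DOMAIN_ATTR_PAMU_STASH, &stash);
}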

2013-04-02 15:08:49

by Joerg Roedel

[permalink] [raw]
Subject: Re: [PATCH 2/5 v11] powerpc: Add iommu domain pointer to device archdata

On Fri, Mar 29, 2013 at 01:23:59AM +0530, Varun Sethi wrote:
> Add an iommu domain pointer to device (powerpc) archdata. Devices
> are attached to iommu domains and this pointer provides a mechanism
> to correlate between a device and the associated iommu domain. This
> field is set when a device is attached to a domain.
>
> Signed-off-by: Varun Sethi <[email protected]>

This patch needs to be Acked by the PPC maintainers.

2013-04-02 15:10:20

by Joerg Roedel

[permalink] [raw]
Subject: Re: [PATCH 4/5 v11] iommu/fsl: Add additional iommu attributes required by the PAMU driver.

On Fri, Mar 29, 2013 at 01:24:01AM +0530, Varun Sethi wrote:
> +/* cache stash targets */
> +enum stash_target {
> + IOMMU_ATTR_CACHE_L1 = 1,
> + IOMMU_ATTR_CACHE_L2,
> + IOMMU_ATTR_CACHE_L3,
> +};
> +
> +/* This attribute corresponds to IOMMUs capable of generating
> + * a stash transaction. A stash transaction is typically a
> + * hardware initiated prefetch of data from memory to cache.
> + * This attribute allows configuring stashig specific parameters
> + * in the IOMMU hardware.
> + */
> +
> +struct iommu_stash_attribute {
> + u32 cpu; /* cpu number */
> + u32 cache; /* cache to stash to: L1,L2,L3 */
> +};
> +

I would prefer these PAMU specific enum and struct to be in a
pamu-specific iommu-header.


Joerg

2013-04-02 15:29:40

by Yoder Stuart-B08248

[permalink] [raw]
Subject: RE: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu implementation.


> -----Original Message-----
> From: Sethi Varun-B16395
> Sent: Thursday, March 28, 2013 2:54 PM
> To: [email protected]; Yoder Stuart-B08248; Wood Scott-B07421; [email protected]; linuxppc-
> [email protected]; [email protected]; [email protected]; [email protected]
> Cc: Sethi Varun-B16395
> Subject: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu implementation.
>
> Following is a brief description of the PAMU hardware:
> PAMU determines what action to take and whether to authorize the action on
> the basis of the memory address, a Logical IO Device Number (LIODN), and
> PAACT table (logically) indexed by LIODN and address. Hardware devices which
> need to access memory must provide an LIODN in addition to the memory address.
>
> Peripheral Access Authorization and Control Tables (PAACTs) are the primary
> data structures used by PAMU. A PAACT is a table of peripheral access
> authorization and control entries (PAACE).Each PAACE defines the range of
> I/O bus address space that is accessible by the LIOD and the associated access
> capabilities.
>
> There are two types of PAACTs: primary PAACT (PPAACT) and secondary PAACT
> (SPAACT).A given physical I/O device may be able to act as one or more
> independent logical I/O devices (LIODs). Each such logical I/O device is
> assigned an identifier called logical I/O device number (LIODN). A LIODN is
> allocated a contiguous portion of the I/O bus address space called the DSA window
> for performing DSA operations. The DSA window may optionally be divided into
> multiple sub-windows, each of which may be used to map to a region in system
> storage space. The first sub-window is referred to as the primary sub-window
> and the remaining are called secondary sub-windows.
>
> This patch provides the PAMU driver (fsl_pamu.c) and the corresponding IOMMU
> API implementation (fsl_pamu_domain.c). The PAMU hardware driver (fsl_pamu.c)
> has been derived from the work done by Ashish Kalra and Timur Tabi.
>
> Signed-off-by: Timur Tabi <[email protected]>
> Signed-off-by: Varun Sethi <[email protected]>
> ---
> changes in v11:
> - changed iova to dma_addr_t in iova_to_phys API.
> changes in v10:
> - Support for new guts compatible string for T4 & B4 devices.
> - Modified comment about port ID and mentioned the errata number.
> - Fixed the issue where the data pointer was not freed in case of an error.
> - Pass data pointer while freeing irq.
> - While initializing the SPAACE entry, clear the valid bit.
> changes in v9:
> - Merged and created a single function to delete
> a device from the domain list.
> - Refactored the add_device API code.
> - Renamed the paace and spaace init functions.
> - Renamed functions for mapping windows and subwindows.
> - Changed the MAX LIODN value to MAX value u-boot can
> program.
> - Hard coded maximum number of subwindows.
> changes in v8:
> - implemented the new API for window based IOMMUs.
> changes in v7:
> - Set max_subwindows in the geometry attribute.
> - Add checking for maximum supported LIODN value.
> - Use upper_32_bits and lower_32_bits macros while
> initializing PAMU data structures.
> changes in v6:
> - Simplified complex conditional statements.
> - Fixed indentation issues.
> - Added comments for IOMMU API implementation.
> changes in v5:
> - Addressed comments from Timur.
> changes in v4:
> - Addressed comments from Timur and Scott.
> changes in v3:
> - Addressed comments by Kumar Gala
> - dynamic fspi allocation
> - fixed alignment check in map and unmap
> arch/powerpc/sysdev/fsl_pci.h | 5 +
> drivers/iommu/Kconfig | 8 +
> drivers/iommu/Makefile | 1 +
> drivers/iommu/fsl_pamu.c | 1269 +++++++++++++++++++++++++++++++++++++++
> drivers/iommu/fsl_pamu.h | 405 +++++++++++++
> drivers/iommu/fsl_pamu_domain.c | 1137 +++++++++++++++++++++++++++++++++++
> drivers/iommu/fsl_pamu_domain.h | 85 +++
> 7 files changed, 2910 insertions(+), 0 deletions(-)
> create mode 100644 drivers/iommu/fsl_pamu.c
> create mode 100644 drivers/iommu/fsl_pamu.h
> create mode 100644 drivers/iommu/fsl_pamu_domain.c
> create mode 100644 drivers/iommu/fsl_pamu_domain.h

Ack.

Stuart

2013-04-02 16:18:22

by Joerg Roedel

[permalink] [raw]
Subject: Re: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu implementation.

Cc'ing Alex Williamson

Alex, can you please review the iommu-group part of this patch?

My comments so far are below:

On Fri, Mar 29, 2013 at 01:24:02AM +0530, Varun Sethi wrote:
> +config FSL_PAMU
> + bool "Freescale IOMMU support"
> + depends on PPC_E500MC
> + select IOMMU_API
> + select GENERIC_ALLOCATOR
> + help
> + Freescale PAMU support.

A bit lame for a help text. Can you elaborate more on what PAMU is and when
it should be enabled?

> +int pamu_enable_liodn(int liodn)
> +{
> + struct paace *ppaace;
> +
> + ppaace = pamu_get_ppaace(liodn);
> + if (!ppaace) {
> + pr_err("Invalid primary paace entry\n");
> + return -ENOENT;
> + }
> +
> + if (!get_bf(ppaace->addr_bitfields, PPAACE_AF_WSE)) {
> + pr_err("liodn %d not configured\n", liodn);
> + return -EINVAL;
> + }
> +
> + /* Ensure that all other stores to the ppaace complete first */
> + mb();
> +
> + ppaace->addr_bitfields |= PAACE_V_VALID;
> + mb();

Why is it sufficient to set the bit in a variable when enabling liodn
but when disabling it set_bf needs to be called? This looks a bit
asymmetric.

> +/* Derive the window size encoding for a particular PAACE entry */
> +static unsigned int map_addrspace_size_to_wse(phys_addr_t addrspace_size)
> +{
> + /* Bug if not a power of 2 */
> + BUG_ON((addrspace_size & (addrspace_size - 1)));

Please use is_power_of_2 here.

> +
> + /* window size is 2^(WSE+1) bytes */
> + return __ffs(addrspace_size >> PAMU_PAGE_SHIFT) + PAMU_PAGE_SHIFT - 1;

The PAMU_PAGE_SHIFT shifting and adding looks redundant.
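
For a power-of-two size this should collapse to a single ilog2() call
(assuming <linux/log2.h>), roughly:

	/* window size is 2^(WSE+1) bytes */
	return ilog2(addrspace_size) - 1;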

> + if ((win_size & (win_size - 1)) || win_size < PAMU_PAGE_SIZE) {
> + pr_err("window size too small or not a power of two %llx\n", win_size);
> + return -EINVAL;
> + }
> +
> + if (win_addr & (win_size - 1)) {
> + pr_err("window address is not aligned with window size\n");
> + return -EINVAL;
> + }

Again, use is_power_of_2 instead of hand-coding.
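
E.g. something like the following (assuming win_size fits in an
unsigned long, since is_power_of_2() takes one):

	if (!is_power_of_2(win_size) || win_size < PAMU_PAGE_SIZE) {
		pr_err("window size too small or not a power of two %llx\n", win_size);
		return -EINVAL;
	}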

> + if (~stashid != 0)
> + set_bf(paace->impl_attr, PAACE_IA_CID, stashid);
> +
> + smp_wmb();
> +
> + if (enable)
> + paace->addr_bitfields |= PAACE_V_VALID;

Haven't you written a helper function to set this bit?

> +irqreturn_t pamu_av_isr(int irq, void *arg)
> +{
> + struct pamu_isr_data *data = arg;
> + phys_addr_t phys;
> + unsigned int i, j;
> +
> + pr_emerg("fsl-pamu: access violation interrupt\n");
> +
> + for (i = 0; i < data->count; i++) {
> + void __iomem *p = data->pamu_reg_base + i * PAMU_OFFSET;
> + u32 pics = in_be32(p + PAMU_PICS);
> +
> + if (pics & PAMU_ACCESS_VIOLATION_STAT) {
> + pr_emerg("POES1=%08x\n", in_be32(p + PAMU_POES1));
> + pr_emerg("POES2=%08x\n", in_be32(p + PAMU_POES2));
> + pr_emerg("AVS1=%08x\n", in_be32(p + PAMU_AVS1));
> + pr_emerg("AVS2=%08x\n", in_be32(p + PAMU_AVS2));
> + pr_emerg("AVA=%016llx\n", make64(in_be32(p + PAMU_AVAH),
> + in_be32(p + PAMU_AVAL)));
> + pr_emerg("UDAD=%08x\n", in_be32(p + PAMU_UDAD));
> + pr_emerg("POEA=%016llx\n", make64(in_be32(p + PAMU_POEAH),
> + in_be32(p + PAMU_POEAL)));
> +
> + phys = make64(in_be32(p + PAMU_POEAH),
> + in_be32(p + PAMU_POEAL));
> +
> + /* Assume that POEA points to a PAACE */
> + if (phys) {
> + u32 *paace = phys_to_virt(phys);
> +
> + /* Only the first four words are relevant */
> + for (j = 0; j < 4; j++)
> + pr_emerg("PAACE[%u]=%08x\n", j, in_be32(paace + j));
> + }
> + }
> + }
> +
> + panic("\n");

A kernel panic seems like an over-reaction to an access violation.
Besides the device that caused the violation the system should still
work, no?

> +#define make64(high, low) (((u64)(high) << 32) | (low))

You redefined this make64 here.

> +static int map_subwins(int liodn, struct fsl_dma_domain *dma_domain)
> +{
> + struct dma_window *sub_win_ptr =
> + &dma_domain->win_arr[0];
> + int i, ret;
> + unsigned long rpn;
> +
> + for (i = 0; i < dma_domain->win_cnt; i++) {
> + if (sub_win_ptr[i].valid) {
> + rpn = sub_win_ptr[i].paddr >>
> + PAMU_PAGE_SHIFT;
> + spin_lock(&iommu_lock);

IOMMU code might run in interrupt context, so please use
spin_lock_irqsave for the iommu_lock.
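
I.e. with an "unsigned long flags" local in map_subwins(), the critical
section here would become something like:

		spin_lock_irqsave(&iommu_lock, flags);
		ret = pamu_config_spaace(liodn, dma_domain->win_cnt, i,
					 sub_win_ptr[i].size, ~(u32)0, rpn,
					 dma_domain->snoop_id,
					 dma_domain->stash_id,
					 (i > 0) ? 1 : 0, sub_win_ptr[i].prot);
		spin_unlock_irqrestore(&iommu_lock, flags);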

> +static void detach_device(struct device *dev, struct fsl_dma_domain *dma_domain)
> +{
> + struct device_domain_info *info;
> + struct list_head *entry, *tmp;
> + unsigned long flags;
> +
> + spin_lock_irqsave(&dma_domain->domain_lock, flags);
> + /* Remove the device from the domain device list */
> + if (!list_empty(&dma_domain->devices)) {
> + list_for_each_safe(entry, tmp, &dma_domain->devices) {
> + info = list_entry(entry, struct device_domain_info, link);
> + if (!dev || (info->dev == dev))
> + remove_device_ref(info, dma_domain->win_cnt);
> + }
> + }
> + spin_unlock_irqrestore(&dma_domain->domain_lock, flags);

list_empty check is not needed. You can also use
list_for_each_entry_safe.
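
I.e. the body could shrink to roughly:

	struct device_domain_info *info, *tmp;
	unsigned long flags;

	spin_lock_irqsave(&dma_domain->domain_lock, flags);
	list_for_each_entry_safe(info, tmp, &dma_domain->devices, link) {
		if (!dev || info->dev == dev)
			remove_device_ref(info, dma_domain->win_cnt);
	}
	spin_unlock_irqrestore(&dma_domain->domain_lock, flags);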

> +static void attach_device(struct fsl_dma_domain *dma_domain, int liodn, struct device *dev)
> +{
> + struct device_domain_info *info, *old_domain_info;
> +
> + spin_lock(&device_domain_lock);
> + /*
> + * Check here if the device is already attached to domain or not.
> + * If the device is already attached to a domain detach it.
> + */
> + old_domain_info = find_domain(dev);
> + if (old_domain_info && old_domain_info->domain != dma_domain) {
> + spin_unlock(&device_domain_lock);
> + detach_device(dev, old_domain_info->domain);
> + spin_lock(&device_domain_lock);
> + }
> +
> + info = kmem_cache_zalloc(iommu_devinfo_cache, GFP_KERNEL);
> +
> + info->dev = dev;
> + info->liodn = liodn;
> + info->domain = dma_domain;
> +
> + list_add(&info->link, &dma_domain->devices);
> + /*
> + * In case of devices with multiple LIODNs just store
> + * the info for the first LIODN as all
> + * LIODNs share the same domain
> + */
> + if (!old_domain_info)
> + dev->archdata.iommu_domain = info;
> + spin_unlock(&device_domain_lock);

Don't you have to tell the hardware that a device was added to a domain?
I don't see that, what I am missing?

> +static void swap_pci_ref(struct pci_dev **from, struct pci_dev *to)
> +{
> + pci_dev_put(*from);
> + *from = to;
> +}

Hmm, looks like this function is re-implemented in a few IOMMU drivers.
Want to use the chance to consolidate these implementations?


Joerg

2013-04-02 16:23:17

by Joerg Roedel

[permalink] [raw]
Subject: Re: [PATCH 0/5 v11] iommu/fsl: Freescale PAMU driver and IOMMU API implementation.

On Fri, Mar 29, 2013 at 01:23:57AM +0530, Varun Sethi wrote:
> This patchset provides the Freescale PAMU (Peripheral Access Management Unit) driver
> and the corresponding IOMMU API implementation. PAMU is the IOMMU present on Freescale
> QorIQ platforms. PAMU can authorize memory access, remap the memory address, and remap
> the I/O transaction type.
>
> This set consists of the following patches:
> 1. Make iova dma_addr_t in the iommu_iova_to_phys API.
> 2. Addition of new field in the device (powerpc) archdata structure for storing iommu domain information
> pointer.
> 3. Add window permission flags in the iommu_domain_window_enable API.
> 4. Add domain attributes for FSL PAMU driver.
> 5. PAMU driver and IOMMU API implementation.

Okay, I am about to apply patches 1 and 3 to a new ppc/pamu branch in my
tree.

As a general question, how did you test the IOMMU driver and what will
you do in the future to avoid regressions?


Joerg

2013-04-02 17:51:10

by Sethi Varun-B16395

[permalink] [raw]
Subject: RE: [PATCH 0/5 v11] iommu/fsl: Freescale PAMU driver and IOMMU API implementation.



> -----Original Message-----
> From: Joerg Roedel [mailto:[email protected]]
> Sent: Tuesday, April 02, 2013 9:53 PM
> To: Sethi Varun-B16395
> Cc: Yoder Stuart-B08248; Wood Scott-B07421; [email protected]
> foundation.org; [email protected]; linux-
> [email protected]; [email protected];
> [email protected]
> Subject: Re: [PATCH 0/5 v11] iommu/fsl: Freescale PAMU driver and IOMMU
> API implementation.
>
> On Fri, Mar 29, 2013 at 01:23:57AM +0530, Varun Sethi wrote:
> > This patchset provides the Freescale PAMU (Peripheral Access
> > Management Unit) driver and the corresponding IOMMU API
> > implementation. PAMU is the IOMMU present on Freescale QorIQ
> > platforms. PAMU can authorize memory access, remap the memory address,
> and remap the I/O transaction type.
> >
> > This set consists of the following patches:
> > 1. Make iova dma_addr_t in the iommu_iova_to_phys API.
> > 2. Addition of new field in the device (powerpc) archdata structure for
> storing iommu domain information
> > pointer.
> > 3. Add window permission flags in the iommu_domain_window_enable API.
> > 4. Add domain attributes for FSL PAMU driver.
> > 5. PAMU driver and IOMMU API implementation.
>
> Okay, I am about to apply patches 1 and 3 to a new ppc/pamu branch in my
> tree.
>
> As a general question, how did you test the IOMMU driver and what will
> you do in the future to avoid regressions?
>
I use a kernel module for testing the iommu_api support. The module allows for dynamic creation and deletion of iommu domains for the devices in the device tree. Also, the vfio support (under development) for Freescale SoCs with PAMU hardware would depend on the PAMU driver.

-Varun

2013-04-03 01:45:57

by Timur Tabi

[permalink] [raw]
Subject: Re: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu implementation.

On Tue, Apr 2, 2013 at 11:18 AM, Joerg Roedel <[email protected]> wrote:

> > + panic("\n");
>
> A kernel panic seems like an over-reaction to an access violation.

We have no way of determining what code caused the violation, so we
can't just kill the process. I agree it seems like overkill, but what
else should we do? Does the IOMMU layer have a way for the IOMMU
driver to stop the device that caused the problem?

> Besides the device that caused the violation the system should still
> work, no?

Not really. The PAMU was designed to add IOMMU support to legacy
devices, which have no concept of an MMU. If the PAMU detects an
access violation, there's no way for the device to recover, because it
has no idea that a violation has occurred. It's going to keep on
writing to bad data.

Maybe we need a mechanism where a driver can register a callback
function to handle IOMMU exceptions?

> > + /*
> > + * In case of devices with multiple LIODNs just store
> > + * the info for the first LIODN as all
> > + * LIODNs share the same domain
> > + */
> > + if (!old_domain_info)
> > + dev->archdata.iommu_domain = info;
> > + spin_unlock(&device_domain_lock);
>
> Don't you have to tell the hardware that a device was added to a domain?
> I don't see that, what I am missing?

I'm not sure I understand. What "hardware" do you think needs to be notified?

The PAMU reads everything it needs from the PAACT, which we update.
The PAMU does not know anything about the devices that it monitors,
and the devices don't know anything about the PAMU.
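For reference, the generic IOMMU layer already provides the kind of hook asked about above: an IOMMU driver can forward faults with report_iommu_fault(), and the domain owner (for example VFIO) can register a per-domain handler with iommu_set_fault_handler(). A minimal sketch, with the handler name and messages made up for illustration:

    #include <linux/iommu.h>

    /* Hypothetical consumer-side handler; the name and messages are
     * illustrative only.
     */
    static int sample_iommu_fault_handler(struct iommu_domain *domain,
                                          struct device *dev,
                                          unsigned long iova, int flags,
                                          void *token)
    {
            dev_err(dev, "iommu fault at iova 0x%lx (flags 0x%x)\n",
                    iova, flags);
            return 0;       /* 0 = handled; non-zero lets the core report it */
    }

    /* Registered once per domain by the domain owner:
     *   iommu_set_fault_handler(domain, sample_iommu_fault_handler, NULL);
     *
     * The IOMMU driver's ISR would then forward the event with:
     *   report_iommu_fault(domain, dev, fault_iova, IOMMU_FAULT_WRITE);
     */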

2013-04-03 01:52:41

by Scott Wood

[permalink] [raw]
Subject: Re: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu implementation.

On 04/02/2013 08:35:54 PM, Timur Tabi wrote:
> On Tue, Apr 2, 2013 at 11:18 AM, Joerg Roedel <[email protected]> wrote:
>
> > > + panic("\n");
> >
> > A kernel panic seems like an over-reaction to an access violation.
>
> We have no way to determining what code caused the violation, so we
> can't just kill the process. I agree it seems like overkill, but what
> else should we do? Does the IOMMU layer have a way for the IOMMU
> driver to stop the device that caused the problem?

At a minimum, log a message and continue. Probably turn off the LIODN,
at least if it continues to be noisy (otherwise we could get stuck in
an interrupt storm as you note). Possibly let the user know somehow,
especially if it's a VFIO domain.

Don't take down the whole kernel. It's not just overkill; it
undermines VFIO's efforts to make it safe for users to control devices.

> > Besides the device that caused the violation the system should still
> > work, no?
>
> Not really. The PAMU was designed to add IOMMU support to legacy
> devices, which have no concept of an MMU. If the PAMU detects an
> access violation, there's no way for the device to recover, because it
> has no idea that a violation has occurred. It's going to keep on
> writing to bad data.

I think that's only the case for posted writes (or devices which fail
to take a hint and stop even after they see an I/O error).

-Scott

2013-04-03 05:12:21

by Sethi Varun-B16395

[permalink] [raw]
Subject: RE: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu implementation.



> -----Original Message-----
> From: Wood Scott-B07421
> Sent: Wednesday, April 03, 2013 7:23 AM
> To: Timur Tabi
> Cc: Joerg Roedel; Sethi Varun-B16395; lkml; Kumar Gala; Yoder Stuart-
> B08248; [email protected]; Benjamin Herrenschmidt;
> [email protected]
> Subject: Re: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu
> implementation.
>
> On 04/02/2013 08:35:54 PM, Timur Tabi wrote:
> > On Tue, Apr 2, 2013 at 11:18 AM, Joerg Roedel <[email protected]> wrote:
> >
> > > > + panic("\n");
> > >
> > > A kernel panic seems like an over-reaction to an access violation.
> >
> > We have no way to determining what code caused the violation, so we
> > can't just kill the process. I agree it seems like overkill, but what
> > else should we do? Does the IOMMU layer have a way for the IOMMU
> > driver to stop the device that caused the problem?
>
> At a minimum, log a message and continue. Probably turn off the LIODN,
> at least if it continues to be noisy (otherwise we could get stuck in an
> interrupt storm as you note). Possibly let the user know somehow,
> especially if it's a VFIO domain.
[Sethi Varun-B16395] We can definitely log the message and disable the LIODN (to avoid an
interrupt storm), but we definitely need a mechanism to inform the VFIO subsystem about the
error. Also, disabling the LIODN may not be a viable option with the new LIODN allocation
scheme (where the LIODN would be associated with a domain).

>
> Don't take down the whole kernel. It's not just overkill; it undermines
> VFIO's efforts to make it safe for users to control devices.
>
> > > Besides the device that caused the violation the system should still
> > > work, no?
> >
> > Not really. The PAMU was designed to add IOMMU support to legacy
> > devices, which have no concept of an MMU. If the PAMU detects an
> > access violation, there's no way for the device to recover, because it
> > has no idea that a violation has occurred. It's going to keep on
> > writing to bad data.
>
> I think that's only the case for posted writes (or devices which fail to
> take a hint and stop even after they see an I/O error).
>
[Sethi Varun-B16395] Even in the case where the guest driver detects a failure, it may not be able to fix the problem without intervention from the VFIO subsystem.

-Varun

2013-04-03 05:17:12

by Sethi Varun-B16395

[permalink] [raw]
Subject: RE: [PATCH 2/5 v11] powerpc: Add iommu domain pointer to device archdata

Kumar/Ben,
Any comments?

(Had checked with Ben (on IRC) sometime back, he was fine with this patch)

Regards
Varun


> -----Original Message-----
> From: Joerg Roedel [mailto:[email protected]]
> Sent: Tuesday, April 02, 2013 8:39 PM
> To: Sethi Varun-B16395
> Cc: Yoder Stuart-B08248; Wood Scott-B07421; [email protected]
> foundation.org; [email protected]; linux-
> [email protected]; [email protected];
> [email protected]
> Subject: Re: [PATCH 2/5 v11] powerpc: Add iommu domain pointer to device
> archdata
>
> On Fri, Mar 29, 2013 at 01:23:59AM +0530, Varun Sethi wrote:
> > Add an iommu domain pointer to device (powerpc) archdata. Devices are
> > attached to iommu domains and this pointer provides a mechanism to
> > correlate between a device and the associated iommu domain. This
> > field is set when a device is attached to a domain.
> >
> > Signed-off-by: Varun Sethi <[email protected]>
>
> This patch needs to be Acked by the PPC maintainers.
>
>

2013-04-03 05:21:37

by Sethi Varun-B16395

[permalink] [raw]
Subject: RE: [PATCH 4/5 v11] iommu/fsl: Add additional iommu attributes required by the PAMU driver.



> -----Original Message-----
> From: Joerg Roedel [mailto:[email protected]]
> Sent: Tuesday, April 02, 2013 8:40 PM
> To: Sethi Varun-B16395
> Cc: Yoder Stuart-B08248; Wood Scott-B07421; [email protected]
> foundation.org; [email protected]; linux-
> [email protected]; [email protected];
> [email protected]
> Subject: Re: [PATCH 4/5 v11] iommu/fsl: Add additional iommu attributes
> required by the PAMU driver.
>
> On Fri, Mar 29, 2013 at 01:24:01AM +0530, Varun Sethi wrote:
> > +/* cache stash targets */
> > +enum stash_target {
> > + IOMMU_ATTR_CACHE_L1 = 1,
> > + IOMMU_ATTR_CACHE_L2,
> > + IOMMU_ATTR_CACHE_L3,
> > +};
> > +
> > +/* This attribute corresponds to IOMMUs capable of generating
> > + * a stash transaction. A stash transaction is typically a
> > + * hardware initiated prefetch of data from memory to cache.
> > + * This attribute allows configuring stashig specific parameters
> > + * in the IOMMU hardware.
> > + */
> > +
> > +struct iommu_stash_attribute {
> > + u32 cpu; /* cpu number */
> > + u32 cache; /* cache to stash to: L1,L2,L3 */
> > +};
> > +
>
> I would prefer these PAMU specific enum and struct to be in a pamu-
> specific iommu-header.
>

[Sethi Varun-B16395] But these would be used by IOMMU API users (e.g. VFIO); they shouldn't depend on PAMU-specific headers.

-Varun
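For context, this is roughly how an IOMMU API user such as VFIO might consume the attribute from the quoted hunk; iommu_domain_set_attr() is the generic call, while the enumerator name DOMAIN_ATTR_FSL_PAMU_STASH is an assumption here (only the struct and cache targets come from the patch):

    #include <linux/iommu.h>

    /* Hedged sketch: ask the IOMMU to stash DMA data into CPU 0's L1.
     * DOMAIN_ATTR_FSL_PAMU_STASH is an assumed name for the attribute.
     */
    static int configure_stash(struct iommu_domain *domain)
    {
            struct iommu_stash_attribute stash = {
                    .cpu    = 0,
                    .cache  = IOMMU_ATTR_CACHE_L1,
            };

            return iommu_domain_set_attr(domain, DOMAIN_ATTR_FSL_PAMU_STASH,
                                         &stash);
    }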

2013-04-03 07:01:15

by Sethi Varun-B16395

[permalink] [raw]
Subject: RE: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu implementation.



> -----Original Message-----
> From: Joerg Roedel [mailto:[email protected]]
> Sent: Tuesday, April 02, 2013 9:48 PM
> To: Sethi Varun-B16395
> Cc: Yoder Stuart-B08248; Wood Scott-B07421; [email protected]
> foundation.org; [email protected]; linux-
> [email protected]; [email protected];
> [email protected]; Alex Williamson
> Subject: Re: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu
> implementation.
>
> Cc'ing Alex Williamson
>
> Alex, can you please review the iommu-group part of this patch?
>
> My comments so far are below:
>
> On Fri, Mar 29, 2013 at 01:24:02AM +0530, Varun Sethi wrote:
> > +config FSL_PAMU
> > + bool "Freescale IOMMU support"
> > + depends on PPC_E500MC
> > + select IOMMU_API
> > + select GENERIC_ALLOCATOR
> > + help
> > + Freescale PAMU support.
>
> A bit lame for a help text. Can you elaborate more what PAMU is and when
> it should be enabled?
>
[Sethi Varun-B16395] Will update the description.

> > +int pamu_enable_liodn(int liodn)
> > +{
> > + struct paace *ppaace;
> > +
> > + ppaace = pamu_get_ppaace(liodn);
> > + if (!ppaace) {
> > + pr_err("Invalid primary paace entry\n");
> > + return -ENOENT;
> > + }
> > +
> > + if (!get_bf(ppaace->addr_bitfields, PPAACE_AF_WSE)) {
> > + pr_err("liodn %d not configured\n", liodn);
> > + return -EINVAL;
> > + }
> > +
> > + /* Ensure that all other stores to the ppaace complete first */
> > + mb();
> > +
> > + ppaace->addr_bitfields |= PAACE_V_VALID;
> > + mb();
>
> Why is it sufficient to set the bit in a variable when enabling liodn but
> when disabling it set_bf needs to be called? This looks a bit asymmetric.
>
[Sethi Varun-B16395] Will make it symmetric :)
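A minimal sketch of the symmetric version being agreed on here, reusing the quoted checks and assuming the driver's set_bf() helper and a PAACE_AF_V field name matching the disable path (that field name is an assumption):

    int pamu_enable_liodn(int liodn)
    {
            struct paace *ppaace;

            ppaace = pamu_get_ppaace(liodn);
            if (!ppaace) {
                    pr_err("Invalid primary paace entry\n");
                    return -ENOENT;
            }

            if (!get_bf(ppaace->addr_bitfields, PPAACE_AF_WSE)) {
                    pr_err("liodn %d not configured\n", liodn);
                    return -EINVAL;
            }

            /* Ensure that all other stores to the ppaace complete first */
            mb();

            /* Use the same bitfield helper as the disable path */
            set_bf(ppaace->addr_bitfields, PAACE_AF_V, PAACE_V_VALID);
            mb();

            return 0;
    }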

> > +/* Derive the window size encoding for a particular PAACE entry */
> > +static unsigned int map_addrspace_size_to_wse(phys_addr_t
> > +addrspace_size) {
> > + /* Bug if not a power of 2 */
> > + BUG_ON((addrspace_size & (addrspace_size - 1)));
>
> Please use is_power_of_2 here.
>
> > +
> > + /* window size is 2^(WSE+1) bytes */
> > + return __ffs(addrspace_size >> PAMU_PAGE_SHIFT) + PAMU_PAGE_SHIFT -
> > +1;
>
> The PAMU_PAGE_SHIFT shifting and adding looks redundant.
>
> > + if ((win_size & (win_size - 1)) || win_size < PAMU_PAGE_SIZE) {
> > + pr_err("window size too small or not a power of two %llx\n",
> win_size);
> > + return -EINVAL;
> > + }
> > +
> > + if (win_addr & (win_size - 1)) {
> > + pr_err("window address is not aligned with window size\n");
> > + return -EINVAL;
> > + }
>
> Again, use is_power_of_2 instead of hand-coding.
>
[Sethi Varun-B16395] ok
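A hedged sketch of the window checks rewritten as suggested, using is_power_of_2() from <linux/log2.h>; PAMU_PAGE_SIZE and the messages come from the quoted hunk. Note that is_power_of_2() takes an unsigned long, so this assumes the window size fits in one (the open-coded test handled the full phys_addr_t range):

    #include <linux/log2.h>

            if (!is_power_of_2(win_size) || win_size < PAMU_PAGE_SIZE) {
                    pr_err("window size too small or not a power of two %llx\n",
                           win_size);
                    return -EINVAL;
            }

            if (win_addr & (win_size - 1)) {
                    pr_err("window address is not aligned with window size\n");
                    return -EINVAL;
            }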

> > + if (~stashid != 0)
> > + set_bf(paace->impl_attr, PAACE_IA_CID, stashid);
> > +
> > + smp_wmb();
> > +
> > + if (enable)
> > + paace->addr_bitfields |= PAACE_V_VALID;
>
> Haven't you written a helper function to set this bit?
>
[Sethi Varun-B16395] We already have the PAACE entry here, so we can manipulate it directly.

> > +irqreturn_t pamu_av_isr(int irq, void *arg) {
> > + struct pamu_isr_data *data = arg;
> > + phys_addr_t phys;
> > + unsigned int i, j;
> > +
> > + pr_emerg("fsl-pamu: access violation interrupt\n");
> > +
> > + for (i = 0; i < data->count; i++) {
> > + void __iomem *p = data->pamu_reg_base + i * PAMU_OFFSET;
> > + u32 pics = in_be32(p + PAMU_PICS);
> > +
> > + if (pics & PAMU_ACCESS_VIOLATION_STAT) {
> > + pr_emerg("POES1=%08x\n", in_be32(p + PAMU_POES1));
> > + pr_emerg("POES2=%08x\n", in_be32(p + PAMU_POES2));
> > + pr_emerg("AVS1=%08x\n", in_be32(p + PAMU_AVS1));
> > + pr_emerg("AVS2=%08x\n", in_be32(p + PAMU_AVS2));
> > + pr_emerg("AVA=%016llx\n", make64(in_be32(p +
> PAMU_AVAH),
> > + in_be32(p + PAMU_AVAL)));
> > + pr_emerg("UDAD=%08x\n", in_be32(p + PAMU_UDAD));
> > + pr_emerg("POEA=%016llx\n", make64(in_be32(p +
> PAMU_POEAH),
> > + in_be32(p + PAMU_POEAL)));
> > +
> > + phys = make64(in_be32(p + PAMU_POEAH),
> > + in_be32(p + PAMU_POEAL));
> > +
> > + /* Assume that POEA points to a PAACE */
> > + if (phys) {
> > + u32 *paace = phys_to_virt(phys);
> > +
> > + /* Only the first four words are relevant */
> > + for (j = 0; j < 4; j++)
> > + pr_emerg("PAACE[%u]=%08x\n", j,
> in_be32(paace + j));
> > + }
> > + }
> > + }
> > +
> > + panic("\n");
>
> A kernel panic seems like an over-reaction to an access violation.
> Besides the device that caused the violation the system should still
> work, no?
>
[Sethi Varun-B16395] Well, if the device continues to DMA outside the set aperture then it's a serious problem. We can run into an interrupt storm (access violations).
An alternative could be to disable the LIODN, but after that the device's DMAs would never go through. So, for the guest, the device is not functional.

> > +#define make64(high, low) (((u64)(high) << 32) | (low))
>
> You redefined this make64 here.
[Sethi Varun-B16395] :(

>
> > +static int map_subwins(int liodn, struct fsl_dma_domain *dma_domain)
> > +{
> > + struct dma_window *sub_win_ptr =
> > + &dma_domain->win_arr[0];
> > + int i, ret;
> > + unsigned long rpn;
> > +
> > + for (i = 0; i < dma_domain->win_cnt; i++) {
> > + if (sub_win_ptr[i].valid) {
> > + rpn = sub_win_ptr[i].paddr >>
> > + PAMU_PAGE_SHIFT;
> > + spin_lock(&iommu_lock);
>
> IOMMU code might run in interrupt context, so please use
> spin_lock_irqsave for the iommu_lock.
[Sethi Varun-B16395] ok
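A hedged sketch of the quoted loop with the locking change applied; the body that actually programs the sub-window PAACE entry is elided, since it is not part of the quoted hunk:

    static int map_subwins(int liodn, struct fsl_dma_domain *dma_domain)
    {
            struct dma_window *sub_win_ptr = &dma_domain->win_arr[0];
            unsigned long rpn, flags;
            int i;

            for (i = 0; i < dma_domain->win_cnt; i++) {
                    if (!sub_win_ptr[i].valid)
                            continue;

                    rpn = sub_win_ptr[i].paddr >> PAMU_PAGE_SHIFT;

                    /* IOMMU paths may run in interrupt context, so take the
                     * lock with the irqsave variant rather than spin_lock().
                     */
                    spin_lock_irqsave(&iommu_lock, flags);
                    /* ... program the sub-window PAACE entry for this rpn ... */
                    spin_unlock_irqrestore(&iommu_lock, flags);
            }

            return 0;
    }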

>
> > +static void detach_device(struct device *dev, struct fsl_dma_domain
> > +*dma_domain) {
> > + struct device_domain_info *info;
> > + struct list_head *entry, *tmp;
> > + unsigned long flags;
> > +
> > + spin_lock_irqsave(&dma_domain->domain_lock, flags);
> > + /* Remove the device from the domain device list */
> > + if (!list_empty(&dma_domain->devices)) {
> > + list_for_each_safe(entry, tmp, &dma_domain->devices) {
> > + info = list_entry(entry, struct device_domain_info,
> link);
> > + if (!dev || (info->dev == dev))
> > + remove_device_ref(info, dma_domain->win_cnt);
> > + }
> > + }
> > + spin_unlock_irqrestore(&dma_domain->domain_lock, flags);
>
> list_empty check is not needed. You can also use
> list_for_each_entry_safe.
[Sethi Varun-B16395] ok.
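A hedged sketch of detach_device() reworked per the comment, dropping the list_empty() check and iterating with list_for_each_entry_safe(); the types and remove_device_ref() come from the quoted hunk:

    static void detach_device(struct device *dev, struct fsl_dma_domain *dma_domain)
    {
            struct device_domain_info *info, *tmp;
            unsigned long flags;

            spin_lock_irqsave(&dma_domain->domain_lock, flags);
            /* An empty list simply makes the loop a no-op */
            list_for_each_entry_safe(info, tmp, &dma_domain->devices, link) {
                    if (!dev || info->dev == dev)
                            remove_device_ref(info, dma_domain->win_cnt);
            }
            spin_unlock_irqrestore(&dma_domain->domain_lock, flags);
    }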

>
> > +static void attach_device(struct fsl_dma_domain *dma_domain, int
> > +liodn, struct device *dev) {
> > + struct device_domain_info *info, *old_domain_info;
> > +
> > + spin_lock(&device_domain_lock);
> > + /*
> > + * Check here if the device is already attached to domain or not.
> > + * If the device is already attached to a domain detach it.
> > + */
> > + old_domain_info = find_domain(dev);
> > + if (old_domain_info && old_domain_info->domain != dma_domain) {
> > + spin_unlock(&device_domain_lock);
> > + detach_device(dev, old_domain_info->domain);
> > + spin_lock(&device_domain_lock);
> > + }
> > +
> > + info = kmem_cache_zalloc(iommu_devinfo_cache, GFP_KERNEL);
> > +
> > + info->dev = dev;
> > + info->liodn = liodn;
> > + info->domain = dma_domain;
> > +
> > + list_add(&info->link, &dma_domain->devices);
> > + /*
> > + * In case of devices with multiple LIODNs just store
> > + * the info for the first LIODN as all
> > + * LIODNs share the same domain
> > + */
> > + if (!old_domain_info)
> > + dev->archdata.iommu_domain = info;
> > + spin_unlock(&device_domain_lock);
>
> Don't you have to tell the hardware that a device was added to a domain?
> I don't see that, what I am missing?
[Sethi Varun-B16395] Not sure I understand; once the device is attached to the domain, we can do the PAMU window setup corresponding to the device's LIODN (if the window information is available).

>
> > +static void swap_pci_ref(struct pci_dev **from, struct pci_dev *to) {
> > + pci_dev_put(*from);
> > + *from = to;
> > +}
>
> Hmm, looks like this function is re-implemented in a few IOMMU drivers.
> Want to use the chance to consolidate these implementations?
>

[Sethi Varun-B16395] Will consolidate :)

-Varun

2013-04-03 08:09:07

by Joerg Roedel

[permalink] [raw]
Subject: Re: [PATCH 4/5 v11] iommu/fsl: Add additional iommu attributes required by the PAMU driver.

On Wed, Apr 03, 2013 at 05:21:16AM +0000, Sethi Varun-B16395 wrote:
> > I would prefer these PAMU specific enum and struct to be in a pamu-
> > specific iommu-header.
> >
>
> [Sethi Varun-B16395] But, these would be used by the IOMMU API users
> (e.g. VFIO), they shouldn't depend on PAMU specific headers.

Drivers that use PAMU specifics (like the PAMU specific VFIO parts) can
also include the PAMU specific header.


Joerg

2013-04-03 15:56:05

by Yoder Stuart-B08248

[permalink] [raw]
Subject: RE: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu implementation.



> -----Original Message-----
> From: Sethi Varun-B16395
> Sent: Wednesday, April 03, 2013 12:12 AM
> To: Wood Scott-B07421; Timur Tabi
> Cc: Joerg Roedel; lkml; Kumar Gala; Yoder Stuart-B08248; [email protected]; Benjamin
> Herrenschmidt; [email protected]
> Subject: RE: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu implementation.
>
>
>
> > -----Original Message-----
> > From: Wood Scott-B07421
> > Sent: Wednesday, April 03, 2013 7:23 AM
> > To: Timur Tabi
> > Cc: Joerg Roedel; Sethi Varun-B16395; lkml; Kumar Gala; Yoder Stuart-
> > B08248; [email protected]; Benjamin Herrenschmidt;
> > [email protected]
> > Subject: Re: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu
> > implementation.
> >
> > On 04/02/2013 08:35:54 PM, Timur Tabi wrote:
> > > On Tue, Apr 2, 2013 at 11:18 AM, Joerg Roedel <[email protected]> wrote:
> > >
> > > > > + panic("\n");
> > > >
> > > > A kernel panic seems like an over-reaction to an access violation.
> > >
> > > We have no way to determining what code caused the violation, so we
> > > can't just kill the process. I agree it seems like overkill, but what
> > > else should we do? Does the IOMMU layer have a way for the IOMMU
> > > driver to stop the device that caused the problem?
> >
> > At a minimum, log a message and continue. Probably turn off the LIODN,
> > at least if it continues to be noisy (otherwise we could get stuck in an
> > interrupt storm as you note). Possibly let the user know somehow,
> > especially if it's a VFIO domain.
> [Sethi Varun-B16395] Can definitely log the message and disable the LIODN (to avoid an interrupt storm),
> but
> we definitely need a mechanism to inform vfio subsystem about the error. Also, disabling LIODN may not
> be a viable
> option with the new LIODN allocation scheme (where LIODN would be associated with a domain).

I think for phase 1 of this, just log the error and shut down DMA as you described.
We can implement more full-featured error management, like notifying VFIO
or the VM somehow, in the future.

Stuart
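A minimal sketch of the "log and shut down DMA" handling described above, applied to the interrupt handler quoted earlier; pamu_disable_liodn() is assumed to be the counterpart of pamu_enable_liodn(), and PAMU_AVS1_LIODN_SHIFT is an assumed name for the LIODN field in AVS1:

            if (pics & PAMU_ACCESS_VIOLATION_STAT) {
                    u32 avs1 = in_be32(p + PAMU_AVS1);
                    int liodn = avs1 >> PAMU_AVS1_LIODN_SHIFT; /* assumed layout */

                    pr_err("fsl-pamu: access violation on LIODN %d\n", liodn);

                    /* Stop further DMA from the offending LIODN instead of
                     * panicking, and acknowledge the violation status so we
                     * do not loop in an interrupt storm.
                     */
                    pamu_disable_liodn(liodn);
                    out_be32(p + PAMU_PICS, pics);
            }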

2013-04-03 18:01:46

by Alex Williamson

[permalink] [raw]
Subject: Re: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu implementation.

On Tue, 2013-04-02 at 18:18 +0200, Joerg Roedel wrote:
> Cc'ing Alex Williamson
>
> Alex, can you please review the iommu-group part of this patch?

Sure, it looks pretty reasonable. AIUI, all PCI devices are below some
kind of host bridge that is either new and supports partitioning or old
and doesn't. I don't know if that's a visibility or isolation
requirement, perhaps PCI ACS-ish. In the new host bridge case, each
device gets a group. This seems not to have any quirks for
multifunction devices though. On AMD and Intel IOMMUs we test
multifunction device ACS support to determine whether all the functions
should be in the same group. Is there any reason to trust multifunction
devices on PAMU?

I also find it curious what happens to the iommu group of the host
bridge. In the partitionable case the host bridge group is removed, in
the non-partitionable case the host bridge group becomes the group for
the children, removing the host bridge. It's unique to PAMU so far that
these host bridges are even in an iommu group (x86 only adds pci
devices), but I don't see it as necessarily wrong leaving it in either
scenario. Does it solve some problem to remove them from the groups?
Thanks,

Alex
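A hedged sketch of the multifunction check described above, in the style of the x86 drivers; pci_acs_enabled() is the generic helper, while the function name and flag set are choices made for illustration:

    #include <linux/pci.h>

    #define REQ_ACS_FLAGS   (PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF)

    /* Functions of a multifunction device only deserve separate IOMMU groups
     * when the device isolates them from each other (ACS); otherwise all
     * functions should land in one group.
     */
    static bool pamu_mf_functions_isolated(struct pci_dev *pdev)
    {
            if (!pdev->multifunction)
                    return true;

            return pci_acs_enabled(pdev, REQ_ACS_FLAGS);
    }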

2013-04-04 13:00:45

by Sethi Varun-B16395

[permalink] [raw]
Subject: RE: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu implementation.



> -----Original Message-----
> From: Alex Williamson [mailto:[email protected]]
> Sent: Wednesday, April 03, 2013 11:32 PM
> To: Joerg Roedel
> Cc: Sethi Varun-B16395; Yoder Stuart-B08248; Wood Scott-B07421;
> [email protected]; [email protected]; linux-
> [email protected]; [email protected];
> [email protected]
> Subject: Re: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu
> implementation.
>
> On Tue, 2013-04-02 at 18:18 +0200, Joerg Roedel wrote:
> > Cc'ing Alex Williamson
> >
> > Alex, can you please review the iommu-group part of this patch?
>
> Sure, it looks pretty reasonable. AIUI, all PCI devices are below some
> kind of host bridge that is either new and supports partitioning or old
> and doesn't. I don't know if that's a visibility or isolation
> requirement, perhaps PCI ACS-ish. In the new host bridge case, each
> device gets a group. This seems not to have any quirks for multifunction
> devices though. On AMD and Intel IOMMUs we test multifunction device ACS
> support to determine whether all the functions should be in the same
> group. Is there any reason to trust multifunction devices on PAMU?
>
[Sethi Varun-B16395] In the case where we can partition endpoints, we can distinguish transactions based on the bus, device, function number combination. This support is available in the PCIe controller (host bridge).

> I also find it curious what happens to the iommu group of the host
> bridge. In the partitionable case the host bridge group is removed, in
> the non-partitionable case the host bridge group becomes the group for
> the children, removing the host bridge. It's unique to PAMU so far that
> these host bridges are even in an iommu group (x86 only adds pci
> devices), but I don't see it as necessarily wrong leaving it in either
> scenario. Does it solve some problem to remove them from the groups?
> Thanks,
[Sethi Varun-B16395] The PCIe controller isn't a partitionable entity, it would always be owned by the host.

-Varun


2013-04-04 15:22:28

by Alex Williamson

[permalink] [raw]
Subject: Re: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu implementation.

On Thu, 2013-04-04 at 13:00 +0000, Sethi Varun-B16395 wrote:
>
> > -----Original Message-----
> > From: Alex Williamson [mailto:[email protected]]
> > Sent: Wednesday, April 03, 2013 11:32 PM
> > To: Joerg Roedel
> > Cc: Sethi Varun-B16395; Yoder Stuart-B08248; Wood Scott-B07421;
> > [email protected]; [email protected]; linux-
> > [email protected]; [email protected];
> > [email protected]
> > Subject: Re: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu
> > implementation.
> >
> > On Tue, 2013-04-02 at 18:18 +0200, Joerg Roedel wrote:
> > > Cc'ing Alex Williamson
> > >
> > > Alex, can you please review the iommu-group part of this patch?
> >
> > Sure, it looks pretty reasonable. AIUI, all PCI devices are below some
> > kind of host bridge that is either new and supports partitioning or old
> > and doesn't. I don't know if that's a visibility or isolation
> > requirement, perhaps PCI ACS-ish. In the new host bridge case, each
> > device gets a group. This seems not to have any quirks for multifunction
> > devices though. On AMD and Intel IOMMUs we test multifunction device ACS
> > support to determine whether all the functions should be in the same
> > group. Is there any reason to trust multifunction devices on PAMU?
> >
> [Sethi Varun-B16395] In the case where we can partition endpoints we
> can distinguish transactions based on the bus,device,function number
> combination. This support is available in the PCIe controller (host
> bridge).

So can x86 IOMMUs, that's the visibility aspect of IOMMU groups.
Visibility alone doesn't necessarily imply that a device is isolated
though. A multifunction PCI device that doesn't expose ACS support may
not isolate functions from each other. For example a peer-to-peer DMA
between functions may not be translated by the upstream IOMMU. IOMMU
groups should encompass both visibility and isolation.

> > I also find it curious what happens to the iommu group of the host
> > bridge. In the partitionable case the host bridge group is removed, in
> > the non-partitionable case the host bridge group becomes the group for
> > the children, removing the host bridge. It's unique to PAMU so far that
> > these host bridges are even in an iommu group (x86 only adds pci
> > devices), but I don't see it as necessarily wrong leaving it in either
> > scenario. Does it solve some problem to remove them from the groups?
> > Thanks,
> [Sethi Varun-B16395] The PCIe controller isn't a partitionable entity,
> it would always be owned by the host.

Ownership of a device shouldn't play into the group context. An IOMMU
group should be defined by its visibility and isolation from other
devices. Whether the PCIe controller is allowed to be handed to
userspace is a question for VFIO. Thanks,

Alex

2013-04-04 16:35:48

by Sethi Varun-B16395

[permalink] [raw]
Subject: RE: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu implementation.



> -----Original Message-----
> From: Alex Williamson [mailto:[email protected]]
> Sent: Thursday, April 04, 2013 8:52 PM
> To: Sethi Varun-B16395
> Cc: Joerg Roedel; Yoder Stuart-B08248; Wood Scott-B07421;
> [email protected]; [email protected]; linux-
> [email protected]; [email protected];
> [email protected]
> Subject: Re: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu
> implementation.
>
> On Thu, 2013-04-04 at 13:00 +0000, Sethi Varun-B16395 wrote:
> >
> > > -----Original Message-----
> > > From: Alex Williamson [mailto:[email protected]]
> > > Sent: Wednesday, April 03, 2013 11:32 PM
> > > To: Joerg Roedel
> > > Cc: Sethi Varun-B16395; Yoder Stuart-B08248; Wood Scott-B07421;
> > > [email protected]; [email protected];
> > > linux- [email protected]; [email protected];
> > > [email protected]
> > > Subject: Re: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and
> > > iommu implementation.
> > >
> > > On Tue, 2013-04-02 at 18:18 +0200, Joerg Roedel wrote:
> > > > Cc'ing Alex Williamson
> > > >
> > > > Alex, can you please review the iommu-group part of this patch?
> > >
> > > Sure, it looks pretty reasonable. AIUI, all PCI devices are below
> > > some kind of host bridge that is either new and supports
> > > partitioning or old and doesn't. I don't know if that's a
> > > visibility or isolation requirement, perhaps PCI ACS-ish. In the
> > > new host bridge case, each device gets a group. This seems not to
> > > have any quirks for multifunction devices though. On AMD and Intel
> > > IOMMUs we test multifunction device ACS support to determine whether
> > > all the functions should be in the same group. Is there any reason
> to trust multifunction devices on PAMU?
> > >
> > [Sethi Varun-B16395] In the case where we can partition endpoints we
> > can distinguish transactions based on the bus,device,function number
> > combination. This support is available in the PCIe controller (host
> > bridge).
>
> So can x86 IOMMUs, that's the visibility aspect of IOMMU groups.
> Visibility alone doesn't necessarily imply that a device is isolated
> though. A multifunction PCI device that doesn't expose ACS support may
> not isolate functions from each other. For example a peer-to-peer DMA
> between functions may not be translated by the upstream IOMMU. IOMMU
> groups should encompass both visibility and isolation.
[Sethi Varun-B16395] We can isolate DMA access to the host based on the PCI bus, device, function number.
I thought that was enough to put devices into separate iommu groups. This is a PCIe controller property which allows us to partition PCIe devices. But what I understand from your point is that we also need to consider isolation at the PCIe device level as well. I will check the case of multifunction devices.

>
> > > I also find it curious what happens to the iommu group of the host
> > > bridge. In the partitionable case the host bridge group is removed,
> > > in the non-partitionable case the host bridge group becomes the
> > > group for the children, removing the host bridge. It's unique to
> > > PAMU so far that these host bridges are even in an iommu group (x86
> > > only adds pci devices), but I don't see it as necessarily wrong
> > > leaving it in either scenario. Does it solve some problem to remove
> them from the groups?
> > > Thanks,
> > [Sethi Varun-B16395] The PCIe controller isn't a partitionable entity,
> > it would always be owned by the host.
>
> Ownership of a device shouldn't play into the group context. An IOMMU
> group should be defined by it's visibility and isolation from other
> devices. Whether the PCIe controller is allowed to be handed to
> userspace is a question for VFIO.
[Sethi Varun-B16395] The problem is in the case where we can't partition PCIe devices: the PCIe devices share the same device group as the PCI controller. This becomes a problem while assigning the devices to the guest, as you are required to unbind all the PCIe devices, including the controller, from the host. The PCIe controller can't be unbound from the host, so we simply delete the controller's iommu_group.

-Varun



2013-04-04 16:43:13

by Yoder Stuart-B08248

[permalink] [raw]
Subject: RE: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu implementation.



> -----Original Message-----
> From: Alex Williamson [mailto:[email protected]]
> Sent: Thursday, April 04, 2013 10:22 AM
> To: Sethi Varun-B16395
> Cc: Joerg Roedel; Yoder Stuart-B08248; Wood Scott-B07421; [email protected]; linuxppc-
> [email protected]; [email protected]; [email protected]; [email protected]
> Subject: Re: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu implementation.
>
> On Thu, 2013-04-04 at 13:00 +0000, Sethi Varun-B16395 wrote:
> >
> > > -----Original Message-----
> > > From: Alex Williamson [mailto:[email protected]]
> > > Sent: Wednesday, April 03, 2013 11:32 PM
> > > To: Joerg Roedel
> > > Cc: Sethi Varun-B16395; Yoder Stuart-B08248; Wood Scott-B07421;
> > > [email protected]; [email protected]; linux-
> > > [email protected]; [email protected];
> > > [email protected]
> > > Subject: Re: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu
> > > implementation.
> > >
> > > On Tue, 2013-04-02 at 18:18 +0200, Joerg Roedel wrote:
> > > > Cc'ing Alex Williamson
> > > >
> > > > Alex, can you please review the iommu-group part of this patch?
> > >
> > > Sure, it looks pretty reasonable. AIUI, all PCI devices are below some
> > > kind of host bridge that is either new and supports partitioning or old
> > > and doesn't. I don't know if that's a visibility or isolation
> > > requirement, perhaps PCI ACS-ish. In the new host bridge case, each
> > > device gets a group. This seems not to have any quirks for multifunction
> > > devices though. On AMD and Intel IOMMUs we test multifunction device ACS
> > > support to determine whether all the functions should be in the same
> > > group. Is there any reason to trust multifunction devices on PAMU?
> > >
> > [Sethi Varun-B16395] In the case where we can partition endpoints we
> > can distinguish transactions based on the bus,device,function number
> > combination. This support is available in the PCIe controller (host
> > bridge).
>
> So can x86 IOMMUs, that's the visibility aspect of IOMMU groups.
> Visibility alone doesn't necessarily imply that a device is isolated
> though. A multifunction PCI device that doesn't expose ACS support may
> not isolate functions from each other. For example a peer-to-peer DMA
> between functions may not be translated by the upstream IOMMU. IOMMU
> groups should encompass both visibility and isolation.
>
> > > I also find it curious what happens to the iommu group of the host
> > > bridge. In the partitionable case the host bridge group is removed, in
> > > the non-partitionable case the host bridge group becomes the group for
> > > the children, removing the host bridge. It's unique to PAMU so far that
> > > these host bridges are even in an iommu group (x86 only adds pci
> > > devices), but I don't see it as necessarily wrong leaving it in either
> > > scenario. Does it solve some problem to remove them from the groups?
> > > Thanks,
> > [Sethi Varun-B16395] The PCIe controller isn't a partitionable entity,
> > it would always be owned by the host.
>
> Ownership of a device shouldn't play into the group context. An IOMMU
> group should be defined by it's visibility and isolation from other
> devices. Whether the PCIe controller is allowed to be handed to
> userspace is a question for VFIO. Thanks,

Right now the add_device() callback gets called for all platform
devices (including the PCI controller) and PCI devices. PCI controllers
are a kind of special case in that their device tree node has a
property indicating that they are DMA capable... but in fact the
PCI controller itself does not do DMA; it is the PCI endpoints
under it that do.

So, as you noted, the bridge/controller shouldn't be in an IOMMU
group, and since the platform 'add device' code didn't special-case
PCI controllers, this patch removes their group if it's there.

Stuart
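A minimal sketch of what removing the controller's group could look like with the generic group API; the exact check the patch uses to recognize a PCI controller is not reproduced here:

            struct iommu_group *group;

            /* The PCI controller itself does no DMA; if the platform code
             * already put it in a group, take it back out.
             */
            group = iommu_group_get(dev);
            if (group) {
                    iommu_group_remove_device(dev);
                    iommu_group_put(group);
            }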

2013-04-04 16:43:56

by Alex Williamson

[permalink] [raw]
Subject: Re: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu implementation.

On Thu, 2013-04-04 at 16:35 +0000, Sethi Varun-B16395 wrote:
>
> > -----Original Message-----
> > From: Alex Williamson [mailto:[email protected]]
> > Sent: Thursday, April 04, 2013 8:52 PM
> > To: Sethi Varun-B16395
> > Cc: Joerg Roedel; Yoder Stuart-B08248; Wood Scott-B07421;
> > [email protected]; [email protected]; linux-
> > [email protected]; [email protected];
> > [email protected]
> > Subject: Re: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu
> > implementation.
> >
> > On Thu, 2013-04-04 at 13:00 +0000, Sethi Varun-B16395 wrote:
> > >
> > > > -----Original Message-----
> > > > From: Alex Williamson [mailto:[email protected]]
> > > > Sent: Wednesday, April 03, 2013 11:32 PM
> > > > To: Joerg Roedel
> > > > Cc: Sethi Varun-B16395; Yoder Stuart-B08248; Wood Scott-B07421;
> > > > [email protected]; [email protected];
> > > > linux- [email protected]; [email protected];
> > > > [email protected]
> > > > Subject: Re: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and
> > > > iommu implementation.
> > > >
> > > > On Tue, 2013-04-02 at 18:18 +0200, Joerg Roedel wrote:
> > > > > Cc'ing Alex Williamson
> > > > >
> > > > > Alex, can you please review the iommu-group part of this patch?
> > > >
> > > > Sure, it looks pretty reasonable. AIUI, all PCI devices are below
> > > > some kind of host bridge that is either new and supports
> > > > partitioning or old and doesn't. I don't know if that's a
> > > > visibility or isolation requirement, perhaps PCI ACS-ish. In the
> > > > new host bridge case, each device gets a group. This seems not to
> > > > have any quirks for multifunction devices though. On AMD and Intel
> > > > IOMMUs we test multifunction device ACS support to determine whether
> > > > all the functions should be in the same group. Is there any reason
> > to trust multifunction devices on PAMU?
> > > >
> > > [Sethi Varun-B16395] In the case where we can partition endpoints we
> > > can distinguish transactions based on the bus,device,function number
> > > combination. This support is available in the PCIe controller (host
> > > bridge).
> >
> > So can x86 IOMMUs, that's the visibility aspect of IOMMU groups.
> > Visibility alone doesn't necessarily imply that a device is isolated
> > though. A multifunction PCI device that doesn't expose ACS support may
> > not isolate functions from each other. For example a peer-to-peer DMA
> > between functions may not be translated by the upstream IOMMU. IOMMU
> > groups should encompass both visibility and isolation.
> [Sethi Varun-B16395] We can isolate the DMA access to the host based
> on the to the pci bus,device,function number.

The IOMMU can only isolate DMA that it can see. A multifunction device
may never expose peer-to-peer DMA to the upstream device; that is
implementation specific. The ACS flags allow that possibility to be
controlled and prevented.

> I thought that was enough to put devices in to separate iommu groups.
> This is a PCIe controller property which allows us to partition PCIe
> devices. But, what I can understand from your point is that we also
> need to consider isolation at PCIe device level as well. I will check
> for the case of multifunction devices.
>
> >
> > > > I also find it curious what happens to the iommu group of the host
> > > > bridge. In the partitionable case the host bridge group is removed,
> > > > in the non-partitionable case the host bridge group becomes the
> > > > group for the children, removing the host bridge. It's unique to
> > > > PAMU so far that these host bridges are even in an iommu group (x86
> > > > only adds pci devices), but I don't see it as necessarily wrong
> > > > leaving it in either scenario. Does it solve some problem to remove
> > them from the groups?
> > > > Thanks,
> > > [Sethi Varun-B16395] The PCIe controller isn't a partitionable entity,
> > > it would always be owned by the host.
> >
> > Ownership of a device shouldn't play into the group context. An IOMMU
> > group should be defined by it's visibility and isolation from other
> > devices. Whether the PCIe controller is allowed to be handed to
> > userspace is a question for VFIO.
> [Sethi Varun-B16395] The problem is in the case, where we can't
> partition PCIe devices. PCIe devices share the same device group as
> the PCI controller. This becomes a problem while assigning the devices
> to the guest, as you are required to unbind all the PCIe devices
> including the controller from the host. PCIe controller can't be
> unbound from the host, so we simply delete the controller iommu_group.

Unbinding devices is a VFIO implementation, it shouldn't leak into IOMMU
groups. Also note that VFIO has a driver white list where we can have
exceptions to the rule. I recently added pciehp to that list because
the host driver provides functionality. Being attached to the host
driver means the device is not accessible to the user through VFIO, but
other devices in the group are. Thanks,

Alex

2013-04-05 00:01:44

by Sethi Varun-B16395

[permalink] [raw]
Subject: RE: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu implementation.



> -----Original Message-----
> From: Alex Williamson [mailto:[email protected]]
> Sent: Thursday, April 04, 2013 10:14 PM
> To: Sethi Varun-B16395
> Cc: Joerg Roedel; Yoder Stuart-B08248; Wood Scott-B07421;
> [email protected]; [email protected]; linux-
> [email protected]; [email protected];
> [email protected]
> Subject: Re: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu
> implementation.
>
> On Thu, 2013-04-04 at 16:35 +0000, Sethi Varun-B16395 wrote:
> >
> > > -----Original Message-----
> > > From: Alex Williamson [mailto:[email protected]]
> > > Sent: Thursday, April 04, 2013 8:52 PM
> > > To: Sethi Varun-B16395
> > > Cc: Joerg Roedel; Yoder Stuart-B08248; Wood Scott-B07421;
> > > [email protected]; [email protected];
> > > linux- [email protected]; [email protected];
> > > [email protected]
> > > Subject: Re: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and
> > > iommu implementation.
> > >
> > > On Thu, 2013-04-04 at 13:00 +0000, Sethi Varun-B16395 wrote:
> > > >
> > > > > -----Original Message-----
> > > > > From: Alex Williamson [mailto:[email protected]]
> > > > > Sent: Wednesday, April 03, 2013 11:32 PM
> > > > > To: Joerg Roedel
> > > > > Cc: Sethi Varun-B16395; Yoder Stuart-B08248; Wood Scott-B07421;
> > > > > [email protected]; [email protected];
> > > > > linux- [email protected]; [email protected];
> > > > > [email protected]
> > > > > Subject: Re: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver
> > > > > and iommu implementation.
> > > > >
> > > > > On Tue, 2013-04-02 at 18:18 +0200, Joerg Roedel wrote:
> > > > > > Cc'ing Alex Williamson
> > > > > >
> > > > > > Alex, can you please review the iommu-group part of this patch?
> > > > >
> > > > > Sure, it looks pretty reasonable. AIUI, all PCI devices are
> > > > > below some kind of host bridge that is either new and supports
> > > > > partitioning or old and doesn't. I don't know if that's a
> > > > > visibility or isolation requirement, perhaps PCI ACS-ish. In
> > > > > the new host bridge case, each device gets a group. This seems
> > > > > not to have any quirks for multifunction devices though. On AMD
> > > > > and Intel IOMMUs we test multifunction device ACS support to
> > > > > determine whether all the functions should be in the same group.
> > > > > Is there any reason
> > > to trust multifunction devices on PAMU?
> > > > >
> > > > [Sethi Varun-B16395] In the case where we can partition endpoints
> > > > we can distinguish transactions based on the bus,device,function
> > > > number combination. This support is available in the PCIe
> > > > controller (host bridge).
> > >
> > > So can x86 IOMMUs, that's the visibility aspect of IOMMU groups.
> > > Visibility alone doesn't necessarily imply that a device is isolated
> > > though. A multifunction PCI device that doesn't expose ACS support
> > > may not isolate functions from each other. For example a
> > > peer-to-peer DMA between functions may not be translated by the
> > > upstream IOMMU. IOMMU groups should encompass both visibility and
> isolation.
> > [Sethi Varun-B16395] We can isolate the DMA access to the host based
> > on the to the pci bus,device,function number.
>
> The IOMMU can only isolate DMA that it can see. A multifunction device
> may never expose peer-to-peer DMA to the upstream device, it's
> implementation specific. The ACS flags allow that possibility to be
> controlled and prevented.
>
> > I thought that was enough to put devices in to separate iommu groups.
> > This is a PCIe controller property which allows us to partition PCIe
> > devices. But, what I can understand from your point is that we also
> > need to consider isolation at PCIe device level as well. I will check
> > for the case of multifunction devices.
> >
> > >
> > > > > I also find it curious what happens to the iommu group of the
> > > > > host bridge. In the partitionable case the host bridge group is
> > > > > removed, in the non-partitionable case the host bridge group
> > > > > becomes the group for the children, removing the host bridge.
> > > > > It's unique to PAMU so far that these host bridges are even in
> > > > > an iommu group (x86 only adds pci devices), but I don't see it
> > > > > as necessarily wrong leaving it in either scenario. Does it
> > > > > solve some problem to remove
> > > them from the groups?
> > > > > Thanks,
> > > > [Sethi Varun-B16395] The PCIe controller isn't a partitionable
> > > > entity, it would always be owned by the host.
> > >
> > > Ownership of a device shouldn't play into the group context. An
> > > IOMMU group should be defined by it's visibility and isolation from
> > > other devices. Whether the PCIe controller is allowed to be handed
> > > to userspace is a question for VFIO.
> > [Sethi Varun-B16395] The problem is in the case, where we can't
> > partition PCIe devices. PCIe devices share the same device group as
> > the PCI controller. This becomes a problem while assigning the devices
> > to the guest, as you are required to unbind all the PCIe devices
> > including the controller from the host. PCIe controller can't be
> > unbound from the host, so we simply delete the controller iommu_group.
>
> Unbinding devices is a VFIO implementation, it shouldn't leak into IOMMU
> groups. Also note that VFIO has a driver white list where we can have
> exceptions to the rule. I recently added pciehp to that list because the
> host driver provides functionality. Being attached to the host driver
> means the device is not accessible to the user through VFIO, but other
> devices in the group are. Thanks,
>
Also, as Stuart pointed out, the PCIe controllers aren't the actual DMA devices (the endpoints are the actual DMA devices). So, we remove the device group allocated for the PCIe controllers.

-Varun


2013-04-11 18:16:51

by Kumar Gala

[permalink] [raw]
Subject: Re: [PATCH 2/5 v11] powerpc: Add iommu domain pointer to device archdata


On Mar 28, 2013, at 2:53 PM, Varun Sethi wrote:

> Add an iommu domain pointer to device (powerpc) archdata. Devices
> are attached to iommu domains and this pointer provides a mechanism
> to correlate between a device and the associated iommu domain. This
> field is set when a device is attached to a domain.
>
> Signed-off-by: Varun Sethi <[email protected]>
> ---
> - no change in v11.
> - no change in v10.
> - Added CONFIG_IOMMU_API in v9.
> arch/powerpc/include/asm/device.h | 6 ++++++
> 1 files changed, 6 insertions(+), 0 deletions(-)

Acked-by: Kumar Gala <[email protected]>

- k

2013-06-20 14:30:27

by Sethi Varun-B16395

[permalink] [raw]
Subject: RE: [PATCH 2/5 v11] powerpc: Add iommu domain pointer to device archdata

Hi Joerg,
My PAMU driver patches depend on this patch, which was Acked by Kumar. Should I resubmit this patch as well?

Regards
Varun

> -----Original Message-----
> From: Kumar Gala [mailto:[email protected]]
> Sent: Thursday, April 11, 2013 11:46 PM
> To: Sethi Varun-B16395
> Cc: [email protected]; Yoder Stuart-B08248; Wood Scott-B07421;
> [email protected]; [email protected]; linux-
> [email protected]; [email protected]
> Subject: Re: [PATCH 2/5 v11] powerpc: Add iommu domain pointer to device
> archdata
>
>
> On Mar 28, 2013, at 2:53 PM, Varun Sethi wrote:
>
> > Add an iommu domain pointer to device (powerpc) archdata. Devices are
> > attached to iommu domains and this pointer provides a mechanism to
> > correlate between a device and the associated iommu domain. This
> > field is set when a device is attached to a domain.
> >
> > Signed-off-by: Varun Sethi <[email protected]>
> > ---
> > - no change in v11.
> > - no change in v10.
> > - Added CONFIG_IOMMU_API in v9.
> > arch/powerpc/include/asm/device.h | 6 ++++++
> > 1 files changed, 6 insertions(+), 0 deletions(-)
>
> Acked-by: Kumar Gala <[email protected]>
>
> - k

2013-06-20 14:41:57

by Joerg Roedel

[permalink] [raw]
Subject: Re: [PATCH 2/5 v11] powerpc: Add iommu domain pointer to device archdata

On Thu, Jun 20, 2013 at 02:29:30PM +0000, Sethi Varun-B16395 wrote:
> Hi Joerg,
> My PAMU driver patches depend on this patch which was Ack by Kumar. Should I resubmit this patch as well?

Yes, please. Add the collected Acked-bys and submit everything that is
missing in v3.10-rc6.


Joerg