2020-09-30 15:39:11

by Kishon Vijay Abraham I

Subject: [PATCH v7 00/18] Implement NTB Controller using multiple PCI EP

This series implements a software-defined Non-Transparent Bridge (NTB)
using multiple PCIe endpoint (EP) instances. The series has been tested
using 2 endpoint instances in J7, connected to a J7 board on one end and a
DRA7 board on the other end. However, there is nothing platform-specific
in the NTB functionality.

This was presented at the Linux Plumbers Conference. Links to the
presentation and video can be found @ [1]

RFC patch series can be found @ [2]
v1 patch series can be found @ [3]
v2 patch series can be found @ [4]
v3 patch series can be found @ [5]
v4 patch series can be found @ [6]
v5 patch series can be found @ [7]
v6 patch series can be found @ [8]

Changes from v6:
1) Fixed issues when multiple NTB devices are created using multiple
functions
2) Fixed issue with writing scratchpad register
3) Created a video demo @ [9]

Changes from v5:
1) Fixed a formatting issue in Kconfig pointed out by Randy
2) Checked for Error or Null in pci_epc_add_epf()

Changes from v4:
1) Fixed error condition checks in pci_epc_add_epf()

Changes from v3:
1) Fixed Documentation edits suggested by Randy Dunlap <[email protected]>

Changes from v2:
1) Add support for the user to create a sub-directory of the 'EPF Device'
directory (for endpoint-function-specific configuration using
configfs).
2) Add documentation for NTB specific attributes in configfs
3) Check for PCI_CLASS_MEMORY_RAM (PCIe class) before binding ntb_hw_epf
driver
4) Other documentation fixes

Changes from v1:
1) As per Rob's comment, removed support for creating NTB function
device from DT
2) Add support to create NTB EPF device using configfs (added support in
configfs to associate primary and secondary EPC with EPF).

Changes from RFC:
1) Converted the DT binding patches to YAML schema and merged the
DT binding patches together
2) NTB documentation is converted to .rst
3) One HOST can now interrupt the other HOST using MSI-X interrupts
4) Added support for teardown of memory window and doorbell
configuration
5) Add support to provide 64-bit memory window size from
DT

[1] -> https://linuxplumbersconf.org/event/4/contributions/395/
[2] -> http://lore.kernel.org/r/[email protected]
[3] -> http://lore.kernel.org/r/[email protected]
[4] -> http://lore.kernel.org/r/[email protected]
[5] -> http://lore.kernel.org/r/[email protected]
[6] -> http://lore.kernel.org/r/[email protected]
[7] -> http://lore.kernel.org/r/[email protected]
[8] -> http://lore.kernel.org/r/[email protected]
[9] -> https://youtu.be/dLKKxrg5-rY

Kishon Vijay Abraham I (18):
Documentation: PCI: Add specification for the *PCI NTB* function
device
PCI: endpoint: Make *_get_first_free_bar() take into account 64 bit
BAR
PCI: endpoint: Add helper API to get the 'next' unreserved BAR
PCI: endpoint: Make *_free_bar() to return error codes on failure
PCI: endpoint: Remove unused pci_epf_match_device()
PCI: endpoint: Add support to associate secondary EPC with EPF
PCI: endpoint: Add support in configfs to associate two EPCs with EPF
PCI: endpoint: Add pci_epc_ops to map MSI irq
PCI: endpoint: Add pci_epf_ops for epf drivers to expose function
specific attrs
PCI: endpoint: Allow user to create sub-directory of 'EPF Device'
directory
PCI: cadence: Implement ->msi_map_irq() ops
PCI: cadence: Configure LM_EP_FUNC_CFG based on epc->function_num_map
PCI: endpoint: Add EP function driver to provide NTB functionality
PCI: Add TI J721E device to pci ids
NTB: Add support for EPF PCI-Express Non-Transparent Bridge
NTB: tool: Enable the NTB/PCIe link on the local or remote side of
bridge
Documentation: PCI: Add configfs binding documentation for pci-ntb
endpoint function
Documentation: PCI: Add userguide for PCI endpoint NTB function

.../PCI/endpoint/function/binding/pci-ntb.rst | 38 +
Documentation/PCI/endpoint/index.rst | 3 +
.../PCI/endpoint/pci-endpoint-cfs.rst | 10 +
.../PCI/endpoint/pci-ntb-function.rst | 351 +++
Documentation/PCI/endpoint/pci-ntb-howto.rst | 160 ++
drivers/misc/pci_endpoint_test.c | 1 -
drivers/ntb/hw/Kconfig | 1 +
drivers/ntb/hw/Makefile | 1 +
drivers/ntb/hw/epf/Kconfig | 6 +
drivers/ntb/hw/epf/Makefile | 1 +
drivers/ntb/hw/epf/ntb_hw_epf.c | 755 ++++++
drivers/ntb/test/ntb_tool.c | 1 +
.../pci/controller/cadence/pcie-cadence-ep.c | 60 +-
drivers/pci/endpoint/functions/Kconfig | 12 +
drivers/pci/endpoint/functions/Makefile | 1 +
drivers/pci/endpoint/functions/pci-epf-ntb.c | 2114 +++++++++++++++++
drivers/pci/endpoint/functions/pci-epf-test.c | 13 +-
drivers/pci/endpoint/pci-ep-cfs.c | 176 +-
drivers/pci/endpoint/pci-epc-core.c | 130 +-
drivers/pci/endpoint/pci-epf-core.c | 105 +-
include/linux/pci-epc.h | 39 +-
include/linux/pci-epf.h | 28 +-
include/linux/pci_ids.h | 1 +
23 files changed, 3934 insertions(+), 73 deletions(-)
create mode 100644 Documentation/PCI/endpoint/function/binding/pci-ntb.rst
create mode 100644 Documentation/PCI/endpoint/pci-ntb-function.rst
create mode 100644 Documentation/PCI/endpoint/pci-ntb-howto.rst
create mode 100644 drivers/ntb/hw/epf/Kconfig
create mode 100644 drivers/ntb/hw/epf/Makefile
create mode 100644 drivers/ntb/hw/epf/ntb_hw_epf.c
create mode 100644 drivers/pci/endpoint/functions/pci-epf-ntb.c

--
2.17.1


2020-09-30 15:39:31

by Kishon Vijay Abraham I

Subject: [PATCH v7 07/18] PCI: endpoint: Add support in configfs to associate two EPCs with EPF

Now that the PCI endpoint core supports associating a secondary endpoint
controller (EPC) with an endpoint function (EPF), add support in configfs
to associate two EPCs with an EPF. This creates "primary" and "secondary"
directories inside the directory created by the user for the EPF device.
Users have to add a symlink of an endpoint controller (from
pci_ep/controllers/) to the "primary" or "secondary" directory to bind
the EPF to the primary or secondary interface respectively. The existing
method of linking the directory representing the EPF device to the
directory representing the EPC device, in order to associate a single EPC
device with an EPF device, will continue to work.
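
For example, a minimal sketch of what a function driver's ->bind() might
check once both symlinks are in place (a sketch under the assumption that
the 'epc' and 'sec_epc' fields added earlier in this series hold the
primary and secondary EPC respectively; sketch_epf_bind() is illustrative):

	static int sketch_epf_bind(struct pci_epf *epf)
	{
		struct device *dev = &epf->dev;

		if (!epf->epc) {
			dev_dbg(dev, "PRIMARY EPC interface not yet bound\n");
			return 0;
		}

		if (!epf->sec_epc) {
			dev_dbg(dev, "SECONDARY EPC interface not yet bound\n");
			return 0;
		}

		/* Both interfaces are bound; configure BARs, MSI, etc. */
		return 0;
	}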

Signed-off-by: Kishon Vijay Abraham I <[email protected]>
---
.../PCI/endpoint/pci-endpoint-cfs.rst | 10 ++
drivers/pci/endpoint/pci-ep-cfs.c | 147 ++++++++++++++++++
2 files changed, 157 insertions(+)

diff --git a/Documentation/PCI/endpoint/pci-endpoint-cfs.rst b/Documentation/PCI/endpoint/pci-endpoint-cfs.rst
index 1bbd81ed06c8..696f8eeb4738 100644
--- a/Documentation/PCI/endpoint/pci-endpoint-cfs.rst
+++ b/Documentation/PCI/endpoint/pci-endpoint-cfs.rst
@@ -68,6 +68,16 @@ created)
... subsys_vendor_id
... subsys_id
... interrupt_pin
+ ... primary/
+ ... <Symlink EPC Device1>/
+ ... secondary/
+ ... <Symlink EPC Device2>/
+
+If an EPF device has to be associated with 2 EPCs (as in the case of a
+Non-Transparent Bridge), the symlink of the endpoint controller connected to
+the primary interface should be added in the 'primary' directory and the
+symlink of the endpoint controller connected to the secondary interface
+should be added in the 'secondary' directory.

EPC Device
==========
diff --git a/drivers/pci/endpoint/pci-ep-cfs.c b/drivers/pci/endpoint/pci-ep-cfs.c
index 6ca9e2f92460..8f750961d6ab 100644
--- a/drivers/pci/endpoint/pci-ep-cfs.c
+++ b/drivers/pci/endpoint/pci-ep-cfs.c
@@ -21,6 +21,9 @@ static struct config_group *controllers_group;

struct pci_epf_group {
struct config_group group;
+ struct config_group primary_epc_group;
+ struct config_group secondary_epc_group;
+ struct delayed_work cfs_work;
struct pci_epf *epf;
int index;
};
@@ -41,6 +44,127 @@ static inline struct pci_epc_group *to_pci_epc_group(struct config_item *item)
return container_of(to_config_group(item), struct pci_epc_group, group);
}

+static int pci_secondary_epc_epf_link(struct config_item *epf_item,
+ struct config_item *epc_item)
+{
+ int ret;
+ struct pci_epf_group *epf_group = to_pci_epf_group(epf_item->ci_parent);
+ struct pci_epc_group *epc_group = to_pci_epc_group(epc_item);
+ struct pci_epc *epc = epc_group->epc;
+ struct pci_epf *epf = epf_group->epf;
+
+ ret = pci_epc_add_epf(epc, epf, SECONDARY_INTERFACE);
+ if (ret)
+ return ret;
+
+ ret = pci_epf_bind(epf);
+ if (ret) {
+ pci_epc_remove_epf(epc, epf, SECONDARY_INTERFACE);
+ return ret;
+ }
+
+ return 0;
+}
+
+static void pci_secondary_epc_epf_unlink(struct config_item *epc_item,
+ struct config_item *epf_item)
+{
+ struct pci_epf_group *epf_group = to_pci_epf_group(epf_item->ci_parent);
+ struct pci_epc_group *epc_group = to_pci_epc_group(epc_item);
+ struct pci_epc *epc;
+ struct pci_epf *epf;
+
+ WARN_ON_ONCE(epc_group->start);
+
+ epc = epc_group->epc;
+ epf = epf_group->epf;
+ pci_epf_unbind(epf);
+ pci_epc_remove_epf(epc, epf, SECONDARY_INTERFACE);
+}
+
+static struct configfs_item_operations pci_secondary_epc_item_ops = {
+ .allow_link = pci_secondary_epc_epf_link,
+ .drop_link = pci_secondary_epc_epf_unlink,
+};
+
+static const struct config_item_type pci_secondary_epc_type = {
+ .ct_item_ops = &pci_secondary_epc_item_ops,
+ .ct_owner = THIS_MODULE,
+};
+
+static struct config_group
+*pci_ep_cfs_add_secondary_group(struct pci_epf_group *epf_group)
+{
+ struct config_group *secondary_epc_group;
+
+ secondary_epc_group = &epf_group->secondary_epc_group;
+ config_group_init_type_name(secondary_epc_group, "secondary",
+ &pci_secondary_epc_type);
+ configfs_register_group(&epf_group->group, secondary_epc_group);
+
+ return secondary_epc_group;
+}
+
+static int pci_primary_epc_epf_link(struct config_item *epf_item,
+ struct config_item *epc_item)
+{
+ int ret;
+ struct pci_epf_group *epf_group = to_pci_epf_group(epf_item->ci_parent);
+ struct pci_epc_group *epc_group = to_pci_epc_group(epc_item);
+ struct pci_epc *epc = epc_group->epc;
+ struct pci_epf *epf = epf_group->epf;
+
+ ret = pci_epc_add_epf(epc, epf, PRIMARY_INTERFACE);
+ if (ret)
+ return ret;
+
+ ret = pci_epf_bind(epf);
+ if (ret) {
+ pci_epc_remove_epf(epc, epf, PRIMARY_INTERFACE);
+ return ret;
+ }
+
+ return 0;
+}
+
+static void pci_primary_epc_epf_unlink(struct config_item *epc_item,
+ struct config_item *epf_item)
+{
+ struct pci_epf_group *epf_group = to_pci_epf_group(epf_item->ci_parent);
+ struct pci_epc_group *epc_group = to_pci_epc_group(epc_item);
+ struct pci_epc *epc;
+ struct pci_epf *epf;
+
+ WARN_ON_ONCE(epc_group->start);
+
+ epc = epc_group->epc;
+ epf = epf_group->epf;
+ pci_epf_unbind(epf);
+ pci_epc_remove_epf(epc, epf, PRIMARY_INTERFACE);
+}
+
+static struct configfs_item_operations pci_primary_epc_item_ops = {
+ .allow_link = pci_primary_epc_epf_link,
+ .drop_link = pci_primary_epc_epf_unlink,
+};
+
+static const struct config_item_type pci_primary_epc_type = {
+ .ct_item_ops = &pci_primary_epc_item_ops,
+ .ct_owner = THIS_MODULE,
+};
+
+static struct config_group
+*pci_ep_cfs_add_primary_group(struct pci_epf_group *epf_group)
+{
+ struct config_group *primary_epc_group = &epf_group->primary_epc_group;
+
+ config_group_init_type_name(primary_epc_group, "primary",
+ &pci_primary_epc_type);
+ configfs_register_group(&epf_group->group, primary_epc_group);
+
+ return primary_epc_group;
+}
+
static ssize_t pci_epc_start_store(struct config_item *item, const char *page,
size_t len)
{
@@ -372,6 +496,25 @@ static const struct config_item_type pci_epf_type = {
.ct_owner = THIS_MODULE,
};

+static void pci_epf_cfs_work(struct work_struct *work)
+{
+ struct pci_epf_group *epf_group;
+ struct config_group *group;
+
+ epf_group = container_of(work, struct pci_epf_group, cfs_work.work);
+ group = pci_ep_cfs_add_primary_group(epf_group);
+ if (IS_ERR(group)) {
+ pr_err("failed to create 'primary' EPC interface\n");
+ return;
+ }
+
+ group = pci_ep_cfs_add_secondary_group(epf_group);
+ if (IS_ERR(group)) {
+ pr_err("failed to create 'secondary' EPC interface\n");
+ return;
+ }
+}
+
static struct config_group *pci_epf_make(struct config_group *group,
const char *name)
{
@@ -414,6 +557,10 @@ static struct config_group *pci_epf_make(struct config_group *group,

kfree(epf_name);

+ INIT_DELAYED_WORK(&epf_group->cfs_work, pci_epf_cfs_work);
+ queue_delayed_work(system_wq, &epf_group->cfs_work,
+ msecs_to_jiffies(1));
+
return &epf_group->group;

free_name:
--
2.17.1

2020-09-30 15:39:38

by Kishon Vijay Abraham I

Subject: [PATCH v7 12/18] PCI: cadence: Configure LM_EP_FUNC_CFG based on epc->function_num_map

The functions enabled in the endpoint controller are configured in
LM_EP_FUNC_CFG based on the func_no member of struct pci_epf. Now that an
endpoint function can be associated with two endpoint controllers (primary
and secondary), just using func_no will not suffice, as that accounts only
for functions whose primary interface is associated with this endpoint
controller. Instead, use epc->function_num_map, which already has the
configured function information, irrespective of whether the endpoint
controller is associated with the primary or secondary interface of the
endpoint function.
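
As an aside, a sketch of how the bitmap is maintained (an assumption based
on pci_epc_add_epf()/pci_epc_remove_epf() from earlier in this series):

	/* in pci_epc_add_epf(), for both PRIMARY and SECONDARY bindings */
	epc->function_num_map |= BIT(epf->func_no);

	/* in pci_epc_remove_epf() */
	epc->function_num_map &= ~BIT(epf->func_no);

With one bit per configured function, the map is exactly the value that
LM_EP_FUNC_CFG expects.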

Signed-off-by: Kishon Vijay Abraham I <[email protected]>
---
drivers/pci/controller/cadence/pcie-cadence-ep.c | 7 +------
1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c
index 5df492a12042..59ce57744345 100644
--- a/drivers/pci/controller/cadence/pcie-cadence-ep.c
+++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c
@@ -507,18 +507,13 @@ static int cdns_pcie_ep_start(struct pci_epc *epc)
struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
struct cdns_pcie *pcie = &ep->pcie;
struct device *dev = pcie->dev;
- struct pci_epf *epf;
- u32 cfg;
int ret;

/*
* BIT(0) is hardwired to 1, hence function 0 is always enabled
* and can't be disabled anyway.
*/
- cfg = BIT(0);
- list_for_each_entry(epf, &epc->pci_epf, list)
- cfg |= BIT(epf->func_no);
- cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, cfg);
+ cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, epc->function_num_map);

ret = cdns_pcie_start_link(pcie);
if (ret) {
--
2.17.1

2020-09-30 15:39:45

by Kishon Vijay Abraham I

Subject: [PATCH v7 08/18] PCI: endpoint: Add pci_epc_ops to map MSI irq

Add a pci_epc_ops callback to map a physical address to the MSI address
and return the MSI data. The physical address is an address in the
outbound region. This is required to implement the doorbell functionality
of NTB (Non-Transparent Bridge), wherein the EPC on either side of the
interface (primary and secondary) can directly write to the physical
address (in the outbound region) of the other interface to ring a
doorbell using MSI.
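
A hedged usage sketch (this mirrors what the NTB function driver later in
this series does in epf_ntb_configure_msi(); sketch_setup_doorbells() is
illustrative): map 'db_count' doorbells backed by outbound memory at
'phys_addr' and retrieve the data the peer must write to ring them.

	static int sketch_setup_doorbells(struct pci_epc *epc, u8 func_no,
					  phys_addr_t phys_addr, u16 db_count,
					  u32 db_entry_size)
	{
		u32 db_data, db_offset;
		int ret;

		ret = pci_epc_map_msi_irq(epc, func_no, phys_addr, db_count,
					  db_entry_size, &db_data, &db_offset);
		if (ret < 0)
			return ret;

		/*
		 * The peer can now ring doorbell 'i' by writing (db_data | i)
		 * at 'db_offset' within its outbound doorbell region.
		 */
		return 0;
	}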

Signed-off-by: Kishon Vijay Abraham I <[email protected]>
---
drivers/pci/endpoint/pci-epc-core.c | 41 +++++++++++++++++++++++++++++
include/linux/pci-epc.h | 8 ++++++
2 files changed, 49 insertions(+)

diff --git a/drivers/pci/endpoint/pci-epc-core.c b/drivers/pci/endpoint/pci-epc-core.c
index 3693eca5b030..cc8f9eb2b177 100644
--- a/drivers/pci/endpoint/pci-epc-core.c
+++ b/drivers/pci/endpoint/pci-epc-core.c
@@ -230,6 +230,47 @@ int pci_epc_raise_irq(struct pci_epc *epc, u8 func_no,
}
EXPORT_SYMBOL_GPL(pci_epc_raise_irq);

+/**
+ * pci_epc_map_msi_irq() - Map physical address to MSI address and return
+ * MSI data
+ * @epc: the EPC device which has the MSI capability
+ * @func_no: the physical endpoint function number in the EPC device
+ * @phys_addr: the physical address of the outbound region
+ * @interrupt_num: the MSI interrupt number
+ * @entry_size: Size of Outbound address region for each interrupt
+ * @msi_data: the data that should be written in order to raise MSI interrupt
+ * with interrupt number as 'interrupt_num'
+ * @msi_addr_offset: Offset of MSI address from the aligned outbound address
+ * to which the MSI address is mapped
+ *
+ * Invoke to map physical address to MSI address and return MSI data. The
+ * physical address should be an address in the outbound region. This is
+ * required to implement doorbell functionality of NTB wherein EPC on either
+ * side of the interface (primary and secondary) can directly write to the
+ * physical address (in outbound region) of the other interface to ring
+ * doorbell.
+ */
+int pci_epc_map_msi_irq(struct pci_epc *epc, u8 func_no, phys_addr_t phys_addr,
+ u8 interrupt_num, u32 entry_size, u32 *msi_data,
+ u32 *msi_addr_offset)
+{
+ int ret;
+
+ if (IS_ERR_OR_NULL(epc))
+ return -EINVAL;
+
+ if (!epc->ops->map_msi_irq)
+ return -EINVAL;
+
+ mutex_lock(&epc->lock);
+ ret = epc->ops->map_msi_irq(epc, func_no, phys_addr, interrupt_num,
+ entry_size, msi_data, msi_addr_offset);
+ mutex_unlock(&epc->lock);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(pci_epc_map_msi_irq);
+
/**
* pci_epc_get_msi() - get the number of MSI interrupt numbers allocated
* @epc: the EPC device to which MSI interrupts was requested
diff --git a/include/linux/pci-epc.h b/include/linux/pci-epc.h
index d9cb3944fb87..b82c9b100e97 100644
--- a/include/linux/pci-epc.h
+++ b/include/linux/pci-epc.h
@@ -55,6 +55,7 @@ pci_epc_interface_string(enum pci_epc_interface_type type)
* @get_msix: ops to get the number of MSI-X interrupts allocated by the RC
* from the MSI-X capability register
* @raise_irq: ops to raise a legacy, MSI or MSI-X interrupt
+ * @map_msi_irq: ops to map physical address to MSI address and return MSI data
* @start: ops to start the PCI link
* @stop: ops to stop the PCI link
* @owner: the module owner containing the ops
@@ -77,6 +78,10 @@ struct pci_epc_ops {
int (*get_msix)(struct pci_epc *epc, u8 func_no);
int (*raise_irq)(struct pci_epc *epc, u8 func_no,
enum pci_epc_irq_type type, u16 interrupt_num);
+ int (*map_msi_irq)(struct pci_epc *epc, u8 func_no,
+ phys_addr_t phys_addr, u8 interrupt_num,
+ u32 entry_size, u32 *msi_data,
+ u32 *msi_addr_offset);
int (*start)(struct pci_epc *epc);
void (*stop)(struct pci_epc *epc);
const struct pci_epc_features* (*get_features)(struct pci_epc *epc,
@@ -216,6 +221,9 @@ int pci_epc_get_msi(struct pci_epc *epc, u8 func_no);
int pci_epc_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts,
enum pci_barno, u32 offset);
int pci_epc_get_msix(struct pci_epc *epc, u8 func_no);
+int pci_epc_map_msi_irq(struct pci_epc *epc, u8 func_no,
+ phys_addr_t phys_addr, u8 interrupt_num,
+ u32 entry_size, u32 *msi_data, u32 *msi_addr_offset);
int pci_epc_raise_irq(struct pci_epc *epc, u8 func_no,
enum pci_epc_irq_type type, u16 interrupt_num);
int pci_epc_start(struct pci_epc *epc);
--
2.17.1

2020-09-30 15:39:48

by Kishon Vijay Abraham I

Subject: [PATCH v7 09/18] PCI: endpoint: Add pci_epf_ops for epf drivers to expose function specific attrs

In addition to the attributes that are generic across function drivers,
documented in Documentation/PCI/endpoint/pci-endpoint-cfs.rst, there
could be function-specific attributes that have to be exposed by the
function driver to be configured by the user. Add ->add_cfs()
to pci_epf_ops, to be populated by the function driver if it has to
expose any function-specific attributes, and pci_epf_type_add_cfs(), to
be invoked by pci-ep-cfs.c when a sub-directory of the main function
directory is created.
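
For example, a minimal sketch of a function driver populating ->add_cfs()
(the NTB function driver later in this series follows this pattern;
'struct my_epf' and 'my_group_type' are illustrative):

	static struct config_group *my_epf_add_cfs(struct pci_epf *epf,
						   struct config_group *group)
	{
		struct my_epf *priv = epf_get_drvdata(epf);

		/*
		 * The returned group shows up as a sub-directory of the
		 * 'EPF Device' directory, exposing driver-private attributes.
		 */
		config_group_init_type_name(&priv->group, dev_name(&epf->dev),
					    &my_group_type);

		return &priv->group;
	}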

Signed-off-by: Kishon Vijay Abraham I <[email protected]>
---
drivers/pci/endpoint/pci-epf-core.c | 32 +++++++++++++++++++++++++++++
include/linux/pci-epf.h | 5 +++++
2 files changed, 37 insertions(+)

diff --git a/drivers/pci/endpoint/pci-epf-core.c b/drivers/pci/endpoint/pci-epf-core.c
index 79329ec6373c..7646c8660d42 100644
--- a/drivers/pci/endpoint/pci-epf-core.c
+++ b/drivers/pci/endpoint/pci-epf-core.c
@@ -20,6 +20,38 @@ static DEFINE_MUTEX(pci_epf_mutex);
static struct bus_type pci_epf_bus_type;
static const struct device_type pci_epf_type;

+/**
+ * pci_epf_type_add_cfs() - Help function drivers to expose function specific
+ * attributes in configfs
+ * @epf: the EPF device that has to be configured using configfs
+ * @group: the parent configfs group (corresponding to entries in
+ * pci_epf_device_id)
+ *
+ * Invoke to expose function specific attributes in configfs. If the function
+ * driver does not have anything to expose (attributes configured by user),
+ * return NULL.
+ */
+struct config_group *pci_epf_type_add_cfs(struct pci_epf *epf,
+ struct config_group *group)
+{
+ struct config_group *epf_type_group;
+
+ if (!epf->driver) {
+ dev_err(&epf->dev, "epf device not bound to driver\n");
+ return NULL;
+ }
+
+ if (!epf->driver->ops->add_cfs)
+ return NULL;
+
+ mutex_lock(&epf->lock);
+ epf_type_group = epf->driver->ops->add_cfs(epf, group);
+ mutex_unlock(&epf->lock);
+
+ return epf_type_group;
+}
+EXPORT_SYMBOL_GPL(pci_epf_type_add_cfs);
+
/**
* pci_epf_unbind() - Notify the function driver that the binding between the
* EPF device and EPC device has been lost
diff --git a/include/linux/pci-epf.h b/include/linux/pci-epf.h
index 1dc66824f5a8..b241e7dd171f 100644
--- a/include/linux/pci-epf.h
+++ b/include/linux/pci-epf.h
@@ -62,10 +62,13 @@ struct pci_epf_header {
* @bind: ops to perform when a EPC device has been bound to EPF device
* @unbind: ops to perform when a binding has been lost between a EPC device
* and EPF device
+ * @add_cfs: ops to initialize function specific configfs attributes
*/
struct pci_epf_ops {
int (*bind)(struct pci_epf *epf);
void (*unbind)(struct pci_epf *epf);
+ struct config_group *(*add_cfs)(struct pci_epf *epf,
+ struct config_group *group);
};

/**
@@ -188,4 +191,6 @@ void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar,
enum pci_epc_interface_type type);
int pci_epf_bind(struct pci_epf *epf);
void pci_epf_unbind(struct pci_epf *epf);
+struct config_group *pci_epf_type_add_cfs(struct pci_epf *epf,
+ struct config_group *group);
#endif /* __LINUX_PCI_EPF_H */
--
2.17.1

2020-09-30 15:39:56

by Kishon Vijay Abraham I

Subject: [PATCH v7 14/18] PCI: Add TI J721E device to pci ids

Add the TI J721E device to the PCI ID database. Since this device has
a configurable PCIe endpoint, it could be used with different
drivers.
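
For example, a host-side driver can now match the device like this
(a sketch; 'sketch_tbl' is illustrative):

	static const struct pci_device_id sketch_tbl[] = {
		{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_J721E) },
		{ }
	};
	MODULE_DEVICE_TABLE(pci, sketch_tbl);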

Signed-off-by: Kishon Vijay Abraham I <[email protected]>
---
drivers/misc/pci_endpoint_test.c | 1 -
include/linux/pci_ids.h | 1 +
2 files changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/misc/pci_endpoint_test.c b/drivers/misc/pci_endpoint_test.c
index e060796f9caa..03fade34aeac 100644
--- a/drivers/misc/pci_endpoint_test.c
+++ b/drivers/misc/pci_endpoint_test.c
@@ -68,7 +68,6 @@
#define PCI_ENDPOINT_TEST_FLAGS 0x2c
#define FLAG_USE_DMA BIT(0)

-#define PCI_DEVICE_ID_TI_J721E 0xb00d
#define PCI_DEVICE_ID_TI_AM654 0xb00c

#define is_am654_pci_dev(pdev) \
diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
index 1ab1e24bcbce..6ddeb64049b5 100644
--- a/include/linux/pci_ids.h
+++ b/include/linux/pci_ids.h
@@ -880,6 +880,7 @@
#define PCI_DEVICE_ID_TI_X620 0xac8d
#define PCI_DEVICE_ID_TI_X420 0xac8e
#define PCI_DEVICE_ID_TI_XX20_FM 0xac8f
+#define PCI_DEVICE_ID_TI_J721E 0xb00d
#define PCI_DEVICE_ID_TI_DRA74x 0xb500
#define PCI_DEVICE_ID_TI_DRA72x 0xb501

--
2.17.1

2020-09-30 15:40:05

by Kishon Vijay Abraham I

Subject: [PATCH v7 13/18] PCI: endpoint: Add EP function driver to provide NTB functionality

Add a new endpoint function driver to provide NTB functionality
using multiple PCIe endpoint instances.
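
The two hosts drive the endpoint through a command/status protocol in the
control (config) region defined by struct epf_ntb_ctrl below. A hedged
sketch of the host-side half (the actual host driver, ntb_hw_epf, is added
later in this series; sketch_send_command() and its offsets are
illustrative):

	static int sketch_send_command(void __iomem *ctrl, u32 command,
				       u32 argument)
	{
		int timeout = 50;
		u16 status;

		/* Write the argument first; the command write triggers the
		 * endpoint's cmd_handler work function (polled every 5ms). */
		writel(argument, ctrl + offsetof(struct epf_ntb_ctrl, argument));
		writel(command, ctrl + offsetof(struct epf_ntb_ctrl, command));

		while (timeout--) {
			status = readw(ctrl + offsetof(struct epf_ntb_ctrl,
						       command_status));
			if (status == COMMAND_STATUS_OK)
				return 0;
			if (status == COMMAND_STATUS_ERROR)
				return -EINVAL;
			usleep_range(100, 200);
		}

		return -ETIMEDOUT;
	}

	/* e.g. configure memory window 0, then request link up */
	sketch_send_command(ctrl, COMMAND_CONFIGURE_MW, 0);
	sketch_send_command(ctrl, COMMAND_LINK_UP, 0);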

Signed-off-by: Kishon Vijay Abraham I <[email protected]>
---
drivers/pci/endpoint/functions/Kconfig | 12 +
drivers/pci/endpoint/functions/Makefile | 1 +
drivers/pci/endpoint/functions/pci-epf-ntb.c | 2114 ++++++++++++++++++
3 files changed, 2127 insertions(+)
create mode 100644 drivers/pci/endpoint/functions/pci-epf-ntb.c

diff --git a/drivers/pci/endpoint/functions/Kconfig b/drivers/pci/endpoint/functions/Kconfig
index 8820d0f7ec77..24bfb2af65a1 100644
--- a/drivers/pci/endpoint/functions/Kconfig
+++ b/drivers/pci/endpoint/functions/Kconfig
@@ -12,3 +12,15 @@ config PCI_EPF_TEST
for PCI Endpoint.

If in doubt, say "N" to disable Endpoint test driver.
+
+config PCI_EPF_NTB
+ tristate "PCI Endpoint NTB driver"
+ depends on PCI_ENDPOINT
+ help
+ Select this configuration option to enable the NTB driver
+ for PCI Endpoint. NTB driver implements NTB controller
+ functionality using multiple PCIe endpoint instances. It
+ can support NTB endpoint function devices created using
+ configfs.
+
+ If in doubt, say "N" to disable Endpoint NTB driver.
diff --git a/drivers/pci/endpoint/functions/Makefile b/drivers/pci/endpoint/functions/Makefile
index d6fafff080e2..96ab932a537a 100644
--- a/drivers/pci/endpoint/functions/Makefile
+++ b/drivers/pci/endpoint/functions/Makefile
@@ -4,3 +4,4 @@
#

obj-$(CONFIG_PCI_EPF_TEST) += pci-epf-test.o
+obj-$(CONFIG_PCI_EPF_NTB) += pci-epf-ntb.o
diff --git a/drivers/pci/endpoint/functions/pci-epf-ntb.c b/drivers/pci/endpoint/functions/pci-epf-ntb.c
new file mode 100644
index 000000000000..e2dc5cae5c81
--- /dev/null
+++ b/drivers/pci/endpoint/functions/pci-epf-ntb.c
@@ -0,0 +1,2114 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Endpoint Function Driver to implement Non-Transparent Bridge functionality
+ *
+ * Copyright (C) 2020 Texas Instruments
+ * Author: Kishon Vijay Abraham I <[email protected]>
+ */
+
+/*
+ * The PCI NTB function driver configures the SoC with multiple PCIe Endpoint
+ * (EP) controller instances (see diagram below) in such a way that
+ * transactions from one EP controller are routed to the other EP controller.
+ * Once the PCI NTB function driver configures the SoC with multiple EP
+ * instances, HOST1 and HOST2 can communicate with each other using the SoC
+ * as a bridge.
+ *
+ * +-------------+ +-------------+
+ * | | | |
+ * | HOST1 | | HOST2 |
+ * | | | |
+ * +------^------+ +------^------+
+ * | |
+ * | |
+ *+---------|-------------------------------------------------|---------+
+ *| +------v------+ +------v------+ |
+ *| | | | | |
+ *| | EP | | EP | |
+ *| | CONTROLLER1 | | CONTROLLER2 | |
+ *| | <-----------------------------------> | |
+ *| | | | | |
+ *| | | | | |
+ *| | | SoC With Multiple EP Instances | | |
+ *| | | (Configured using NTB Function) | | |
+ *| +-------------+ +-------------+ |
+ *+---------------------------------------------------------------------+
+ */
+
+#include <linux/delay.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+
+#include <linux/pci-epc.h>
+#include <linux/pci-epf.h>
+
+static struct workqueue_struct *kpcintb_workqueue;
+
+#define COMMAND_CONFIGURE_DOORBELL 1
+#define COMMAND_TEARDOWN_DOORBELL 2
+#define COMMAND_CONFIGURE_MW 3
+#define COMMAND_TEARDOWN_MW 4
+#define COMMAND_LINK_UP 5
+#define COMMAND_LINK_DOWN 6
+
+#define COMMAND_STATUS_OK 1
+#define COMMAND_STATUS_ERROR 2
+
+#define LINK_STATUS_UP BIT(0)
+
+#define SPAD_COUNT 64
+#define DB_COUNT 4
+#define NTB_MW_OFFSET 2
+#define DB_COUNT_MASK GENMASK(15, 0)
+#define MSIX_ENABLE BIT(16)
+#define MAX_DB_COUNT 32
+#define MAX_MW 4
+
+enum epf_ntb_bar {
+ BAR_CONFIG,
+ BAR_PEER_SPAD,
+ BAR_DB_MW1,
+ BAR_MW2,
+ BAR_MW3,
+ BAR_MW4,
+};
+
+struct epf_ntb {
+ u32 num_mws;
+ u32 db_count;
+ u32 spad_count;
+ struct pci_epf *epf;
+ u64 mws_size[MAX_MW];
+ struct config_group group;
+ struct epf_ntb_epc *epc[2];
+};
+
+#define to_epf_ntb(epf_group) container_of((epf_group), struct epf_ntb, group)
+
+struct epf_ntb_epc {
+ u8 func_no;
+ bool linkup;
+ bool is_msix;
+ int msix_bar;
+ u32 spad_size;
+ struct pci_epc *epc;
+ struct epf_ntb *epf_ntb;
+ void __iomem *mw_addr[6];
+ size_t msix_table_offset;
+ struct epf_ntb_ctrl *reg;
+ struct pci_epf_bar *epf_bar;
+ enum pci_barno epf_ntb_bar[6];
+ struct delayed_work cmd_handler;
+ enum pci_epc_interface_type type;
+ const struct pci_epc_features *epc_features;
+};
+
+struct epf_ntb_ctrl {
+ u32 command;
+ u32 argument;
+ u16 command_status;
+ u16 link_status;
+ u32 topology;
+ u64 addr;
+ u64 size;
+ u32 num_mws;
+ u32 mw1_offset;
+ u32 spad_offset;
+ u32 spad_count;
+ u32 db_entry_size;
+ u32 db_data[MAX_DB_COUNT];
+ u32 db_offset[MAX_DB_COUNT];
+} __packed;
+
+static struct pci_epf_header epf_ntb_header = {
+ .vendorid = PCI_ANY_ID,
+ .deviceid = PCI_ANY_ID,
+ .baseclass_code = PCI_BASE_CLASS_MEMORY,
+ .interrupt_pin = PCI_INTERRUPT_INTA,
+};
+
+/**
+ * epf_ntb_link_up() - Raise link_up interrupt to both the hosts
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @link_up: true or false indicating Link is UP or Down
+ *
+ * Once the NTB function in HOST1 and the NTB function in HOST2 invoke
+ * ntb_link_enable(), this NTB function driver will trigger a link event to
+ * the NTB client in both hosts.
+ */
+static int epf_ntb_link_up(struct epf_ntb *ntb, bool link_up)
+{
+ enum pci_epc_interface_type type;
+ enum pci_epc_irq_type irq_type;
+ struct epf_ntb_epc *ntb_epc;
+ struct epf_ntb_ctrl *ctrl;
+ bool is_msix;
+ u8 func_no;
+ int ret;
+
+ for (type = PRIMARY_INTERFACE; type <= SECONDARY_INTERFACE; type++) {
+ ntb_epc = ntb->epc[type];
+ func_no = ntb_epc->func_no;
+ is_msix = ntb_epc->is_msix;
+ ctrl = ntb_epc->reg;
+ if (link_up)
+ ctrl->link_status |= LINK_STATUS_UP;
+ else
+ ctrl->link_status &= ~LINK_STATUS_UP;
+ irq_type = is_msix ? PCI_EPC_IRQ_MSIX : PCI_EPC_IRQ_MSI;
+ ret = pci_epc_raise_irq(ntb_epc->epc, func_no, irq_type,
+ 1);
+ if (ret < 0) {
+ WARN(1, "%s intf: Failed to raise Link Up IRQ\n",
+ pci_epc_interface_string(type));
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+/**
+ * epf_ntb_configure_mw() - Configure the Outbound Address Space for one host
+ * to access the memory window of the other host
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ * @mw: Index of the memory window (either 0, 1, 2 or 3)
+ *
+ *+-----------------+ +----->+----------------+-----------+-----------------+
+ *| BAR0 | | | Doorbell 1 +-----------> MSI|X ADDRESS 1 |
+ *+-----------------+ | +----------------+ +-----------------+
+ *| BAR1 | | | Doorbell 2 +---------+ | |
+ *+-----------------+----+ +----------------+ | | |
+ *| BAR2 | | Doorbell 3 +-------+ | +-----------------+
+ *+-----------------+----+ +----------------+ | +-> MSI|X ADDRESS 2 |
+ *| BAR3 | | | Doorbell 4 +-----+ | +-----------------+
+ *+-----------------+ | |----------------+ | | | |
+ *| BAR4 | | | | | | +-----------------+
+ *+-----------------+ | | MW1 +---+ | +-->+ MSI|X ADDRESS 3||
+ *| BAR5 | | | | | | +-----------------+
+ *+-----------------+ +----->-----------------+ | | | |
+ * EP CONTROLLER 1 | | | | +-----------------+
+ * | | | +---->+ MSI|X ADDRESS 4 |
+ * +----------------+ | +-----------------+
+ * (A) EP CONTROLLER 2 | | |
+ * (OB SPACE) | | |
+ * +-------> MW1 |
+ * | |
+ * | |
+ * (B) +-----------------+
+ * | |
+ * | |
+ * | |
+ * | |
+ * | |
+ * +-----------------+
+ * PCI Address Space
+ * (Managed by HOST2)
+ *
+ * This function performs stage (B) in the above diagram (see MW1), i.e., it
+ * maps the OB address space of the memory window to the PCI address space.
+ *
+ * This operation requires 3 parameters
+ * 1) Address in the outbound address space
+ * 2) Address in the PCIe Address space
+ * 3) Size of the address region that is requested to be mapped
+ *
+ * The address in the outbound address space (for MW1, MW2, MW3 and MW4) is
+ * stored in the epf_bar corresponding to BAR_DB_MW1 for MW1, and to BAR_MW2,
+ * BAR_MW3 and BAR_MW4 for the remaining windows, of the epf_ntb_epc that is
+ * connected to HOST1. This is populated in epf_ntb_alloc_peer_mem() in this
+ * driver.
+ *
+ * The address and size of the PCIe address region that has to be mapped would
+ * be provided by HOST2 in ctrl->addr and ctrl->size of epf_ntb_epc that is
+ * connected to HOST2.
+ *
+ * Please note Memory Window 1 (MW1) and the Doorbell registers together will
+ * be mapped to a single BAR (BAR2) above for 32-bit BARs. The exact BAR that's
+ * used for a Memory Window (MW) can be obtained from epf_ntb_bar[BAR_DB_MW1],
+ * epf_ntb_bar[BAR_MW2], epf_ntb_bar[BAR_MW3] and epf_ntb_bar[BAR_MW4].
+ */
+static int
+epf_ntb_configure_mw(struct epf_ntb *ntb, enum pci_epc_interface_type type,
+ u32 mw)
+{
+ struct epf_ntb_epc *peer_ntb_epc, *ntb_epc;
+ struct pci_epf_bar *peer_epf_bar;
+ enum pci_barno peer_barno;
+ struct epf_ntb_ctrl *ctrl;
+ phys_addr_t phys_addr;
+ struct pci_epc *epc;
+ u64 addr, size;
+ int ret = 0;
+ u8 func_no;
+
+ ntb_epc = ntb->epc[type];
+ epc = ntb_epc->epc;
+
+ peer_ntb_epc = ntb->epc[!type];
+ peer_barno = peer_ntb_epc->epf_ntb_bar[mw + NTB_MW_OFFSET];
+ peer_epf_bar = &peer_ntb_epc->epf_bar[peer_barno];
+
+ phys_addr = peer_epf_bar->phys_addr;
+ ctrl = ntb_epc->reg;
+ addr = ctrl->addr;
+ size = ctrl->size;
+ if (mw + NTB_MW_OFFSET == BAR_DB_MW1)
+ phys_addr += ctrl->mw1_offset;
+
+ if (size > ntb->mws_size[mw]) {
+ WARN(1, "%s intf: MW: %d Req Sz:%llx > Supported Sz:%llx\n",
+ pci_epc_interface_string(type), mw, size,
+ ntb->mws_size[mw]);
+ ret = -EINVAL;
+ goto err_invalid_size;
+ }
+
+ func_no = ntb_epc->func_no;
+
+ ret = pci_epc_map_addr(epc, func_no, phys_addr, addr, size);
+ WARN(ret < 0, "%s intf: Failed to map memory window %d address\n",
+ pci_epc_interface_string(type), mw);
+
+err_invalid_size:
+
+ return ret;
+}
+
+/**
+ * epf_ntb_teardown_mw() - Teardown the configured OB ATU
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ * @mw: Index of the memory window (either 0, 1, 2 or 3)
+ *
+ * Teardown the configured OB ATU configured in epf_ntb_configure_mw() using
+ * pci_epc_unmap_addr()
+ */
+static void
+epf_ntb_teardown_mw(struct epf_ntb *ntb, enum pci_epc_interface_type type,
+ u32 mw)
+{
+ struct epf_ntb_epc *peer_ntb_epc, *ntb_epc;
+ struct pci_epf_bar *peer_epf_bar;
+ enum pci_barno peer_barno;
+ struct epf_ntb_ctrl *ctrl;
+ phys_addr_t phys_addr;
+ struct pci_epc *epc;
+ u8 func_no;
+
+ ntb_epc = ntb->epc[type];
+ epc = ntb_epc->epc;
+
+ peer_ntb_epc = ntb->epc[!type];
+ peer_barno = peer_ntb_epc->epf_ntb_bar[mw + NTB_MW_OFFSET];
+ peer_epf_bar = &peer_ntb_epc->epf_bar[peer_barno];
+
+ phys_addr = peer_epf_bar->phys_addr;
+ ctrl = ntb_epc->reg;
+ if (mw + NTB_MW_OFFSET == BAR_DB_MW1)
+ phys_addr += ctrl->mw1_offset;
+ func_no = ntb_epc->func_no;
+
+ pci_epc_unmap_addr(epc, func_no, phys_addr);
+}
+
+/**
+ * epf_ntb_configure_msi() - Map OB address space to MSI address
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ * @db_count: Number of doorbell interrupts to map
+ *
+ *+-----------------+ +----->+----------------+-----------+-----------------+
+ *| BAR0 | | | Doorbell 1 +---+-------> MSI ADDRESS |
+ *+-----------------+ | +----------------+ | +-----------------+
+ *| BAR1 | | | Doorbell 2 +---+ | |
+ *+-----------------+----+ +----------------+ | | |
+ *| BAR2 | | Doorbell 3 +---+ | |
+ *+-----------------+----+ +----------------+ | | |
+ *| BAR3 | | | Doorbell 4 +---+ | |
+ *+-----------------+ | |----------------+ | |
+ *| BAR4 | | | | | |
+ *+-----------------+ | | MW1 | | |
+ *| BAR5 | | | | | |
+ *+-----------------+ +----->-----------------+ | |
+ * EP CONTROLLER 1 | | | |
+ * | | | |
+ * +----------------+ +-----------------+
+ * (A) EP CONTROLLER 2 | |
+ * (OB SPACE) | |
+ * | MW1 |
+ * | |
+ * | |
+ * (B) +-----------------+
+ * | |
+ * | |
+ * | |
+ * | |
+ * | |
+ * +-----------------+
+ * PCI Address Space
+ * (Managed by HOST2)
+ *
+ *
+ * This function performs stage (B) in the above diagram (see Doorbell 1,
+ * Doorbell 2, Doorbell 3, Doorbell 4), i.e., it maps the OB address space
+ * corresponding to a doorbell to the MSI address in the PCI address space.
+ *
+ * This operation requires 3 parameters
+ * 1) Address reserved for doorbell in the outbound address space
+ * 2) MSI address in the PCIe Address space
+ * 3) Number of MSI interrupts that have to be configured
+ *
+ * The address in the outbound address space (for the Doorbell) is stored in
+ * epf_bar corresponding to BAR_DB_MW1 of epf_ntb_epc that is connected to
+ * HOST1. This is populated in epf_ntb_alloc_peer_mem() in this driver along
+ * with address for MW1.
+ *
+ * pci_epc_map_msi_irq() takes the MSI address from MSI capability register
+ * and maps the OB address (obtained in epf_ntb_alloc_peer_mem()) to the MSI
+ * address.
+ *
+ * epf_ntb_configure_msi() also stores the MSI data to raise each interrupt
+ * in db_data of the peer's control region. This helps the peer to raise
+ * doorbell of the other host by writing db_data to the BAR corresponding to
+ * BAR_DB_MW1.
+ */
+static int
+epf_ntb_configure_msi(struct epf_ntb *ntb, enum pci_epc_interface_type type,
+ u16 db_count)
+{
+ struct epf_ntb_epc *peer_ntb_epc, *ntb_epc;
+ u32 db_entry_size, db_data, db_offset;
+ struct pci_epf_bar *peer_epf_bar;
+ struct epf_ntb_ctrl *peer_ctrl;
+ enum pci_barno peer_barno;
+ phys_addr_t phys_addr;
+ struct pci_epc *epc;
+ u8 func_no;
+ int ret, i;
+
+ ntb_epc = ntb->epc[type];
+ epc = ntb_epc->epc;
+
+ peer_ntb_epc = ntb->epc[!type];
+ peer_barno = peer_ntb_epc->epf_ntb_bar[BAR_DB_MW1];
+ peer_epf_bar = &peer_ntb_epc->epf_bar[peer_barno];
+ peer_ctrl = peer_ntb_epc->reg;
+ db_entry_size = peer_ctrl->db_entry_size;
+
+ phys_addr = peer_epf_bar->phys_addr;
+ func_no = ntb_epc->func_no;
+
+ ret = pci_epc_map_msi_irq(epc, func_no, phys_addr, db_count,
+ db_entry_size, &db_data, &db_offset);
+ if (ret < 0) {
+ WARN(1, "%s intf: Failed to map MSI IRQ\n",
+ pci_epc_interface_string(type));
+ return ret;
+ }
+
+ for (i = 0; i < db_count; i++) {
+ peer_ctrl->db_data[i] = db_data | i;
+ peer_ctrl->db_offset[i] = db_offset;
+ }
+
+ return 0;
+}
+
+/**
+ * epf_ntb_configure_msix() - Map OB address space to MSI-X address
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ * @db_count: Number of doorbell interrupts to map
+ *
+ *+-----------------+ +----->+----------------+-----------+-----------------+
+ *| BAR0 | | | Doorbell 1 +-----------> MSI-X ADDRESS 1 |
+ *+-----------------+ | +----------------+ +-----------------+
+ *| BAR1 | | | Doorbell 2 +---------+ | |
+ *+-----------------+----+ +----------------+ | | |
+ *| BAR2 | | Doorbell 3 +-------+ | +-----------------+
+ *+-----------------+----+ +----------------+ | +-> MSI-X ADDRESS 2 |
+ *| BAR3 | | | Doorbell 4 +-----+ | +-----------------+
+ *+-----------------+ | |----------------+ | | | |
+ *| BAR4 | | | | | | +-----------------+
+ *+-----------------+ | | MW1 + | +-->+ MSI-X ADDRESS 3||
+ *| BAR5 | | | | | +-----------------+
+ *+-----------------+ +----->-----------------+ | | |
+ * EP CONTROLLER 1 | | | +-----------------+
+ * | | +---->+ MSI-X ADDRESS 4 |
+ * +----------------+ +-----------------+
+ * (A) EP CONTROLLER 2 | |
+ * (OB SPACE) | |
+ * | MW1 |
+ * | |
+ * | |
+ * (B) +-----------------+
+ * | |
+ * | |
+ * | |
+ * | |
+ * | |
+ * +-----------------+
+ * PCI Address Space
+ * (Managed by HOST2)
+ *
+ * This function performs stage (B) in the above diagram (see Doorbell 1,
+ * Doorbell 2, Doorbell 3, Doorbell 4), i.e., it maps the OB address space
+ * corresponding to a doorbell to the MSI-X address in the PCI address space.
+ *
+ * This operation requires 3 parameters
+ * 1) Address reserved for doorbell in the outbound address space
+ * 2) MSI-X address in the PCIe Address space
+ * 3) Number of MSI-X interrupts that have to be configured
+ *
+ * The address in the outbound address space (for the Doorbell) is stored in
+ * epf_bar corresponding to BAR_DB_MW1 of epf_ntb_epc that is connected to
+ * HOST1. This is populated in epf_ntb_alloc_peer_mem() in this driver along
+ * with address for MW1.
+ * The MSI-X address is in the MSI-X table of EP CONTROLLER 2, and the
+ * doorbell count is in ctrl->argument of the epf_ntb_epc that is connected
+ * to HOST2. The MSI-X table is memory-mapped at ntb_epc->msix_bar with the
+ * offset in ntb_epc->msix_table_offset. From this, epf_ntb_configure_msix()
+ * gets the MSI-X address and MSI-X data.
+ *
+ * epf_ntb_configure_msix() also stores the MSI-X data to raise each interrupt
+ * in db_data of the peer's control region. This helps the peer to raise
+ * doorbell of the other host by writing db_data to the BAR corresponding to
+ * BAR_DB_MW1.
+ */
+static int epf_ntb_configure_msix(struct epf_ntb *ntb,
+ enum pci_epc_interface_type type,
+ u16 db_count)
+{
+ const struct pci_epc_features *epc_features;
+ struct epf_ntb_epc *peer_ntb_epc, *ntb_epc;
+ struct pci_epf_bar *peer_epf_bar, *epf_bar;
+ struct pci_epf_msix_tbl *msix_tbl;
+ struct epf_ntb_ctrl *peer_ctrl;
+ u32 db_entry_size, msg_data;
+ enum pci_barno peer_barno;
+ phys_addr_t phys_addr;
+ struct pci_epc *epc;
+ size_t align;
+ u64 msg_addr;
+ u8 func_no;
+ int ret, i;
+
+ ntb_epc = ntb->epc[type];
+ epc = ntb_epc->epc;
+
+ epf_bar = &ntb_epc->epf_bar[ntb_epc->msix_bar];
+ msix_tbl = epf_bar->addr + ntb_epc->msix_table_offset;
+
+ peer_ntb_epc = ntb->epc[!type];
+ peer_barno = peer_ntb_epc->epf_ntb_bar[BAR_DB_MW1];
+ peer_epf_bar = &peer_ntb_epc->epf_bar[peer_barno];
+ phys_addr = peer_epf_bar->phys_addr;
+ peer_ctrl = peer_ntb_epc->reg;
+ epc_features = ntb_epc->epc_features;
+ align = epc_features->align;
+
+ func_no = ntb_epc->func_no;
+ db_entry_size = peer_ctrl->db_entry_size;
+
+ for (i = 0; i < db_count; i++) {
+ msg_addr = ALIGN_DOWN(msix_tbl[i].msg_addr, align);
+ msg_data = msix_tbl[i].msg_data;
+ ret = pci_epc_map_addr(epc, func_no, phys_addr, msg_addr,
+ db_entry_size);
+ if (ret)
+ return ret;
+ phys_addr = phys_addr + db_entry_size;
+ peer_ctrl->db_data[i] = msg_data;
+ peer_ctrl->db_offset[i] = msix_tbl[i].msg_addr & (align - 1);
+ }
+ ntb_epc->is_msix = true;
+
+ return 0;
+}
+
+/**
+ * epf_ntb_configure_db() - Configure the Outbound Address Space for one host
+ * to ring the doorbell of the other host
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ * @db_count: Number of doorbells that have to be configured
+ * @msix: Indicates whether MSI-X or MSI should be used
+ *
+ * Invokes epf_ntb_configure_msix() or epf_ntb_configure_msi() required for
+ * one HOST to ring the doorbell of the other HOST.
+ */
+static int
+epf_ntb_configure_db(struct epf_ntb *ntb, enum pci_epc_interface_type type,
+ u16 db_count, bool msix)
+{
+ int ret;
+
+ if (db_count > MAX_DB_COUNT)
+ return -EINVAL;
+
+ if (msix)
+ ret = epf_ntb_configure_msix(ntb, type, db_count);
+ else
+ ret = epf_ntb_configure_msi(ntb, type, db_count);
+
+ WARN(ret < 0, "%s intf: Failed to configure DB\n",
+ pci_epc_interface_string(type));
+
+ return ret;
+}
+
+/**
+ * epf_ntb_teardown_db() - Unmap the address in the OB address space that was
+ * mapped to the MSI/MSI-X address
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ *
+ * Invoke pci_epc_unmap_addr() to unmap OB address to MSI/MSI-X address.
+ */
+static void
+epf_ntb_teardown_db(struct epf_ntb *ntb, enum pci_epc_interface_type type)
+{
+ struct epf_ntb_epc *peer_ntb_epc, *ntb_epc;
+ struct pci_epf_bar *peer_epf_bar;
+ enum pci_barno peer_barno;
+ phys_addr_t phys_addr;
+ struct pci_epc *epc;
+ u8 func_no;
+
+ ntb_epc = ntb->epc[type];
+ epc = ntb_epc->epc;
+
+ peer_ntb_epc = ntb->epc[!type];
+ peer_barno = peer_ntb_epc->epf_ntb_bar[BAR_DB_MW1];
+ peer_epf_bar = &peer_ntb_epc->epf_bar[peer_barno];
+ phys_addr = peer_epf_bar->phys_addr;
+ func_no = ntb_epc->func_no;
+
+ pci_epc_unmap_addr(epc, func_no, phys_addr);
+}
+
+/**
+ * epf_ntb_cmd_handler() - Handle commands provided by the NTB Host
+ * @work: work_struct for the two epf_ntb_epc (PRIMARY and SECONDARY)
+ *
+ * Workqueue function that gets invoked for the two epf_ntb_epc
+ * periodically (once every 5ms) to see if it has received any commands
+ * from the NTB host. The host can send commands to configure a doorbell,
+ * configure a memory window, or update the link status.
+ */
+static void epf_ntb_cmd_handler(struct work_struct *work)
+{
+ enum pci_epc_interface_type type;
+ struct epf_ntb_epc *ntb_epc;
+ struct epf_ntb_ctrl *ctrl;
+ u32 command, argument;
+ struct epf_ntb *ntb;
+ struct device *dev;
+ u16 db_count;
+ bool is_msix;
+ int ret;
+
+ ntb_epc = container_of(work, struct epf_ntb_epc, cmd_handler.work);
+ ctrl = ntb_epc->reg;
+ command = ctrl->command;
+ if (!command)
+ goto reset_handler;
+ argument = ctrl->argument;
+
+ ctrl->command = 0;
+ ctrl->argument = 0;
+
+ ctrl = ntb_epc->reg;
+ type = ntb_epc->type;
+ ntb = ntb_epc->epf_ntb;
+ dev = &ntb->epf->dev;
+
+ switch (command) {
+ case COMMAND_CONFIGURE_DOORBELL:
+ db_count = argument & DB_COUNT_MASK;
+ is_msix = argument & MSIX_ENABLE;
+ ret = epf_ntb_configure_db(ntb, type, db_count, is_msix);
+ if (ret < 0)
+ ctrl->command_status = COMMAND_STATUS_ERROR;
+ else
+ ctrl->command_status = COMMAND_STATUS_OK;
+ break;
+ case COMMAND_TEARDOWN_DOORBELL:
+ epf_ntb_teardown_db(ntb, type);
+ ctrl->command_status = COMMAND_STATUS_OK;
+ break;
+ case COMMAND_CONFIGURE_MW:
+ ret = epf_ntb_configure_mw(ntb, type, argument);
+ if (ret < 0)
+ ctrl->command_status = COMMAND_STATUS_ERROR;
+ else
+ ctrl->command_status = COMMAND_STATUS_OK;
+ break;
+ case COMMAND_TEARDOWN_MW:
+ epf_ntb_teardown_mw(ntb, type, argument);
+ ctrl->command_status = COMMAND_STATUS_OK;
+ break;
+ case COMMAND_LINK_UP:
+ ntb_epc->linkup = true;
+ if (ntb->epc[PRIMARY_INTERFACE]->linkup &&
+ ntb->epc[SECONDARY_INTERFACE]->linkup) {
+ ret = epf_ntb_link_up(ntb, true);
+ if (ret < 0)
+ ctrl->command_status = COMMAND_STATUS_ERROR;
+ else
+ ctrl->command_status = COMMAND_STATUS_OK;
+ goto reset_handler;
+ }
+ ctrl->command_status = COMMAND_STATUS_OK;
+ break;
+ case COMMAND_LINK_DOWN:
+ ntb_epc->linkup = false;
+ ret = epf_ntb_link_up(ntb, false);
+ if (ret < 0)
+ ctrl->command_status = COMMAND_STATUS_ERROR;
+ else
+ ctrl->command_status = COMMAND_STATUS_OK;
+ break;
+ default:
+ dev_err(dev, "%s intf UNKNOWN command: %d\n",
+ pci_epc_interface_string(type), command);
+ break;
+ }
+
+reset_handler:
+ queue_delayed_work(kpcintb_workqueue, &ntb_epc->cmd_handler,
+ msecs_to_jiffies(5));
+}
+
+/**
+ * epf_ntb_peer_spad_bar_clear() - Clears Peer Scratchpad BAR
+ * @ntb_epc: EPC context of the interface whose peer scratchpad BAR has to be
+ * cleared
+ *
+ *+-----------------+------->+------------------+ +-----------------+
+ *| BAR0 | | CONFIG REGION | | BAR0 |
+ *+-----------------+----+ +------------------+<-------+-----------------+
+ *| BAR1 | | |SCRATCHPAD REGION | | BAR1 |
+ *+-----------------+ +-->+------------------+<-------+-----------------+
+ *| BAR2 | Local Memory | BAR2 |
+ *+-----------------+ +-----------------+
+ *| BAR3 | | BAR3 |
+ *+-----------------+ +-----------------+
+ *| BAR4 | | BAR4 |
+ *+-----------------+ +-----------------+
+ *| BAR5 | | BAR5 |
+ *+-----------------+ +-----------------+
+ * EP CONTROLLER 1 EP CONTROLLER 2
+ *
+ * It clears BAR1 of EP CONTROLLER 2, which contains HOST2's peer scratchpad
+ * region. While BAR1 is the default peer scratchpad BAR, an NTB could have
+ * other BARs for peer scratchpad (because of 64-bit BARs or reserved BARs).
+ * This function can get the exact BAR used for peer scratchpad from
+ * epf_ntb_bar[BAR_PEER_SPAD].
+ *
+ * Since HOST2's peer scratchpad is also HOST1's self scratchpad, this function
+ * gets the address of peer scratchpad from
+ * peer_ntb_epc->epf_ntb_bar[BAR_CONFIG].
+ */
+static void epf_ntb_peer_spad_bar_clear(struct epf_ntb_epc *ntb_epc)
+{
+ struct pci_epf_bar *epf_bar;
+ enum pci_barno barno;
+ struct pci_epc *epc;
+ u8 func_no;
+
+ epc = ntb_epc->epc;
+ func_no = ntb_epc->func_no;
+ barno = ntb_epc->epf_ntb_bar[BAR_PEER_SPAD];
+ epf_bar = &ntb_epc->epf_bar[barno];
+ pci_epc_clear_bar(epc, func_no, epf_bar);
+}
+
+/**
+ * epf_ntb_peer_spad_bar_set() - Sets peer scratchpad BAR
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ *
+ *+-----------------+------->+------------------+ +-----------------+
+ *| BAR0 | | CONFIG REGION | | BAR0 |
+ *+-----------------+----+ +------------------+<-------+-----------------+
+ *| BAR1 | | |SCRATCHPAD REGION | | BAR1 |
+ *+-----------------+ +-->+------------------+<-------+-----------------+
+ *| BAR2 | Local Memory | BAR2 |
+ *+-----------------+ +-----------------+
+ *| BAR3 | | BAR3 |
+ *+-----------------+ +-----------------+
+ *| BAR4 | | BAR4 |
+ *+-----------------+ +-----------------+
+ *| BAR5 | | BAR5 |
+ *+-----------------+ +-----------------+
+ * EP CONTROLLER 1 EP CONTROLLER 2
+ *
+ * It sets BAR1 of EP CONTROLLER 2, which contains HOST2's peer scratchpad
+ * region. While BAR1 is the default peer scratchpad BAR, an NTB could have
+ * other BARs for peer scratchpad (because of 64-bit BARs or reserved BARs).
+ * This function can get the exact BAR used for peer scratchpad from
+ * epf_ntb_bar[BAR_PEER_SPAD].
+ *
+ * Since HOST2's peer scratchpad is also HOST1's self scratchpad, this function
+ * gets the address of peer scratchpad from
+ * peer_ntb_epc->epf_ntb_bar[BAR_CONFIG].
+ */
+static int
+epf_ntb_peer_spad_bar_set(struct epf_ntb *ntb, enum pci_epc_interface_type type)
+{
+ struct epf_ntb_epc *peer_ntb_epc, *ntb_epc;
+ struct pci_epf_bar *peer_epf_bar, *epf_bar;
+ enum pci_barno peer_barno, barno;
+ u32 peer_spad_offset;
+ struct pci_epc *epc;
+ struct device *dev;
+ u8 func_no;
+ int ret;
+
+ dev = &ntb->epf->dev;
+
+ peer_ntb_epc = ntb->epc[!type];
+ peer_barno = peer_ntb_epc->epf_ntb_bar[BAR_CONFIG];
+ peer_epf_bar = &peer_ntb_epc->epf_bar[peer_barno];
+
+ ntb_epc = ntb->epc[type];
+ barno = ntb_epc->epf_ntb_bar[BAR_PEER_SPAD];
+ epf_bar = &ntb_epc->epf_bar[barno];
+ func_no = ntb_epc->func_no;
+ epc = ntb_epc->epc;
+
+ peer_spad_offset = peer_ntb_epc->reg->spad_offset;
+ epf_bar->phys_addr = peer_epf_bar->phys_addr + peer_spad_offset;
+ epf_bar->size = peer_ntb_epc->spad_size;
+ epf_bar->barno = barno;
+ epf_bar->flags = PCI_BASE_ADDRESS_MEM_TYPE_32;
+
+ ret = pci_epc_set_bar(ntb_epc->epc, func_no, epf_bar);
+ if (ret) {
+ dev_err(dev, "%s intf: peer SPAD BAR set failed\n",
+ pci_epc_interface_string(type));
+ return ret;
+ }
+
+ return 0;
+}
+
+/**
+ * epf_ntb_config_sspad_bar_clear() - Clears Config + Self scratchpad BAR
+ * @ntb_epc: EPC context of the interface whose config + self scratchpad BAR
+ * has to be cleared
+ *
+ *+-----------------+------->+------------------+ +-----------------+
+ *| BAR0 | | CONFIG REGION | | BAR0 |
+ *+-----------------+----+ +------------------+<-------+-----------------+
+ *| BAR1 | | |SCRATCHPAD REGION | | BAR1 |
+ *+-----------------+ +-->+------------------+<-------+-----------------+
+ *| BAR2 | Local Memory | BAR2 |
+ *+-----------------+ +-----------------+
+ *| BAR3 | | BAR3 |
+ *+-----------------+ +-----------------+
+ *| BAR4 | | BAR4 |
+ *+-----------------+ +-----------------+
+ *| BAR5 | | BAR5 |
+ *+-----------------+ +-----------------+
+ * EP CONTROLLER 1 EP CONTROLLER 2
+ *
+ * It clears BAR0 of EP CONTROLLER 1, which contains HOST1's config and
+ * self scratchpad region (removes inbound ATU configuration). While BAR0 is
+ * the default self scratchpad BAR, an NTB could have other BARs for self
+ * scratchpad (because of reserved BARs). This function can get the exact BAR
+ * used for self scratchpad from epf_ntb_bar[BAR_CONFIG].
+ *
+ * Please note the self scratchpad region and the config region are combined
+ * into a single region and mapped using the same BAR. Also note HOST2's peer
+ * scratchpad is HOST1's self scratchpad.
+ */
+static void epf_ntb_config_sspad_bar_clear(struct epf_ntb_epc *ntb_epc)
+{
+ struct pci_epf_bar *epf_bar;
+ enum pci_barno barno;
+ struct pci_epc *epc;
+ u8 func_no;
+
+ epc = ntb_epc->epc;
+ func_no = ntb_epc->func_no;
+ barno = ntb_epc->epf_ntb_bar[BAR_CONFIG];
+ epf_bar = &ntb_epc->epf_bar[barno];
+ pci_epc_clear_bar(epc, func_no, epf_bar);
+}
+
+/**
+ * epf_ntb_config_sspad_bar_set() - Sets Config + Self scratchpad BAR
+ * @ntb_epc: EPC context of the interface whose config + self scratchpad BAR
+ * has to be set
+ *
+ *+-----------------+------->+------------------+ +-----------------+
+ *| BAR0 | | CONFIG REGION | | BAR0 |
+ *+-----------------+----+ +------------------+<-------+-----------------+
+ *| BAR1 | | |SCRATCHPAD REGION | | BAR1 |
+ *+-----------------+ +-->+------------------+<-------+-----------------+
+ *| BAR2 | Local Memory | BAR2 |
+ *+-----------------+ +-----------------+
+ *| BAR3 | | BAR3 |
+ *+-----------------+ +-----------------+
+ *| BAR4 | | BAR4 |
+ *+-----------------+ +-----------------+
+ *| BAR5 | | BAR5 |
+ *+-----------------+ +-----------------+
+ * EP CONTROLLER 1 EP CONTROLLER 2
+ *
+ * It maps BAR0 of EP CONTROLLER 1, which contains HOST1's config and
+ * self scratchpad region. While BAR0 is the default self scratchpad BAR, an
+ * NTB could have other BARs for self scratchpad (because of reserved BARs).
+ * This function can get the exact BAR used for self scratchpad from
+ * epf_ntb_bar[BAR_CONFIG].
+ *
+ * Please note the self scratchpad region and the config region are combined
+ * into a single region and mapped using the same BAR. Also note HOST2's peer
+ * scratchpad is HOST1's self scratchpad.
+ */
+static int epf_ntb_config_sspad_bar_set(struct epf_ntb_epc *ntb_epc)
+{
+ struct pci_epf_bar *epf_bar;
+ enum pci_barno barno;
+ struct epf_ntb *ntb;
+ struct pci_epc *epc;
+ struct device *dev;
+ u8 func_no;
+ int ret;
+
+ ntb = ntb_epc->epf_ntb;
+ dev = &ntb->epf->dev;
+
+ epc = ntb_epc->epc;
+ func_no = ntb_epc->func_no;
+ barno = ntb_epc->epf_ntb_bar[BAR_CONFIG];
+ epf_bar = &ntb_epc->epf_bar[barno];
+
+ ret = pci_epc_set_bar(epc, func_no, epf_bar);
+ if (ret) {
+ dev_err(dev, "%s intf: Config/Status/SPAD BAR set failed\n",
+ pci_epc_interface_string(ntb_epc->type));
+ return ret;
+ }
+
+ return 0;
+}
+
+/**
+ * epf_ntb_config_spad_bar_free() - Free the physical memory associated with
+ * config + scratchpad region
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ *
+ *+-----------------+------->+------------------+ +-----------------+
+ *| BAR0 | | CONFIG REGION | | BAR0 |
+ *+-----------------+----+ +------------------+<-------+-----------------+
+ *| BAR1 | | |SCRATCHPAD REGION | | BAR1 |
+ *+-----------------+ +-->+------------------+<-------+-----------------+
+ *| BAR2 | Local Memory | BAR2 |
+ *+-----------------+ +-----------------+
+ *| BAR3 | | BAR3 |
+ *+-----------------+ +-----------------+
+ *| BAR4 | | BAR4 |
+ *+-----------------+ +-----------------+
+ *| BAR5 | | BAR5 |
+ *+-----------------+ +-----------------+
+ * EP CONTROLLER 1 EP CONTROLLER 2
+ *
+ * This function frees the Local Memory mentioned in the above diagram. After
+ * invoking this function, neither the config + self scratchpad region of
+ * HOST1 nor the peer scratchpad region of HOST2 should be accessed.
+ */
+static void epf_ntb_config_spad_bar_free(struct epf_ntb *ntb)
+{
+ enum pci_epc_interface_type type;
+ struct epf_ntb_epc *ntb_epc;
+ enum pci_barno barno;
+ struct pci_epf *epf;
+
+ epf = ntb->epf;
+ for (type = PRIMARY_INTERFACE; type <= SECONDARY_INTERFACE; type++) {
+ ntb_epc = ntb->epc[type];
+ barno = ntb_epc->epf_ntb_bar[BAR_CONFIG];
+ if (ntb_epc->reg)
+ pci_epf_free_space(epf, ntb_epc->reg, barno, type);
+ }
+}
+
+/**
+ * epf_ntb_config_spad_bar_alloc() - Allocate memory for config + scratchpad
+ * region
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ *
+ *+-----------------+------->+------------------+ +-----------------+
+ *| BAR0 | | CONFIG REGION | | BAR0 |
+ *+-----------------+----+ +------------------+<-------+-----------------+
+ *| BAR1 | | |SCRATCHPAD REGION | | BAR1 |
+ *+-----------------+ +-->+------------------+<-------+-----------------+
+ *| BAR2 | Local Memory | BAR2 |
+ *+-----------------+ +-----------------+
+ *| BAR3 | | BAR3 |
+ *+-----------------+ +-----------------+
+ *| BAR4 | | BAR4 |
+ *+-----------------+ +-----------------+
+ *| BAR5 | | BAR5 |
+ *+-----------------+ +-----------------+
+ * EP CONTROLLER 1 EP CONTROLLER 2
+ *
+ * This function allocates the Local Memory mentioned in the above diagram.
+ * The size of the CONFIG REGION is sizeof(struct epf_ntb_ctrl) and the size
+ * of the SCRATCHPAD REGION is derived from the "spad_count" configfs
+ * attribute.
+ *
+ * The size of both the config region and the scratchpad region has to be
+ * aligned, since the scratchpad region will also be mapped as the PEER
+ * SCRATCHPAD of the other host using a separate BAR.
+ */
+static int
+epf_ntb_config_spad_bar_alloc(struct epf_ntb *ntb,
+ enum pci_epc_interface_type type)
+{
+ const struct pci_epc_features *peer_epc_features, *epc_features;
+ struct epf_ntb_epc *peer_ntb_epc, *ntb_epc;
+ size_t msix_table_size, pba_size, align;
+ enum pci_barno peer_barno, barno;
+ struct epf_ntb_ctrl *ctrl;
+ u32 spad_size, ctrl_size;
+ u64 size, peer_size;
+ struct pci_epc *epc;
+ struct pci_epf *epf;
+ struct device *dev;
+ bool msix_capable;
+ u32 spad_count;
+ void *base;
+
+ epf = ntb->epf;
+ dev = &epf->dev;
+ ntb_epc = ntb->epc[type];
+ epc = ntb_epc->epc;
+
+ epc_features = ntb_epc->epc_features;
+ barno = ntb_epc->epf_ntb_bar[BAR_CONFIG];
+ size = epc_features->bar_fixed_size[barno];
+ align = epc_features->align;
+
+ peer_ntb_epc = ntb->epc[!type];
+ peer_epc_features = peer_ntb_epc->epc_features;
+ peer_barno = peer_ntb_epc->epf_ntb_bar[BAR_PEER_SPAD];
+ peer_size = peer_epc_features->bar_fixed_size[peer_barno];
+
+ /* Bail out if epc_features is populated incorrectly */
+ if (!IS_ALIGNED(size, align))
+ return -EINVAL;
+
+ spad_count = ntb->spad_count;
+
+ ctrl_size = sizeof(struct epf_ntb_ctrl);
+ spad_size = spad_count * 4;
+
+ msix_capable = epc_features->msix_capable;
+ if (msix_capable) {
+ msix_table_size = PCI_MSIX_ENTRY_SIZE * ntb->db_count;
+ ctrl_size = ALIGN(ctrl_size, 8);
+ ntb_epc->msix_table_offset = ctrl_size;
+ ntb_epc->msix_bar = barno;
+ /* Align to QWORD or 8 Bytes */
+ pba_size = ALIGN(DIV_ROUND_UP(ntb->db_count, 8), 8);
+ ctrl_size = ctrl_size + msix_table_size + pba_size;
+ }
+
+ if (!align) {
+ ctrl_size = roundup_pow_of_two(ctrl_size);
+ spad_size = roundup_pow_of_two(spad_size);
+ } else {
+ ctrl_size = ALIGN(ctrl_size, align);
+ spad_size = ALIGN(spad_size, align);
+ }
+
+ if (peer_size) {
+ if (peer_size < spad_size)
+ spad_count = peer_size / 4;
+ spad_size = peer_size;
+ }
+
+ /*
+ * In order to make sure SPAD offset is aligned to its size,
+ * expand control region size to the size of SPAD if SPAD size
+ * is greater than control region size.
+ */
+ if (spad_size > ctrl_size)
+ ctrl_size = spad_size;
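+ /*
+  * Illustrative example (hypothetical numbers, not from this patch):
+  * with align = 0x100, ctrl_size = 0x134 and spad_count = 64, ctrl_size
+  * is aligned up to 0x200 and spad_size (64 * 4) to 0x100; since
+  * spad_size <= ctrl_size, the SPAD region starts at offset ctrl_size
+  * (0x200), which is aligned to the SPAD size.
+  */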
+
+ if (!size)
+ size = ctrl_size + spad_size;
+ else if (size < ctrl_size + spad_size)
+ return -EINVAL;
+
+ base = pci_epf_alloc_space(epf, size, barno, align, type);
+ if (!base) {
+ dev_err(dev, "%s intf: Config/Status/SPAD alloc region fail\n",
+ pci_epc_interface_string(type));
+ return -ENOMEM;
+ }
+
+ ntb_epc->reg = base;
+
+ ctrl = ntb_epc->reg;
+ ctrl->spad_offset = ctrl_size;
+ ctrl->spad_count = spad_count;
+ ctrl->num_mws = ntb->num_mws;
+ ctrl->db_entry_size = align ? align : 4;
+ ntb_epc->spad_size = spad_size;
+
+ return 0;
+}
+
+/**
+ * epf_ntb_config_spad_bar_alloc_interface() - Allocate memory for config +
+ * scratchpad region for each of PRIMARY and SECONDARY interface
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ *
+ * Wrapper for epf_ntb_config_spad_bar_alloc() which allocates memory for
+ * config + scratchpad region for a specific interface
+ */
+static int epf_ntb_config_spad_bar_alloc_interface(struct epf_ntb *ntb)
+{
+ enum pci_epc_interface_type type;
+ struct device *dev;
+ int ret;
+
+ dev = &ntb->epf->dev;
+
+ for (type = PRIMARY_INTERFACE; type <= SECONDARY_INTERFACE; type++) {
+ ret = epf_ntb_config_spad_bar_alloc(ntb, type);
+ if (ret) {
+ dev_err(dev, "%s intf: Config/SPAD BAR alloc failed\n",
+ pci_epc_interface_string(type));
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+/**
+ * epf_ntb_free_peer_mem() - Free memory allocated in peer's outbound address
+ * space
+ * @ntb_epc: EPC associated with one of the HOSTs which holds peer's outbound
+ * address regions
+ *
+ *+-----------------+ +----->+----------------+-----------+-----------------+
+ *| BAR0 | | | Doorbell 1 +-----------> MSI|X ADDRESS 1 |
+ *+-----------------+ | +----------------+ +-----------------+
+ *| BAR1 | | | Doorbell 2 +---------+ | |
+ *+-----------------+----+ +----------------+ | | |
+ *| BAR2 | | Doorbell 3 +-------+ | +-----------------+
+ *+-----------------+----+ +----------------+ | +-> MSI|X ADDRESS 2 |
+ *| BAR3 | | | Doorbell 4 +-----+ | +-----------------+
+ *+-----------------+ | |----------------+ | | | |
+ *| BAR4 | | | | | | +-----------------+
+ *+-----------------+ | | MW1 +---+ | +-->+ MSI|X ADDRESS 3||
+ *| BAR5 | | | | | | +-----------------+
+ *+-----------------+ +----->-----------------+ | | | |
+ * EP CONTROLLER 1 | | | | +-----------------+
+ * | | | +---->+ MSI|X ADDRESS 4 |
+ * +----------------+ | +-----------------+
+ * (A) EP CONTROLLER 2 | | |
+ * (OB SPACE) | | |
+ * +-------> MW1 |
+ * | |
+ * | |
+ * (B) +-----------------+
+ * | |
+ * | |
+ * | |
+ * | |
+ * | |
+ * +-----------------+
+ * PCI Address Space
+ * (Managed by HOST2)
+ *
+ * This function frees memory allocated in EP CONTROLLER 2 (OB SPACE) in the
+ * above diagram. It'll free Doorbell 1, Doorbell 2, Doorbell 3, Doorbell 4,
+ * MW1 (and MW2, MW3, MW4).
+ */
+static void epf_ntb_free_peer_mem(struct epf_ntb_epc *ntb_epc)
+{
+ struct pci_epf_bar *epf_bar;
+ void __iomem *mw_addr;
+ phys_addr_t phys_addr;
+ enum epf_ntb_bar bar;
+ enum pci_barno barno;
+ struct pci_epc *epc;
+ size_t size;
+
+ epc = ntb_epc->epc;
+
+ for (bar = BAR_DB_MW1; bar <= BAR_MW4; bar++) {
+ barno = ntb_epc->epf_ntb_bar[bar];
+ mw_addr = ntb_epc->mw_addr[barno];
+ epf_bar = &ntb_epc->epf_bar[barno];
+ phys_addr = epf_bar->phys_addr;
+ size = epf_bar->size;
+ if (mw_addr) {
+ pci_epc_mem_free_addr(epc, phys_addr, mw_addr, size);
+ ntb_epc->mw_addr[barno] = NULL;
+ }
+ }
+}
+
+/**
+ * epf_ntb_db_mw_bar_clear() - Clears doorbell and memory BARs
+ * @ntb_epc: EPC associated with one of the HOSTs which holds peer's outbound
+ * address
+ *
+ *+-----------------+ +----->+----------------+-----------+-----------------+
+ *| BAR0 | | | Doorbell 1 +-----------> MSI|X ADDRESS 1 |
+ *+-----------------+ | +----------------+ +-----------------+
+ *| BAR1 | | | Doorbell 2 +---------+ | |
+ *+-----------------+----+ +----------------+ | | |
+ *| BAR2 | | Doorbell 3 +-------+ | +-----------------+
+ *+-----------------+----+ +----------------+ | +-> MSI|X ADDRESS 2 |
+ *| BAR3 | | | Doorbell 4 +-----+ | +-----------------+
+ *+-----------------+ | |----------------+ | | | |
+ *| BAR4 | | | | | | +-----------------+
+ *+-----------------+ | | MW1 +---+ | +-->+ MSI|X ADDRESS 3||
+ *| BAR5 | | | | | | +-----------------+
+ *+-----------------+ +----->-----------------+ | | | |
+ * EP CONTROLLER 1 | | | | +-----------------+
+ * | | | +---->+ MSI|X ADDRESS 4 |
+ * +----------------+ | +-----------------+
+ * (A) EP CONTROLLER 2 | | |
+ * (OB SPACE) | | |
+ * +-------> MW1 |
+ * | |
+ * | |
+ * (B) +-----------------+
+ * | |
+ * | |
+ * | |
+ * | |
+ * | |
+ * +-----------------+
+ * PCI Address Space
+ * (Managed by HOST2)
+ *
+ * This function clears doorbell and memory BARs (removes the inbound ATU
+ * configuration). In the above diagram it clears BAR2 to BAR5 of EP
+ * CONTROLLER 1 (Doorbell BAR, MW1 BAR, MW2 BAR, MW3 BAR and MW4 BAR).
+ */
+static void epf_ntb_db_mw_bar_clear(struct epf_ntb_epc *ntb_epc)
+{
+ struct pci_epf_bar *epf_bar;
+ enum epf_ntb_bar bar;
+ enum pci_barno barno;
+ struct pci_epc *epc;
+ u8 func_no;
+
+ epc = ntb_epc->epc;
+
+ func_no = ntb_epc->func_no;
+
+ for (bar = BAR_DB_MW1; bar <= BAR_MW4; bar++) {
+ barno = ntb_epc->epf_ntb_bar[bar];
+ epf_bar = &ntb_epc->epf_bar[barno];
+ pci_epc_clear_bar(epc, func_no, epf_bar);
+ }
+}
+
+/**
+ * epf_ntb_db_mw_bar_cleanup() - Clear doorbell/memory BAR and free memory
+ * allocated in peer's outbound address space
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ *
+ * This function is a wrapper for epf_ntb_db_mw_bar_clear() which clears
+ * HOST1's BAR and epf_ntb_free_peer_mem() which frees up HOST2 outbound
+ * memory.
+ */
+static void epf_ntb_db_mw_bar_cleanup(struct epf_ntb *ntb,
+ enum pci_epc_interface_type type)
+{
+ struct epf_ntb_epc *peer_ntb_epc, *ntb_epc;
+
+ ntb_epc = ntb->epc[type];
+ peer_ntb_epc = ntb->epc[!type];
+
+ epf_ntb_db_mw_bar_clear(ntb_epc);
+ epf_ntb_free_peer_mem(peer_ntb_epc);
+}
+
+/**
+ * epf_ntb_configure_interrupt() - Configure MSI/MSI-X capability
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ *
+ * Configures MSI/MSI-X capability for each interface with the number of
+ * interrupts equal to the "db_count" configfs entry.
+ */
+static int epf_ntb_configure_interrupt(struct epf_ntb *ntb,
+ enum pci_epc_interface_type type)
+{
+ const struct pci_epc_features *epc_features;
+ bool msix_capable, msi_capable;
+ struct epf_ntb_epc *ntb_epc;
+ struct pci_epc *epc;
+ struct device *dev;
+ u32 db_count;
+ u8 func_no;
+ int ret;
+
+ ntb_epc = ntb->epc[type];
+ dev = &ntb->epf->dev;
+
+ epc_features = ntb_epc->epc_features;
+ msix_capable = epc_features->msix_capable;
+ msi_capable = epc_features->msi_capable;
+
+ if (!(msix_capable || msi_capable)) {
+ dev_err(dev, "MSI or MSI-X is required for doorbell\n");
+ return -EINVAL;
+ }
+
+ func_no = ntb_epc->func_no;
+
+ db_count = ntb->db_count;
+ if (db_count > MAX_DB_COUNT) {
+ dev_err(dev, "DB count cannot be more than %d\n", MAX_DB_COUNT);
+ return -EINVAL;
+ }
+
+ ntb->db_count = db_count;
+ epc = ntb_epc->epc;
+
+ if (msi_capable) {
+ ret = pci_epc_set_msi(epc, func_no, db_count);
+ if (ret) {
+ dev_err(dev, "%s intf: MSI configuration failed\n",
+ pci_epc_interface_string(type));
+ return ret;
+ }
+ }
+
+ if (msix_capable) {
+ ret = pci_epc_set_msix(epc, func_no, db_count,
+ ntb_epc->msix_bar,
+ ntb_epc->msix_table_offset);
+ if (ret) {
+ dev_err(dev, "MSI configuration failed\n");
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+/**
+ * epf_ntb_alloc_peer_mem() - Allocate memory in peer's outbound address space
+ * @dev: Device (of the EPF) used for printing error messages
+ * @ntb_epc: EPC associated with one of the HOSTs whose BAR holds peer's
+ * outbound address
+ * @bar: BAR of @ntb_epc for which memory has to be allocated (could be
+ * BAR_DB_MW1, BAR_MW2, BAR_MW3, BAR_MW4)
+ * @peer_ntb_epc: EPC associated with the HOST whose outbound address space is
+ * used by @ntb_epc
+ * @size: Size of the address region that has to be allocated in peer's OB SPACE
+ *
+ *
+ *+-----------------+ +----->+----------------+-----------+-----------------+
+ *| BAR0 | | | Doorbell 1 +-----------> MSI|X ADDRESS 1 |
+ *+-----------------+ | +----------------+ +-----------------+
+ *| BAR1 | | | Doorbell 2 +---------+ | |
+ *+-----------------+----+ +----------------+ | | |
+ *| BAR2 | | Doorbell 3 +-------+ | +-----------------+
+ *+-----------------+----+ +----------------+ | +-> MSI|X ADDRESS 2 |
+ *| BAR3 | | | Doorbell 4 +-----+ | +-----------------+
+ *+-----------------+ | |----------------+ | | | |
+ *| BAR4 | | | | | | +-----------------+
+ *+-----------------+ | | MW1 +---+ | +-->+ MSI|X ADDRESS 3||
+ *| BAR5 | | | | | | +-----------------+
+ *+-----------------+ +----->-----------------+ | | | |
+ * EP CONTROLLER 1 | | | | +-----------------+
+ * | | | +---->+ MSI|X ADDRESS 4 |
+ * +----------------+ | +-----------------+
+ * (A) EP CONTROLLER 2 | | |
+ * (OB SPACE) | | |
+ * +-------> MW1 |
+ * | |
+ * | |
+ * (B) +-----------------+
+ * | |
+ * | |
+ * | |
+ * | |
+ * | |
+ * +-----------------+
+ * PCI Address Space
+ * (Managed by HOST2)
+ *
+ * This function allocates memory in OB space of EP CONTROLLER 2 in the
+ * above diagram. It'll allocate for Doorbell 1, Doorbell 2, Doorbell 3,
+ * Doorbell 4, MW1 (and MW2, MW3, MW4).
+ */
+static int
+epf_ntb_alloc_peer_mem(struct device *dev, struct epf_ntb_epc *ntb_epc,
+ enum epf_ntb_bar bar, struct epf_ntb_epc *peer_ntb_epc,
+ size_t size)
+{
+ const struct pci_epc_features *epc_features;
+ struct pci_epf_bar *epf_bar;
+ struct pci_epc *peer_epc;
+ phys_addr_t phys_addr;
+ void __iomem *mw_addr;
+ enum pci_barno barno;
+ size_t align;
+
+ epc_features = ntb_epc->epc_features;
+ align = epc_features->align;
+
+ if (size < 128)
+ size = 128;
+
+ if (align)
+ size = ALIGN(size, align);
+ else
+ size = roundup_pow_of_two(size);
+
+ peer_epc = peer_ntb_epc->epc;
+ mw_addr = pci_epc_mem_alloc_addr(peer_epc, &phys_addr, size);
+ if (!mw_addr) {
+ dev_err(dev, "%s intf: Failed to allocate OB address\n",
+ pci_epc_interface_string(peer_ntb_epc->type));
+ return -ENOMEM;
+ }
+
+ barno = ntb_epc->epf_ntb_bar[bar];
+ epf_bar = &ntb_epc->epf_bar[barno];
+ ntb_epc->mw_addr[barno] = mw_addr;
+
+ epf_bar->phys_addr = phys_addr;
+ epf_bar->size = size;
+ epf_bar->barno = barno;
+ epf_bar->flags = PCI_BASE_ADDRESS_MEM_TYPE_32;
+
+ return 0;
+}
+
+/**
+ * epf_ntb_db_mw_bar_init() - Configure Doorbell and Memory window BARs
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ *
+ * Wrapper for epf_ntb_alloc_peer_mem() and pci_epc_set_bar() that allocates
+ * memory in OB address space of HOST2 and configures BAR of HOST1
+ */
+static int epf_ntb_db_mw_bar_init(struct epf_ntb *ntb,
+ enum pci_epc_interface_type type)
+{
+ const struct pci_epc_features *epc_features;
+ struct epf_ntb_epc *peer_ntb_epc, *ntb_epc;
+ struct pci_epf_bar *epf_bar;
+ struct epf_ntb_ctrl *ctrl;
+ u32 num_mws, db_count;
+ enum epf_ntb_bar bar;
+ enum pci_barno barno;
+ struct pci_epc *epc;
+ struct device *dev;
+ size_t align;
+ int ret, i;
+ u8 func_no;
+ u64 size;
+
+ ntb_epc = ntb->epc[type];
+ peer_ntb_epc = ntb->epc[!type];
+
+ dev = &ntb->epf->dev;
+ epc_features = ntb_epc->epc_features;
+ align = epc_features->align;
+ func_no = ntb_epc->func_no;
+ epc = ntb_epc->epc;
+ num_mws = ntb->num_mws;
+ db_count = ntb->db_count;
+
+ for (bar = BAR_DB_MW1, i = 0; i < num_mws; bar++, i++) {
+ if (bar == BAR_DB_MW1) {
+ align = align ? align : 4;
+ size = db_count * align;
+ size = ALIGN(size, ntb->mws_size[i]);
+ ctrl = ntb_epc->reg;
+ ctrl->mw1_offset = size;
+ size += ntb->mws_size[i];
+ } else {
+ size = ntb->mws_size[i];
+ }
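+ /*
+  * Illustrative example (hypothetical numbers): with db_count = 4,
+  * align = 0x100 and mws_size[0] = SZ_1M, the doorbell region
+  * (4 * 0x100 = 0x400) is padded up to 1MB so that MW1 starts at a
+  * 1MB-aligned offset, making this BAR 2MB in total: [0, 1M) holds
+  * the doorbells and [1M, 2M) holds MW1.
+  */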
+
+ ret = epf_ntb_alloc_peer_mem(dev, ntb_epc, bar,
+ peer_ntb_epc, size);
+ if (ret)
+ goto err_alloc_peer_mem;
+
+ barno = ntb_epc->epf_ntb_bar[bar];
+ epf_bar = &ntb_epc->epf_bar[barno];
+
+ ret = pci_epc_set_bar(epc, func_no, epf_bar);
+ if (ret) {
+ dev_err(dev, "%s intf: DoorBell BAR set failed\n",
+ pci_epc_interface_string(type));
+ goto err_alloc_peer_mem;
+ }
+ }
+
+ return 0;
+
+err_alloc_peer_mem:
+ epf_ntb_db_mw_bar_cleanup(ntb, type);
+
+ return ret;
+}
+
+/**
+ * epf_ntb_epc_destroy_interface() - Cleanup NTB EPC interface
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ *
+ * Unbind the NTB function device from the EPC and relinquish the reference
+ * to pci_epc for each of the interfaces.
+ */
+static void epf_ntb_epc_destroy_interface(struct epf_ntb *ntb,
+ enum pci_epc_interface_type type)
+{
+ struct epf_ntb_epc *ntb_epc;
+ struct pci_epc *epc;
+ struct pci_epf *epf;
+
+ if (type < 0)
+ return;
+
+ epf = ntb->epf;
+ ntb_epc = ntb->epc[type];
+ if (!ntb_epc)
+ return;
+ epc = ntb_epc->epc;
+ pci_epc_remove_epf(epc, epf, type);
+ pci_epc_put(epc);
+}
+
+/**
+ * epf_ntb_epc_destroy() - Cleanup all NTB EPC interfaces
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ *
+ * Wrapper for epf_ntb_epc_destroy_interface() to clean up all the NTB
+ * interfaces.
+ */
+static void epf_ntb_epc_destroy(struct epf_ntb *ntb)
+{
+ enum pci_epc_interface_type type;
+
+ for (type = PRIMARY_INTERFACE; type <= SECONDARY_INTERFACE; type++)
+ epf_ntb_epc_destroy_interface(ntb, type);
+}
+
+/**
+ * epf_ntb_epc_create_interface() - Create and initialize NTB EPC interface
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @epc: struct pci_epc to which a particular NTB interface should be associated
+ * @type: PRIMARY interface or SECONDARY interface
+ *
+ * Allocate memory for NTB EPC interface and initialize it.
+ */
+static int
+epf_ntb_epc_create_interface(struct epf_ntb *ntb, struct pci_epc *epc,
+ enum pci_epc_interface_type type)
+{
+ const struct pci_epc_features *epc_features;
+ struct pci_epf_bar *epf_bar;
+ struct epf_ntb_epc *ntb_epc;
+ struct pci_epf *epf;
+ struct device *dev;
+ u8 func_no;
+
+ dev = &ntb->epf->dev;
+
+ ntb_epc = devm_kzalloc(dev, sizeof(*ntb_epc), GFP_KERNEL);
+ if (!ntb_epc)
+ return -ENOMEM;
+
+ epf = ntb->epf;
+ if (type == PRIMARY_INTERFACE) {
+ func_no = epf->func_no;
+ epf_bar = epf->bar;
+ } else {
+ func_no = epf->sec_epc_func_no;
+ epf_bar = epf->sec_epc_bar;
+ }
+
+ ntb_epc->linkup = false;
+ ntb_epc->epc = epc;
+ ntb_epc->func_no = func_no;
+ ntb_epc->type = type;
+ ntb_epc->epf_bar = epf_bar;
+ ntb_epc->epf_ntb = ntb;
+
+ epc_features = pci_epc_get_features(epc, func_no);
+ if (!epc_features)
+ return -EINVAL;
+ ntb_epc->epc_features = epc_features;
+
+ ntb->epc[type] = ntb_epc;
+
+ return 0;
+}
+
+/**
+ * epf_ntb_epc_create() - Create and initialize NTB EPC interface
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ *
+ * Get a reference to the EPC device and bind the NTB function device to that
+ * EPC for each of the interfaces. It is also a wrapper to
+ * epf_ntb_epc_create_interface() to allocate memory for the NTB EPC interface
+ * and initialize it.
+ */
+static int epf_ntb_epc_create(struct epf_ntb *ntb)
+{
+ struct pci_epf *epf;
+ struct device *dev;
+ int ret;
+
+ epf = ntb->epf;
+ dev = &epf->dev;
+
+ ret = epf_ntb_epc_create_interface(ntb, epf->epc, PRIMARY_INTERFACE);
+ if (ret) {
+ dev_err(dev, "PRIMARY intf: Fail to create NTB EPC\n");
+ return ret;
+ }
+
+ ret = epf_ntb_epc_create_interface(ntb, epf->sec_epc,
+ SECONDARY_INTERFACE);
+ if (ret) {
+ dev_err(dev, "SECONDARY intf: Fail to create NTB EPC\n");
+ goto err_epc_create;
+ }
+
+ return 0;
+
+err_epc_create:
+ epf_ntb_epc_destroy_interface(ntb, PRIMARY_INTERFACE);
+
+ return ret;
+}
+
+/**
+ * epf_ntb_init_epc_bar_interface() - Identify BARs to be used for each of
+ * the NTB constructs (scratchpad region, doorbell, memory window)
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ *
+ * Identify the free BARs to be used for each of BAR_CONFIG, BAR_PEER_SPAD,
+ * BAR_DB_MW1, BAR_MW2, BAR_MW3 and BAR_MW4.
+ */
+static int epf_ntb_init_epc_bar_interface(struct epf_ntb *ntb,
+ enum pci_epc_interface_type type)
+{
+ const struct pci_epc_features *epc_features;
+ struct epf_ntb_epc *ntb_epc;
+ enum pci_barno barno;
+ enum epf_ntb_bar bar;
+ struct device *dev;
+ u32 num_mws;
+ int i;
+
+ barno = BAR_0;
+ ntb_epc = ntb->epc[type];
+ num_mws = ntb->num_mws;
+ dev = &ntb->epf->dev;
+ epc_features = ntb_epc->epc_features;
+
+ /* These are required BARs which are mandatory for NTB functionality */
+ for (bar = BAR_CONFIG; bar <= BAR_DB_MW1; bar++, barno++) {
+ barno = pci_epc_get_next_free_bar(epc_features, barno);
+ if (barno < 0) {
+ dev_err(dev, "%s intf: Fail to get NTB function BAR\n",
+ pci_epc_interface_string(type));
+ return barno;
+ }
+ ntb_epc->epf_ntb_bar[bar] = barno;
+ }
+
+ /* These are optional BARs which don't impact NTB functionality */
+ for (bar = BAR_MW2, i = 1; i < num_mws; bar++, barno++, i++) {
+ barno = pci_epc_get_next_free_bar(epc_features, barno);
+ if (barno < 0) {
+ ntb->num_mws = i;
+ dev_dbg(dev, "BAR not available for > MW%d\n", i + 1);
+ }
+ ntb_epc->epf_ntb_bar[bar] = barno;
+ }
+
+ return 0;
+}
+
+/**
+ * epf_ntb_init_epc_bar() - Identify BARs to be used for each of the NTB
+ * constructs (scratchpad region, doorbell, memory window)
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ *
+ * Wrapper to epf_ntb_init_epc_bar_interface() to identify the free BARs
+ * to be used for each of BAR_CONFIG, BAR_PEER_SPAD, BAR_DB_MW1, BAR_MW2,
+ * BAR_MW3 and BAR_MW4 for all the interfaces.
+ */
+static int epf_ntb_init_epc_bar(struct epf_ntb *ntb)
+{
+ enum pci_epc_interface_type type;
+ struct device *dev;
+ int ret;
+
+ dev = &ntb->epf->dev;
+ for (type = PRIMARY_INTERFACE; type <= SECONDARY_INTERFACE; type++) {
+ ret = epf_ntb_init_epc_bar_interface(ntb, type);
+ if (ret) {
+ dev_err(dev, "Fail to init EPC bar for %s interface\n",
+ pci_epc_interface_string(type));
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+/**
+ * epf_ntb_epc_init_interface() - Initialize NTB interface
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ *
+ * Wrapper to initialize a particular EPC interface and start the workqueue
+ * to check for commands from the host. This function writes to the
+ * EP controller HW to configure it.
+ */
+static int epf_ntb_epc_init_interface(struct epf_ntb *ntb,
+ enum pci_epc_interface_type type)
+{
+ struct epf_ntb_epc *ntb_epc;
+ struct pci_epc *epc;
+ struct pci_epf *epf;
+ struct device *dev;
+ u8 func_no;
+ int ret;
+
+ ntb_epc = ntb->epc[type];
+ epf = ntb->epf;
+ dev = &epf->dev;
+ epc = ntb_epc->epc;
+ func_no = ntb_epc->func_no;
+
+ ret = epf_ntb_config_sspad_bar_set(ntb->epc[type]);
+ if (ret) {
+ dev_err(dev, "%s intf: Config/self SPAD BAR init failed\n",
+ pci_epc_interface_string(type));
+ return ret;
+ }
+
+ ret = epf_ntb_peer_spad_bar_set(ntb, type);
+ if (ret) {
+ dev_err(dev, "%s intf: Peer SPAD BAR init failed\n",
+ pci_epc_interface_string(type));
+ goto err_peer_spad_bar_init;
+ }
+
+ ret = epf_ntb_configure_interrupt(ntb, type);
+ if (ret) {
+ dev_err(dev, "%s intf: Interrupt configuration failed\n",
+ pci_epc_interface_string(type));
+ goto err_peer_spad_bar_init;
+ }
+
+ ret = epf_ntb_db_mw_bar_init(ntb, type);
+ if (ret) {
+ dev_err(dev, "%s intf: DB/MW BAR init failed\n",
+ pci_epc_interface_string(type));
+ goto err_db_mw_bar_init;
+ }
+
+ ret = pci_epc_write_header(epc, func_no, epf->header);
+ if (ret) {
+ dev_err(dev, "%s intf: Configuration header write failed\n",
+ pci_epc_interface_string(type));
+ goto err_write_header;
+ }
+
+ INIT_DELAYED_WORK(&ntb->epc[type]->cmd_handler, epf_ntb_cmd_handler);
+ queue_work(kpcintb_workqueue, &ntb->epc[type]->cmd_handler.work);
+
+ return 0;
+
+err_write_header:
+ epf_ntb_db_mw_bar_cleanup(ntb, type);
+
+err_db_mw_bar_init:
+ epf_ntb_peer_spad_bar_clear(ntb->epc[type]);
+
+err_peer_spad_bar_init:
+ epf_ntb_config_sspad_bar_clear(ntb->epc[type]);
+
+ return ret;
+}
+
+/**
+ * epf_ntb_epc_cleanup_interface() - Cleanup NTB interface
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ *
+ * Wrapper to cleanup a particular NTB interface.
+ */
+static void epf_ntb_epc_cleanup_interface(struct epf_ntb *ntb,
+ enum pci_epc_interface_type type)
+{
+ struct epf_ntb_epc *ntb_epc;
+
+ if (type < 0)
+ return;
+
+ ntb_epc = ntb->epc[type];
+ cancel_delayed_work(&ntb_epc->cmd_handler);
+ epf_ntb_db_mw_bar_cleanup(ntb, type);
+ epf_ntb_peer_spad_bar_clear(ntb_epc);
+ epf_ntb_config_sspad_bar_clear(ntb_epc);
+}
+
+/**
+ * epf_ntb_epc_cleanup() - Cleanup all NTB interfaces
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ *
+ * Wrapper to cleanup all NTB interfaces.
+ */
+static void epf_ntb_epc_cleanup(struct epf_ntb *ntb)
+{
+ enum pci_epc_interface_type type;
+
+ for (type = PRIMARY_INTERFACE; type <= SECONDARY_INTERFACE; type++)
+ epf_ntb_epc_cleanup_interface(ntb, type);
+}
+
+/**
+ * epf_ntb_epc_init() - Initialize all NTB interfaces
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ *
+ * Wrapper to initialize all NTB interfaces and start the workqueue
+ * to check for commands from the host.
+ */
+static int epf_ntb_epc_init(struct epf_ntb *ntb)
+{
+ enum pci_epc_interface_type type;
+ struct device *dev;
+ int ret;
+
+ dev = &ntb->epf->dev;
+
+ for (type = PRIMARY_INTERFACE; type <= SECONDARY_INTERFACE; type++) {
+ ret = epf_ntb_epc_init_interface(ntb, type);
+ if (ret) {
+ dev_err(dev, "%s intf: Failed to initialize\n",
+ pci_epc_interface_string(type));
+ goto err_init_type;
+ }
+ }
+
+ return 0;
+
+err_init_type:
+ epf_ntb_epc_cleanup_interface(ntb, type - 1);
+
+ return ret;
+}
+
+/**
+ * epf_ntb_bind() - Invoked when an EPC is bound to the NTB EPF device
+ * @epf: NTB endpoint function device
+ *
+ * This is invoked when a primary interface or secondary interface is
+ * bound to an EPC device. This function completes only when an EPC is bound
+ * to both the interfaces. It initializes both the endpoint controllers
+ * associated with the NTB function device.
+ */
+static int epf_ntb_bind(struct pci_epf *epf)
+{
+ struct epf_ntb *ntb = epf_get_drvdata(epf);
+ struct device *dev = &epf->dev;
+ int ret;
+
+ if (!epf->epc) {
+ dev_dbg(dev, "PRIMARY EPC interface not yet bound\n");
+ return 0;
+ }
+
+ if (!epf->sec_epc) {
+ dev_dbg(dev, "SECONDARY EPC interface not yet bound\n");
+ return 0;
+ }
+
+ ret = epf_ntb_epc_create(ntb);
+ if (ret) {
+ dev_err(dev, "Failed to create NTB EPC\n");
+ return ret;
+ }
+
+ ret = epf_ntb_init_epc_bar(ntb);
+ if (ret) {
+ dev_err(dev, "Failed to create NTB EPC\n");
+ goto err_bar_init;
+ }
+
+ ret = epf_ntb_config_spad_bar_alloc_interface(ntb);
+ if (ret) {
+ dev_err(dev, "Failed to allocate BAR memory\n");
+ goto err_bar_alloc;
+ }
+
+ ret = epf_ntb_epc_init(ntb);
+ if (ret) {
+ dev_err(dev, "Failed to initialize EPC\n");
+ goto err_bar_alloc;
+ }
+
+ epf_set_drvdata(epf, ntb);
+
+ return 0;
+
+err_bar_alloc:
+ epf_ntb_config_spad_bar_free(ntb);
+
+err_bar_init:
+ epf_ntb_epc_destroy(ntb);
+
+ return ret;
+}
+
+/**
+ * epf_ntb_unbind() - Cleanup the initialization from epf_ntb_bind()
+ * @epf: NTB endpoint function device
+ *
+ * Cleanup the initialization from epf_ntb_bind()
+ */
+static void epf_ntb_unbind(struct pci_epf *epf)
+{
+ struct epf_ntb *ntb = epf_get_drvdata(epf);
+
+ epf_ntb_epc_cleanup(ntb);
+ epf_ntb_config_spad_bar_free(ntb);
+ epf_ntb_epc_destroy(ntb);
+}
+
+#define EPF_NTB_R(_name) \
+static ssize_t epf_ntb_##_name##_show(struct config_item *item, \
+ char *page) \
+{ \
+ struct config_group *group = to_config_group(item); \
+ struct epf_ntb *ntb = to_epf_ntb(group); \
+ \
+ return sprintf(page, "%d\n", ntb->_name); \
+}
+
+#define EPF_NTB_W(_name) \
+static ssize_t epf_ntb_##_name##_store(struct config_item *item, \
+ const char *page, size_t len) \
+{ \
+ struct config_group *group = to_config_group(item); \
+ struct epf_ntb *ntb = to_epf_ntb(group); \
+ u32 val; \
+ int ret; \
+ \
+ ret = kstrtou32(page, 0, &val); \
+ if (ret) \
+ return ret; \
+ \
+ ntb->_name = val; \
+ \
+ return len; \
+}
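+/*
+ * For example, EPF_NTB_R(spad_count) and EPF_NTB_W(spad_count) below expand
+ * to epf_ntb_spad_count_show()/epf_ntb_spad_count_store() helpers that read
+ * and write ntb->spad_count; together with CONFIGFS_ATTR(epf_ntb_,
+ * spad_count) they back the "spad_count" configfs attribute.
+ */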
+
+#define EPF_NTB_MW_R(_name) \
+static ssize_t epf_ntb_##_name##_show(struct config_item *item, \
+ char *page) \
+{ \
+ struct config_group *group = to_config_group(item); \
+ struct epf_ntb *ntb = to_epf_ntb(group); \
+ int win_no; \
+ \
+ sscanf(#_name, "mw%d", &win_no); \
+ \
+ return sprintf(page, "%lld\n", ntb->mws_size[win_no - 1]); \
+}
+
+#define EPF_NTB_MW_W(_name) \
+static ssize_t epf_ntb_##_name##_store(struct config_item *item, \
+ const char *page, size_t len) \
+{ \
+ struct config_group *group = to_config_group(item); \
+ struct epf_ntb *ntb = to_epf_ntb(group); \
+ struct device *dev = &ntb->epf->dev; \
+ int win_no; \
+ u64 val; \
+ int ret; \
+ \
+ ret = kstrtou64(page, 0, &val); \
+ if (ret) \
+ return ret; \
+ \
+ if (sscanf(#_name, "mw%d", &win_no) != 1) \
+ return -EINVAL; \
+ \
+ if (ntb->num_mws < win_no) { \
+ dev_err(dev, "Invalid num_nws: %d value\n", ntb->num_mws); \
+ return -EINVAL; \
+ } \
+ \
+ ntb->mws_size[win_no - 1] = val; \
+ \
+ return len; \
+}
+
+static ssize_t epf_ntb_num_mws_store(struct config_item *item,
+ const char *page, size_t len)
+{
+ struct config_group *group = to_config_group(item);
+ struct epf_ntb *ntb = to_epf_ntb(group);
+ u32 val;
+ int ret;
+
+ ret = kstrtou32(page, 0, &val);
+ if (ret)
+ return ret;
+
+ if (val > MAX_MW)
+ return -EINVAL;
+
+ ntb->num_mws = val;
+
+ return len;
+}
+
+EPF_NTB_R(spad_count)
+EPF_NTB_W(spad_count)
+EPF_NTB_R(db_count)
+EPF_NTB_W(db_count)
+EPF_NTB_R(num_mws)
+EPF_NTB_MW_R(mw1)
+EPF_NTB_MW_W(mw1)
+EPF_NTB_MW_R(mw2)
+EPF_NTB_MW_W(mw2)
+EPF_NTB_MW_R(mw3)
+EPF_NTB_MW_W(mw3)
+EPF_NTB_MW_R(mw4)
+EPF_NTB_MW_W(mw4)
+
+CONFIGFS_ATTR(epf_ntb_, spad_count);
+CONFIGFS_ATTR(epf_ntb_, db_count);
+CONFIGFS_ATTR(epf_ntb_, num_mws);
+CONFIGFS_ATTR(epf_ntb_, mw1);
+CONFIGFS_ATTR(epf_ntb_, mw2);
+CONFIGFS_ATTR(epf_ntb_, mw3);
+CONFIGFS_ATTR(epf_ntb_, mw4);
+
+static struct configfs_attribute *epf_ntb_attrs[] = {
+ &epf_ntb_attr_spad_count,
+ &epf_ntb_attr_db_count,
+ &epf_ntb_attr_num_mws,
+ &epf_ntb_attr_mw1,
+ &epf_ntb_attr_mw2,
+ &epf_ntb_attr_mw3,
+ &epf_ntb_attr_mw4,
+ NULL,
+};
+
+static const struct config_item_type ntb_group_type = {
+ .ct_attrs = epf_ntb_attrs,
+ .ct_owner = THIS_MODULE,
+};
+
+/**
+ * epf_ntb_add_cfs() - Add configfs directory specific to NTB
+ * @epf: NTB endpoint function device
+ * @group: Config group to which the NTB specific directory has to be added
+ *
+ * Add configfs directory specific to NTB. This directory will hold
+ * NTB specific properties like db_count, spad_count, num_mws etc.
+ */
+static struct config_group *epf_ntb_add_cfs(struct pci_epf *epf,
+ struct config_group *group)
+{
+ struct epf_ntb *ntb = epf_get_drvdata(epf);
+ struct config_group *ntb_group = &ntb->group;
+ struct device *dev = &epf->dev;
+
+ config_group_init_type_name(ntb_group, dev_name(dev), &ntb_group_type);
+
+ return ntb_group;
+}
+
+/**
+ * epf_ntb_probe() - Probe NTB function driver
+ * @epf: NTB endpoint function device
+ *
+ * Probe the NTB function driver when the endpoint function bus detects an
+ * NTB endpoint function.
+ */
+static int epf_ntb_probe(struct pci_epf *epf)
+{
+ struct epf_ntb *ntb;
+ struct device *dev;
+
+ dev = &epf->dev;
+
+ ntb = devm_kzalloc(dev, sizeof(*ntb), GFP_KERNEL);
+ if (!ntb)
+ return -ENOMEM;
+
+ epf->header = &epf_ntb_header;
+ ntb->epf = epf;
+ epf_set_drvdata(epf, ntb);
+
+ return 0;
+}
+
+static struct pci_epf_ops epf_ntb_ops = {
+ .bind = epf_ntb_bind,
+ .unbind = epf_ntb_unbind,
+ .add_cfs = epf_ntb_add_cfs,
+};
+
+static const struct pci_epf_device_id epf_ntb_ids[] = {
+ {
+ .name = "pci_epf_ntb",
+ },
+ {},
+};
+
+static struct pci_epf_driver epf_ntb_driver = {
+ .driver.name = "pci_epf_ntb",
+ .probe = epf_ntb_probe,
+ .id_table = epf_ntb_ids,
+ .ops = &epf_ntb_ops,
+ .owner = THIS_MODULE,
+};
+
+static int __init epf_ntb_init(void)
+{
+ int ret;
+
+ kpcintb_workqueue = alloc_workqueue("kpcintb", WQ_MEM_RECLAIM |
+ WQ_HIGHPRI, 0);
+ if (!kpcintb_workqueue) {
+ pr_err("Failed to allocate kpcintb workqueue\n");
+ return -ENOMEM;
+ }
+
+ ret = pci_epf_register_driver(&epf_ntb_driver);
+ if (ret) {
+ pr_err("Failed to register pci epf ntb driver --> %d\n", ret);
+ return ret;
+ }
+
+ return 0;
+}
+module_init(epf_ntb_init);
+
+static void __exit epf_ntb_exit(void)
+{
+ pci_epf_unregister_driver(&epf_ntb_driver);
+}
+module_exit(epf_ntb_exit);
+
+MODULE_DESCRIPTION("PCI EPF NTB DRIVER");
+MODULE_AUTHOR("Kishon Vijay Abraham I <[email protected]>");
+MODULE_LICENSE("GPL v2");
--
2.17.1

2020-09-30 15:40:13

by Kishon Vijay Abraham I

[permalink] [raw]
Subject: [PATCH v7 17/18] Documentation: PCI: Add configfs binding documentation for pci-ntb endpoint function

Add binding documentation for the pci-ntb endpoint function that helps in
adding and configuring the pci-ntb endpoint function.

Signed-off-by: Kishon Vijay Abraham I <[email protected]>
---
.../PCI/endpoint/function/binding/pci-ntb.rst | 38 +++++++++++++++++++
Documentation/PCI/endpoint/index.rst | 1 +
2 files changed, 39 insertions(+)
create mode 100644 Documentation/PCI/endpoint/function/binding/pci-ntb.rst

diff --git a/Documentation/PCI/endpoint/function/binding/pci-ntb.rst b/Documentation/PCI/endpoint/function/binding/pci-ntb.rst
new file mode 100644
index 000000000000..40253d3d5163
--- /dev/null
+++ b/Documentation/PCI/endpoint/function/binding/pci-ntb.rst
@@ -0,0 +1,38 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+==========================
+PCI NTB Endpoint Function
+==========================
+
+1) Create a subdirectory under the pci_epf_ntb directory in configfs.
+
+Standard EPF Configurable Fields:
+
+================ ===========================================================
+vendorid should be 0x104c
+deviceid should be 0xb00d for TI's J721E SoC
+revid don't care
+progif_code don't care
+subclass_code should be 0x00
+baseclass_code should be 0x5
+cache_line_size don't care
+subsys_vendor_id don't care
+subsys_id don't care
+interrupt_pin don't care
+msi_interrupts don't care
+msix_interrupts don't care
+================ ===========================================================
+
+2) Create a subdirectory under the directory created in step 1.
+
+NTB EPF specific configurable fields:
+
+================ ===========================================================
+db_count Number of doorbells; default = 4
+mw1 size of memory window1
+mw2 size of memory window2
+mw3 size of memory window3
+mw4 size of memory window4
+num_mws Number of memory windows; max = 4
+spad_count Number of scratchpad registers; default = 64
+================ ===========================================================
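+
+For example, assuming the directories created in steps 1 and 2 are named
+func1 and pci_epf_ntb.0 respectively (as in
+Documentation/PCI/endpoint/pci-ntb-howto.rst), the fields could be
+configured with::
+
+ # echo 0x104c > functions/pci_epf_ntb/func1/vendorid
+ # echo 0xb00d > functions/pci_epf_ntb/func1/deviceid
+ # echo 4 > functions/pci_epf_ntb/func1/pci_epf_ntb.0/db_count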
diff --git a/Documentation/PCI/endpoint/index.rst b/Documentation/PCI/endpoint/index.rst
index ef6861128506..9cb6e5f3c4d5 100644
--- a/Documentation/PCI/endpoint/index.rst
+++ b/Documentation/PCI/endpoint/index.rst
@@ -14,3 +14,4 @@ PCI Endpoint Framework
pci-ntb-function

function/binding/pci-test
+ function/binding/pci-ntb
--
2.17.1

2020-09-30 15:40:24

by Kishon Vijay Abraham I

[permalink] [raw]
Subject: [PATCH v7 16/18] NTB: tool: Enable the NTB/PCIe link on the local or remote side of bridge

Invoke ntb_link_enable() to enable the NTB/PCIe link on the local
or remote side of the bridge.

Signed-off-by: Kishon Vijay Abraham I <[email protected]>
---
drivers/ntb/test/ntb_tool.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/drivers/ntb/test/ntb_tool.c b/drivers/ntb/test/ntb_tool.c
index b7bf3f863d79..8230ced503e3 100644
--- a/drivers/ntb/test/ntb_tool.c
+++ b/drivers/ntb/test/ntb_tool.c
@@ -1638,6 +1638,7 @@ static int tool_probe(struct ntb_client *self, struct ntb_dev *ntb)

tool_setup_dbgfs(tc);

+ ntb_link_enable(ntb, NTB_SPEED_AUTO, NTB_WIDTH_AUTO);
return 0;

err_clear_mws:
--
2.17.1

2020-09-30 15:41:09

by Kishon Vijay Abraham I

[permalink] [raw]
Subject: [PATCH v7 15/18] NTB: Add support for EPF PCI-Express Non-Transparent Bridge

Add support for EPF PCI-Express Non-Transparent Bridge (NTB) device.
This driver is platform independent and could be used by any platform
which has multiple PCIe endpoint instances configured using the
pci-epf-ntb driver. The driver connects to the standard NTB sub-system
interface. The EPF NTB device has a configurable number of memory windows
(max 4), a configurable number of doorbells (max 32), and a configurable
number of scratchpad registers.

Signed-off-by: Kishon Vijay Abraham I <[email protected]>
---
drivers/ntb/hw/Kconfig | 1 +
drivers/ntb/hw/Makefile | 1 +
drivers/ntb/hw/epf/Kconfig | 6 +
drivers/ntb/hw/epf/Makefile | 1 +
drivers/ntb/hw/epf/ntb_hw_epf.c | 755 ++++++++++++++++++++++++++++++++
5 files changed, 764 insertions(+)
create mode 100644 drivers/ntb/hw/epf/Kconfig
create mode 100644 drivers/ntb/hw/epf/Makefile
create mode 100644 drivers/ntb/hw/epf/ntb_hw_epf.c

diff --git a/drivers/ntb/hw/Kconfig b/drivers/ntb/hw/Kconfig
index e77c587060ff..c325be526b80 100644
--- a/drivers/ntb/hw/Kconfig
+++ b/drivers/ntb/hw/Kconfig
@@ -2,4 +2,5 @@
source "drivers/ntb/hw/amd/Kconfig"
source "drivers/ntb/hw/idt/Kconfig"
source "drivers/ntb/hw/intel/Kconfig"
+source "drivers/ntb/hw/epf/Kconfig"
source "drivers/ntb/hw/mscc/Kconfig"
diff --git a/drivers/ntb/hw/Makefile b/drivers/ntb/hw/Makefile
index 4714d6238845..223ca592b5f9 100644
--- a/drivers/ntb/hw/Makefile
+++ b/drivers/ntb/hw/Makefile
@@ -2,4 +2,5 @@
obj-$(CONFIG_NTB_AMD) += amd/
obj-$(CONFIG_NTB_IDT) += idt/
obj-$(CONFIG_NTB_INTEL) += intel/
+obj-$(CONFIG_NTB_EPF) += epf/
obj-$(CONFIG_NTB_SWITCHTEC) += mscc/
diff --git a/drivers/ntb/hw/epf/Kconfig b/drivers/ntb/hw/epf/Kconfig
new file mode 100644
index 000000000000..6197d1aab344
--- /dev/null
+++ b/drivers/ntb/hw/epf/Kconfig
@@ -0,0 +1,6 @@
+config NTB_EPF
+ tristate "Generic EPF Non-Transparent Bridge support"
+ depends on m
+ help
+ This driver supports EPF NTB on a configurable PCIe endpoint.
+ If unsure, say N.
diff --git a/drivers/ntb/hw/epf/Makefile b/drivers/ntb/hw/epf/Makefile
new file mode 100644
index 000000000000..2f560a422bc6
--- /dev/null
+++ b/drivers/ntb/hw/epf/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_NTB_EPF) += ntb_hw_epf.o
diff --git a/drivers/ntb/hw/epf/ntb_hw_epf.c b/drivers/ntb/hw/epf/ntb_hw_epf.c
new file mode 100644
index 000000000000..0a144987851a
--- /dev/null
+++ b/drivers/ntb/hw/epf/ntb_hw_epf.c
@@ -0,0 +1,755 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Host side endpoint driver to implement Non-Transparent Bridge functionality
+ *
+ * Copyright (C) 2020 Texas Instruments
+ * Author: Kishon Vijay Abraham I <[email protected]>
+ */
+
+#include <linux/delay.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/slab.h>
+#include <linux/ntb.h>
+
+#define NTB_EPF_COMMAND 0x0
+#define CMD_CONFIGURE_DOORBELL 1
+#define CMD_TEARDOWN_DOORBELL 2
+#define CMD_CONFIGURE_MW 3
+#define CMD_TEARDOWN_MW 4
+#define CMD_LINK_UP 5
+#define CMD_LINK_DOWN 6
+
+#define NTB_EPF_ARGUMENT 0x4
+#define MSIX_ENABLE BIT(16)
+
+#define NTB_EPF_CMD_STATUS 0x8
+#define COMMAND_STATUS_OK 1
+#define COMMAND_STATUS_ERROR 2
+
+#define NTB_EPF_LINK_STATUS 0x0A
+#define LINK_STATUS_UP BIT(0)
+
+#define NTB_EPF_TOPOLOGY 0x0C
+#define NTB_EPF_LOWER_ADDR 0x10
+#define NTB_EPF_UPPER_ADDR 0x14
+#define NTB_EPF_LOWER_SIZE 0x18
+#define NTB_EPF_UPPER_SIZE 0x1C
+#define NTB_EPF_MW_COUNT 0x20
+#define NTB_EPF_MW1_OFFSET 0x24
+#define NTB_EPF_SPAD_OFFSET 0x28
+#define NTB_EPF_SPAD_COUNT 0x2C
+#define NTB_EPF_DB_ENTRY_SIZE 0x30
+#define NTB_EPF_DB_DATA(n) (0x34 + (n) * 4)
+#define NTB_EPF_DB_OFFSET(n) (0xB4 + (n) * 4)
+
+#define NTB_EPF_MIN_DB_COUNT 3
+#define NTB_EPF_MAX_DB_COUNT 31
+#define NTB_EPF_MW_OFFSET 2
+
+#define NTB_EPF_COMMAND_TIMEOUT 1000 /* 1 Sec */
+
+enum pci_barno {
+ BAR_0,
+ BAR_1,
+ BAR_2,
+ BAR_3,
+ BAR_4,
+ BAR_5,
+};
+
+struct ntb_epf_dev {
+ struct ntb_dev ntb;
+ struct device *dev;
+ /* Mutex to protect providing commands to NTB EPF */
+ struct mutex cmd_lock;
+
+ enum pci_barno ctrl_reg_bar;
+ enum pci_barno peer_spad_reg_bar;
+ enum pci_barno db_reg_bar;
+
+ unsigned int mw_count;
+ unsigned int spad_count;
+ unsigned int db_count;
+
+ void __iomem *ctrl_reg;
+ void __iomem *db_reg;
+ void __iomem *peer_spad_reg;
+
+ unsigned int self_spad;
+ unsigned int peer_spad;
+
+ int db_val;
+ u64 db_valid_mask;
+};
+
+#define ntb_ndev(__ntb) container_of(__ntb, struct ntb_epf_dev, ntb)
+
+struct ntb_epf_data {
+ /* BAR that contains both control region and self spad region */
+ enum pci_barno ctrl_reg_bar;
+ /* BAR that contains peer spad region */
+ enum pci_barno peer_spad_reg_bar;
+ /* BAR that contains Doorbell region and Memory window '1' */
+ enum pci_barno db_reg_bar;
+};
+
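+/*
+ * ntb_epf_send_command() - Write a command and its argument to the
+ * endpoint's control region and poll NTB_EPF_CMD_STATUS until the
+ * endpoint-side function driver reports COMMAND_STATUS_OK (or
+ * COMMAND_STATUS_ERROR / the 1 s timeout expires). For example, enabling
+ * the link (see ntb_epf_link_enable() below) reduces to:
+ *
+ *	ret = ntb_epf_send_command(ndev, CMD_LINK_UP, 0);
+ */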
+static int ntb_epf_send_command(struct ntb_epf_dev *ndev, u32 command,
+ u32 argument)
+{
+ ktime_t timeout;
+ bool timedout;
+ int ret = 0;
+ u32 status;
+
+ mutex_lock(&ndev->cmd_lock);
+ writel(argument, ndev->ctrl_reg + NTB_EPF_ARGUMENT);
+ writel(command, ndev->ctrl_reg + NTB_EPF_COMMAND);
+
+ timeout = ktime_add_ms(ktime_get(), NTB_EPF_COMMAND_TIMEOUT);
+ while (1) {
+ timedout = ktime_after(ktime_get(), timeout);
+ status = readw(ndev->ctrl_reg + NTB_EPF_CMD_STATUS);
+
+ if (status == COMMAND_STATUS_ERROR) {
+ ret = -EINVAL;
+ break;
+ }
+
+ if (status == COMMAND_STATUS_OK)
+ break;
+
+ if (WARN_ON(timedout)) {
+ ret = -ETIMEDOUT;
+ break;
+ }
+
+ usleep_range(5, 10);
+ }
+
+ writew(0, ndev->ctrl_reg + NTB_EPF_CMD_STATUS);
+ mutex_unlock(&ndev->cmd_lock);
+
+ return ret;
+}
+
+static int ntb_epf_mw_to_bar(struct ntb_epf_dev *ndev, int idx)
+{
+ struct device *dev = ndev->dev;
+
+ if (idx < 0 || idx > ndev->mw_count) {
+ dev_err(dev, "Unsupported Memory Window index %d\n", idx);
+ return -EINVAL;
+ }
+
+ return idx + 2;
+}
+
+static int ntb_epf_mw_count(struct ntb_dev *ntb, int pidx)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+ struct device *dev = ndev->dev;
+
+ if (pidx != NTB_DEF_PEER_IDX) {
+ dev_err(dev, "Unsupported Peer ID %d\n", pidx);
+ return -EINVAL;
+ }
+
+ return ndev->mw_count;
+}
+
+static int ntb_epf_mw_get_align(struct ntb_dev *ntb, int pidx, int idx,
+ resource_size_t *addr_align,
+ resource_size_t *size_align,
+ resource_size_t *size_max)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+ struct device *dev = ndev->dev;
+ int bar;
+
+ if (pidx != NTB_DEF_PEER_IDX) {
+ dev_err(dev, "Unsupported Peer ID %d\n", pidx);
+ return -EINVAL;
+ }
+
+ bar = ntb_epf_mw_to_bar(ndev, idx);
+ if (bar < 0)
+ return bar;
+
+ if (addr_align)
+ *addr_align = SZ_4K;
+
+ if (size_align)
+ *size_align = 1;
+
+ if (size_max)
+ *size_max = pci_resource_len(ndev->ntb.pdev, bar);
+
+ return 0;
+}
+
+static u64 ntb_epf_link_is_up(struct ntb_dev *ntb,
+ enum ntb_speed *speed,
+ enum ntb_width *width)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+ u32 status;
+
+ status = readw(ndev->ctrl_reg + NTB_EPF_LINK_STATUS);
+
+ return status & LINK_STATUS_UP;
+}
+
+static u32 ntb_epf_spad_read(struct ntb_dev *ntb, int idx)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+ struct device *dev = ndev->dev;
+ u32 offset;
+
+ if (idx < 0 || idx >= ndev->spad_count) {
+ dev_err(dev, "READ: Invalid ScratchPad Index %d\n", idx);
+ return 0;
+ }
+
+ offset = readl(ndev->ctrl_reg + NTB_EPF_SPAD_OFFSET);
+ offset += (idx << 2);
+
+ return readl(ndev->ctrl_reg + offset);
+}
+
+static int ntb_epf_spad_write(struct ntb_dev *ntb,
+ int idx, u32 val)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+ struct device *dev = ndev->dev;
+ u32 offset;
+
+ if (idx < 0 || idx >= ndev->spad_count) {
+ dev_err(dev, "WRITE: Invalid ScratchPad Index %d\n", idx);
+ return -EINVAL;
+ }
+
+ offset = readl(ndev->ctrl_reg + NTB_EPF_SPAD_OFFSET);
+ offset += (idx << 2);
+ writel(val, ndev->ctrl_reg + offset);
+
+ return 0;
+}
+
+static u32 ntb_epf_peer_spad_read(struct ntb_dev *ntb, int pidx, int idx)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+ struct device *dev = ndev->dev;
+ u32 offset;
+
+ if (pidx != NTB_DEF_PEER_IDX) {
+ dev_err(dev, "Unsupported Peer ID %d\n", pidx);
+ return 0;
+ }
+
+ if (idx < 0 || idx >= ndev->spad_count) {
+ dev_err(dev, "READ: Invalid Peer ScratchPad Index %d\n", idx);
+ return 0;
+ }
+
+ offset = (idx << 2);
+ return readl(ndev->peer_spad_reg + offset);
+}
+
+static int ntb_epf_peer_spad_write(struct ntb_dev *ntb, int pidx,
+ int idx, u32 val)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+ struct device *dev = ndev->dev;
+ u32 offset;
+
+ if (pidx != NTB_DEF_PEER_IDX) {
+ dev_err(dev, "Unsupported Peer ID %d\n", pidx);
+ return -EINVAL;
+ }
+
+ if (idx < 0 || idx >= ndev->spad_count) {
+ dev_err(dev, "WRITE: Invalid Peer ScratchPad Index %d\n", idx);
+ return -EINVAL;
+ }
+
+ offset = (idx << 2);
+ writel(val, ndev->peer_spad_reg + offset);
+
+ return 0;
+}
+
+static int ntb_epf_link_enable(struct ntb_dev *ntb,
+ enum ntb_speed max_speed,
+ enum ntb_width max_width)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+ struct device *dev = ndev->dev;
+ int ret;
+
+ ret = ntb_epf_send_command(ndev, CMD_LINK_UP, 0);
+ if (ret) {
+ dev_err(dev, "Fail to enable link\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+static int ntb_epf_link_disable(struct ntb_dev *ntb)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+ struct device *dev = ndev->dev;
+ int ret;
+
+ ret = ntb_epf_send_command(ndev, CMD_LINK_DOWN, 0);
+ if (ret) {
+ dev_err(dev, "Fail to disable link\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+static irqreturn_t ntb_epf_vec_isr(int irq, void *dev)
+{
+ struct ntb_epf_dev *ndev = dev;
+ int irq_no;
+
+ irq_no = irq - pci_irq_vector(ndev->ntb.pdev, 0);
+ ndev->db_val = irq_no + 1;
+
+ if (irq_no == 0)
+ ntb_link_event(&ndev->ntb);
+ else
+ ntb_db_event(&ndev->ntb, irq_no);
+
+ return IRQ_HANDLED;
+}
+
+static int ntb_epf_init_isr(struct ntb_epf_dev *ndev, int msi_min, int msi_max)
+{
+ struct pci_dev *pdev = ndev->ntb.pdev;
+ struct device *dev = ndev->dev;
+ u32 argument = MSIX_ENABLE;
+ int irq;
+ int ret;
+ int i;
+
+ irq = pci_alloc_irq_vectors(pdev, msi_min, msi_max, PCI_IRQ_MSIX);
+ if (irq < 0) {
+ dev_dbg(dev, "Failed to get MSIX interrupts\n");
+ irq = pci_alloc_irq_vectors(pdev, msi_min, msi_max,
+ PCI_IRQ_MSI);
+ if (irq < 0) {
+ dev_err(dev, "Failed to get MSI interrupts\n");
+ return irq;
+ }
+ argument &= ~MSIX_ENABLE;
+ }
+
+ for (i = 0; i < irq; i++) {
+ ret = devm_request_irq(&pdev->dev, pci_irq_vector(pdev, i),
+ ntb_epf_vec_isr, 0, "ntb_epf", ndev);
+ if (ret) {
+ dev_err(dev, "Failed to request irq\n");
+ goto err_request_irq;
+ }
+ }
+
+ ndev->db_count = irq - 1;
+
+ ret = ntb_epf_send_command(ndev, CMD_CONFIGURE_DOORBELL,
+ argument | irq);
+ if (ret) {
+ dev_err(dev, "Failed to configure doorbell\n");
+ goto err_configure_db;
+ }
+
+ return 0;
+
+err_configure_db:
+ for (i = 0; i < ndev->db_count + 1; i++)
+ devm_free_irq(dev, pci_irq_vector(pdev, i), ndev);
+
+err_request_irq:
+ pci_free_irq_vectors(pdev);
+
+ return ret;
+}
+
+static int ntb_epf_peer_mw_count(struct ntb_dev *ntb)
+{
+ return ntb_ndev(ntb)->mw_count;
+}
+
+static int ntb_epf_spad_count(struct ntb_dev *ntb)
+{
+ return ntb_ndev(ntb)->spad_count;
+}
+
+static u64 ntb_epf_db_valid_mask(struct ntb_dev *ntb)
+{
+ return ntb_ndev(ntb)->db_valid_mask;
+}
+
+static int ntb_epf_db_set_mask(struct ntb_dev *ntb, u64 db_bits)
+{
+ return 0;
+}
+
+static int ntb_epf_mw_set_trans(struct ntb_dev *ntb, int pidx, int idx,
+ dma_addr_t addr, resource_size_t size)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+ struct device *dev = ndev->dev;
+ resource_size_t mw_size;
+ int bar;
+
+ if (pidx != NTB_DEF_PEER_IDX) {
+ dev_err(dev, "Unsupported Peer ID %d\n", pidx);
+ return -EINVAL;
+ }
+
+ bar = idx + NTB_EPF_MW_OFFSET;
+
+ mw_size = pci_resource_len(ntb->pdev, bar);
+
+ if (size > mw_size) {
+ dev_err(dev, "Size:%pa is greater than the MW size %pa\n",
+ &size, &mw_size);
+ return -EINVAL;
+ }
+
+ writel(lower_32_bits(addr), ndev->ctrl_reg + NTB_EPF_LOWER_ADDR);
+ writel(upper_32_bits(addr), ndev->ctrl_reg + NTB_EPF_UPPER_ADDR);
+ writel(lower_32_bits(size), ndev->ctrl_reg + NTB_EPF_LOWER_SIZE);
+ writel(upper_32_bits(size), ndev->ctrl_reg + NTB_EPF_UPPER_SIZE);
+ ntb_epf_send_command(ndev, CMD_CONFIGURE_MW, idx);
+
+ return 0;
+}
+
+static int ntb_epf_mw_clear_trans(struct ntb_dev *ntb, int pidx, int idx)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+ struct device *dev = ndev->dev;
+ int ret;
+
+ ret = ntb_epf_send_command(ndev, CMD_TEARDOWN_MW, idx);
+ if (ret)
+ dev_err(dev, "Failed to teardown memory window\n");
+
+ return ret;
+}
+
+static int ntb_epf_peer_mw_get_addr(struct ntb_dev *ntb, int idx,
+ phys_addr_t *base, resource_size_t *size)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+ u32 offset = 0;
+ int bar;
+
+ if (idx == 0)
+ offset = readl(ndev->ctrl_reg + NTB_EPF_MW1_OFFSET);
+
+ bar = idx + NTB_EPF_MW_OFFSET;
+
+ if (base)
+ *base = pci_resource_start(ndev->ntb.pdev, bar) + offset;
+
+ if (size)
+ *size = pci_resource_len(ndev->ntb.pdev, bar) - offset;
+
+ return 0;
+}
+
+static int ntb_epf_peer_db_set(struct ntb_dev *ntb, u64 db_bits)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+ u32 interrupt_num = ffs(db_bits) + 1;
+ struct device *dev = ndev->dev;
+ u32 db_entry_size;
+ u32 db_offset;
+ u32 db_data;
+
+ if (interrupt_num > ndev->db_count) {
+ dev_err(dev, "DB interrupt %d greater than Max Supported %d\n",
+ interrupt_num, ndev->db_count);
+ return -EINVAL;
+ }
+
+ db_entry_size = readl(ndev->ctrl_reg + NTB_EPF_DB_ENTRY_SIZE);
+
+ db_data = readl(ndev->ctrl_reg + NTB_EPF_DB_DATA(interrupt_num));
+ db_offset = readl(ndev->ctrl_reg + NTB_EPF_DB_OFFSET(interrupt_num));
+ writel(db_data, ndev->db_reg + (db_entry_size * interrupt_num) +
+ db_offset);
+
+ return 0;
+}
+
+static u64 ntb_epf_db_read(struct ntb_dev *ntb)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+
+ return ndev->db_val;
+}
+
+static int ntb_epf_db_clear_mask(struct ntb_dev *ntb, u64 db_bits)
+{
+ return 0;
+}
+
+static int ntb_epf_db_clear(struct ntb_dev *ntb, u64 db_bits)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+
+ ndev->db_val = 0;
+
+ return 0;
+}
+
+static const struct ntb_dev_ops ntb_epf_ops = {
+ .mw_count = ntb_epf_mw_count,
+ .spad_count = ntb_epf_spad_count,
+ .peer_mw_count = ntb_epf_peer_mw_count,
+ .db_valid_mask = ntb_epf_db_valid_mask,
+ .db_set_mask = ntb_epf_db_set_mask,
+ .mw_set_trans = ntb_epf_mw_set_trans,
+ .mw_clear_trans = ntb_epf_mw_clear_trans,
+ .peer_mw_get_addr = ntb_epf_peer_mw_get_addr,
+ .link_enable = ntb_epf_link_enable,
+ .spad_read = ntb_epf_spad_read,
+ .spad_write = ntb_epf_spad_write,
+ .peer_spad_read = ntb_epf_peer_spad_read,
+ .peer_spad_write = ntb_epf_peer_spad_write,
+ .peer_db_set = ntb_epf_peer_db_set,
+ .db_read = ntb_epf_db_read,
+ .mw_get_align = ntb_epf_mw_get_align,
+ .link_is_up = ntb_epf_link_is_up,
+ .db_clear_mask = ntb_epf_db_clear_mask,
+ .db_clear = ntb_epf_db_clear,
+ .link_disable = ntb_epf_link_disable,
+};
+
+static inline void ntb_epf_init_struct(struct ntb_epf_dev *ndev,
+ struct pci_dev *pdev)
+{
+ ndev->ntb.pdev = pdev;
+ ndev->ntb.topo = NTB_TOPO_NONE;
+ ndev->ntb.ops = &ntb_epf_ops;
+}
+
+static int ntb_epf_init_dev(struct ntb_epf_dev *ndev)
+{
+ struct device *dev = ndev->dev;
+ int ret;
+
+ /* One Link interrupt and rest doorbell interrupt */
+ ret = ntb_epf_init_isr(ndev, NTB_EPF_MIN_DB_COUNT + 1,
+ NTB_EPF_MAX_DB_COUNT + 1);
+ if (ret) {
+ dev_err(dev, "Failed to init ISR\n");
+ return ret;
+ }
+
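+ /* e.g. db_count = 4 gives db_valid_mask = 0xf (doorbells 0-3) */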
+ ndev->db_valid_mask = BIT_ULL(ndev->db_count) - 1;
+ ndev->mw_count = readl(ndev->ctrl_reg + NTB_EPF_MW_COUNT);
+ ndev->spad_count = readl(ndev->ctrl_reg + NTB_EPF_SPAD_COUNT);
+
+ return 0;
+}
+
+static int ntb_epf_init_pci(struct ntb_epf_dev *ndev,
+ struct pci_dev *pdev)
+{
+ struct device *dev = ndev->dev;
+ int ret;
+
+ pci_set_drvdata(pdev, ndev);
+
+ ret = pci_enable_device(pdev);
+ if (ret) {
+ dev_err(dev, "Cannot enable PCI device\n");
+ goto err_pci_enable;
+ }
+
+ ret = pci_request_regions(pdev, "ntb");
+ if (ret) {
+ dev_err(dev, "Cannot obtain PCI resources\n");
+ goto err_pci_regions;
+ }
+
+ pci_set_master(pdev);
+
+ ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
+ if (ret) {
+ ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
+ if (ret) {
+ dev_err(dev, "Cannot set DMA mask\n");
+ goto err_dma_mask;
+ }
+ dev_warn(&pdev->dev, "Cannot DMA highmem\n");
+ }
+
+ ndev->ctrl_reg = pci_iomap(pdev, 0, 0);
+ if (!ndev->ctrl_reg) {
+ ret = -EIO;
+ goto err_dma_mask;
+ }
+
+ ndev->peer_spad_reg = pci_iomap(pdev, 1, 0);
+ if (!ndev->peer_spad_reg) {
+ ret = -EIO;
+ goto err_dma_mask;
+ }
+
+ ndev->db_reg = pci_iomap(pdev, 2, 0);
+ if (!ndev->db_reg) {
+ ret = -EIO;
+ goto err_dma_mask;
+ }
+
+ return 0;
+
+err_dma_mask:
+ pci_clear_master(pdev);
+ pci_release_regions(pdev);
+
+err_pci_regions:
+ pci_disable_device(pdev);
+
+err_pci_enable:
+ pci_set_drvdata(pdev, NULL);
+
+ return ret;
+}
+
+static void ntb_epf_deinit_pci(struct ntb_epf_dev *ndev)
+{
+ struct pci_dev *pdev = ndev->ntb.pdev;
+
+ pci_iounmap(pdev, ndev->ctrl_reg);
+ pci_iounmap(pdev, ndev->peer_spad_reg);
+ pci_iounmap(pdev, ndev->db_reg);
+
+ pci_clear_master(pdev);
+ pci_release_regions(pdev);
+ pci_disable_device(pdev);
+ pci_set_drvdata(pdev, NULL);
+}
+
+static void ntb_epf_cleanup_isr(struct ntb_epf_dev *ndev)
+{
+ struct pci_dev *pdev = ndev->ntb.pdev;
+ struct device *dev = &pdev->dev;
+ int i;
+
+ ntb_epf_send_command(ndev, CMD_TEARDOWN_DOORBELL, ndev->db_count + 1);
+
+ for (i = 0; i < ndev->db_count + 1; i++)
+ devm_free_irq(dev, pci_irq_vector(pdev, i), ndev);
+ pci_free_irq_vectors(pdev);
+}
+
+static int ntb_epf_pci_probe(struct pci_dev *pdev,
+ const struct pci_device_id *id)
+{
+ enum pci_barno peer_spad_reg_bar = BAR_1;
+ enum pci_barno ctrl_reg_bar = BAR_0;
+ enum pci_barno db_reg_bar = BAR_2;
+ struct device *dev = &pdev->dev;
+ struct ntb_epf_data *data;
+ struct ntb_epf_dev *ndev;
+ int ret;
+
+ if (pci_is_bridge(pdev))
+ return -ENODEV;
+
+ ndev = devm_kzalloc(dev, sizeof(*ndev), GFP_KERNEL);
+ if (!ndev)
+ return -ENOMEM;
+
+ data = (struct ntb_epf_data *)id->driver_data;
+ if (data) {
+ if (data->peer_spad_reg_bar)
+ peer_spad_reg_bar = data->peer_spad_reg_bar;
+ if (data->ctrl_reg_bar)
+ ctrl_reg_bar = data->ctrl_reg_bar;
+ if (data->db_reg_bar)
+ db_reg_bar = data->db_reg_bar;
+ }
+
+ ndev->peer_spad_reg_bar = peer_spad_reg_bar;
+ ndev->ctrl_reg_bar = ctrl_reg_bar;
+ ndev->db_reg_bar = db_reg_bar;
+ ndev->dev = dev;
+
+ ntb_epf_init_struct(ndev, pdev);
+ mutex_init(&ndev->cmd_lock);
+
+ ret = ntb_epf_init_pci(ndev, pdev);
+ if (ret) {
+ dev_err(dev, "Failed to init PCI\n");
+ return ret;
+ }
+
+ ret = ntb_epf_init_dev(ndev);
+ if (ret) {
+ dev_err(dev, "Failed to init device\n");
+ goto err_init_dev;
+ }
+
+ ret = ntb_register_device(&ndev->ntb);
+ if (ret) {
+ dev_err(dev, "Failed to register NTB device\n");
+ goto err_register_dev;
+ }
+
+ return 0;
+
+err_register_dev:
+ ntb_epf_cleanup_isr(ndev);
+
+err_init_dev:
+ ntb_epf_deinit_pci(ndev);
+
+ return ret;
+}
+
+static void ntb_epf_pci_remove(struct pci_dev *pdev)
+{
+ struct ntb_epf_dev *ndev = pci_get_drvdata(pdev);
+
+ ntb_unregister_device(&ndev->ntb);
+ ntb_epf_cleanup_isr(ndev);
+ ntb_epf_deinit_pci(ndev);
+}
+
+static const struct ntb_epf_data j721e_data = {
+ .ctrl_reg_bar = BAR_0,
+ .peer_spad_reg_bar = BAR_1,
+ .db_reg_bar = BAR_2,
+};
+
+static const struct pci_device_id ntb_epf_pci_tbl[] = {
+ {
+ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_J721E),
+ .class = PCI_CLASS_MEMORY_RAM << 8, .class_mask = 0xffff00,
+ .driver_data = (kernel_ulong_t)&j721e_data,
+ },
+ { },
+};
+
+static struct pci_driver ntb_epf_pci_driver = {
+ .name = KBUILD_MODNAME,
+ .id_table = ntb_epf_pci_tbl,
+ .probe = ntb_epf_pci_probe,
+ .remove = ntb_epf_pci_remove,
+};
+module_pci_driver(ntb_epf_pci_driver);
+
+MODULE_DESCRIPTION("PCI ENDPOINT NTB HOST DRIVER");
+MODULE_AUTHOR("Kishon Vijay Abraham I <[email protected]>");
+MODULE_LICENSE("GPL v2");
--
2.17.1

2020-09-30 15:41:31

by Kishon Vijay Abraham I

[permalink] [raw]
Subject: [PATCH v7 11/18] PCI: cadence: Implement ->msi_map_irq() ops

Implement ->msi_map_irq() ops in order to map physical address to
MSI address and return MSI data.

Signed-off-by: Kishon Vijay Abraham I <[email protected]>
---
.../pci/controller/cadence/pcie-cadence-ep.c | 53 +++++++++++++++++++
1 file changed, 53 insertions(+)

diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c
index 254a3e1eff50..5df492a12042 100644
--- a/drivers/pci/controller/cadence/pcie-cadence-ep.c
+++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c
@@ -383,6 +383,57 @@ static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn,
return 0;
}

+static int cdns_pcie_ep_map_msi_irq(struct pci_epc *epc, u8 fn,
+ phys_addr_t addr, u8 interrupt_num,
+ u32 entry_size, u32 *msi_data,
+ u32 *msi_addr_offset)
+{
+ struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+ u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
+ struct cdns_pcie *pcie = &ep->pcie;
+ u64 pci_addr, pci_addr_mask = 0xff;
+ u16 flags, mme, data, data_mask;
+ u8 msi_count;
+ int ret;
+ int i;
+
+ /* Check whether the MSI feature has been enabled by the PCI host. */
+ flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
+ if (!(flags & PCI_MSI_FLAGS_ENABLE))
+ return -EINVAL;
+
+ /* Get the number of enabled MSIs */
+ mme = (flags & PCI_MSI_FLAGS_QSIZE) >> 4;
+ msi_count = 1 << mme;
+ if (!interrupt_num || interrupt_num > msi_count)
+ return -EINVAL;
+
+ /* Compute the data value to be written. */
+ data_mask = msi_count - 1;
+ data = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_DATA_64);
+ data = data & ~data_mask;
+
+ /* Get the PCI address where to write the data into. */
+ pci_addr = cdns_pcie_ep_fn_readl(pcie, fn, cap + PCI_MSI_ADDRESS_HI);
+ pci_addr <<= 32;
+ pci_addr |= cdns_pcie_ep_fn_readl(pcie, fn, cap + PCI_MSI_ADDRESS_LO);
+ pci_addr &= GENMASK_ULL(63, 2);
+
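+ /*
+  * Map each of the 'interrupt_num' doorbell entries (spaced 'entry_size'
+  * apart, starting at 'addr') to the 256-byte-aligned MSI address; the
+  * data to write is returned in *msi_data and the offset of the MSI
+  * address within the aligned window in *msi_addr_offset.
+  */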
+ for (i = 0; i < interrupt_num; i++) {
+ ret = cdns_pcie_ep_map_addr(epc, fn, addr,
+ pci_addr & ~pci_addr_mask,
+ entry_size);
+ if (ret)
+ return ret;
+ addr = addr + entry_size;
+ }
+
+ *msi_data = data;
+ *msi_addr_offset = pci_addr & pci_addr_mask;
+
+ return 0;
+}
+
static int cdns_pcie_ep_send_msix_irq(struct cdns_pcie_ep *ep, u8 fn,
u16 interrupt_num)
{
@@ -482,6 +533,7 @@ static const struct pci_epc_features cdns_pcie_epc_features = {
.linkup_notifier = false,
.msi_capable = true,
.msix_capable = true,
+ .align = 256,
};

static const struct pci_epc_features*
@@ -501,6 +553,7 @@ static const struct pci_epc_ops cdns_pcie_epc_ops = {
.set_msix = cdns_pcie_ep_set_msix,
.get_msix = cdns_pcie_ep_get_msix,
.raise_irq = cdns_pcie_ep_raise_irq,
+ .map_msi_irq = cdns_pcie_ep_map_msi_irq,
.start = cdns_pcie_ep_start,
.get_features = cdns_pcie_ep_get_features,
};
--
2.17.1

2020-09-30 15:41:48

by Kishon Vijay Abraham I

[permalink] [raw]
Subject: [PATCH v7 18/18] Documentation: PCI: Add userguide for PCI endpoint NTB function

Add documentation to help users use pci-epf-ntb function driver and
existing host side NTB infrastructure for NTB functionality.

Signed-off-by: Kishon Vijay Abraham I <[email protected]>
Reviewed-by: Randy Dunlap <[email protected]>
---
Documentation/PCI/endpoint/index.rst | 1 +
Documentation/PCI/endpoint/pci-ntb-howto.rst | 160 +++++++++++++++++++
2 files changed, 161 insertions(+)
create mode 100644 Documentation/PCI/endpoint/pci-ntb-howto.rst

diff --git a/Documentation/PCI/endpoint/index.rst b/Documentation/PCI/endpoint/index.rst
index 9cb6e5f3c4d5..38ea1f604b6d 100644
--- a/Documentation/PCI/endpoint/index.rst
+++ b/Documentation/PCI/endpoint/index.rst
@@ -12,6 +12,7 @@ PCI Endpoint Framework
    pci-test-function
    pci-test-howto
    pci-ntb-function
+   pci-ntb-howto
 
    function/binding/pci-test
    function/binding/pci-ntb
diff --git a/Documentation/PCI/endpoint/pci-ntb-howto.rst b/Documentation/PCI/endpoint/pci-ntb-howto.rst
new file mode 100644
index 000000000000..b6e1073c9a39
--- /dev/null
+++ b/Documentation/PCI/endpoint/pci-ntb-howto.rst
@@ -0,0 +1,160 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===================================================================
+PCI Non-Transparent Bridge (NTB) Endpoint Function (EPF) User Guide
+===================================================================
+
+:Author: Kishon Vijay Abraham I <[email protected]>
+
+This document is a guide to help users use the pci-epf-ntb function driver
+and the ntb_hw_epf host driver for NTB functionality. The list of steps to
+be followed on the host side and the EP side is given below. For the hardware
+configuration and internals of NTB using configurable endpoints, see
+Documentation/PCI/endpoint/pci-ntb-function.rst.
+
+Endpoint Device
+===============
+
+Endpoint Controller Devices
+---------------------------
+
+For implementing NTB functionality, at least two endpoint controller devices
+are required.
+To find the list of endpoint controller devices in the system::
+
+ # ls /sys/class/pci_epc/
+ 2900000.pcie-ep 2910000.pcie-ep
+
+If PCI_ENDPOINT_CONFIGFS is enabled::
+
+ # ls /sys/kernel/config/pci_ep/controllers
+ 2900000.pcie-ep 2910000.pcie-ep
+
+
+Endpoint Function Drivers
+-------------------------
+
+To find the list of endpoint function drivers in the system::
+
+ # ls /sys/bus/pci-epf/drivers
+ pci_epf_ntb
+
+If PCI_ENDPOINT_CONFIGFS is enabled::
+
+ # ls /sys/kernel/config/pci_ep/functions
+ pci_epf_ntb
+
+
+Creating pci-epf-ntb Device
+----------------------------
+
+A PCI endpoint function device can be created using configfs. To create a
+pci-epf-ntb device, the following commands can be used::
+
+ # mount -t configfs none /sys/kernel/config
+ # cd /sys/kernel/config/pci_ep/
+ # mkdir functions/pci_epf_ntb/func1
+
+The "mkdir func1" above creates the pci-epf-ntb function device that will
+be probed by pci_epf_ntb driver.
+
+The PCI endpoint framework populates the directory with the following
+configurable fields::
+
+ # ls functions/pci_epf_ntb/func1
+ baseclass_code    deviceid          msi_interrupts    pci-epf-ntb.0
+ progif_code       secondary         subsys_id         vendorid
+ cache_line_size   interrupt_pin     msix_interrupts   primary
+ revid             subclass_code     subsys_vendor_id
+
+The PCI endpoint function driver populates these entries with default values
+when the device is bound to the driver. The pci-epf-ntb driver populates
+vendorid with 0xffff and interrupt_pin with 0x0001::
+
+ # cat functions/pci_epf_ntb/func1/vendorid
+ 0xffff
+ # cat functions/pci_epf_ntb/func1/interrupt_pin
+ 0x0001
+
+
+Configuring pci-epf-ntb Device
+-------------------------------
+
+The user can configure the pci-epf-ntb device using its configfs entry. In order
+to change the vendorid and the deviceid, the following
+commands can be used::
+
+ # echo 0x104c > functions/pci_epf_ntb/func1/vendorid
+ # echo 0xb00d > functions/pci_epf_ntb/func1/deviceid
+
+In order to configure NTB specific attributes, a new sub-directory of func1
+should be created::
+
+ # mkdir functions/pci_epf_ntb/func1/pci_epf_ntb.0/
+
+The NTB function driver will populate this directory with various attributes
+that can be configured by the user::
+
+ # ls functions/pci_epf_ntb/func1/pci_epf_ntb.0/
+ db_count mw1 mw2 mw3 mw4 num_mws
+ spad_count
+
+A sample configuration for NTB function is given below::
+
+ # echo 4 > functions/pci_epf_ntb/func1/pci_epf_ntb.0/db_count
+ # echo 128 > functions/pci_epf_ntb/func1/pci_epf_ntb.0/spad_count
+ # echo 2 > functions/pci_epf_ntb/func1/pci_epf_ntb.0/num_mws
+ # echo 0x100000 > functions/pci_epf_ntb/func1/pci_epf_ntb.0/mw1
+ # echo 0x100000 > functions/pci_epf_ntb/func1/pci_epf_ntb.0/mw2
+
+Binding pci-epf-ntb Device to EP Controller
+--------------------------------------------
+
+The NTB function device should be attached to two PCIe endpoint controllers
+connected to the two hosts. Use the 'primary' and 'secondary' entries
+inside the NTB function device to attach one PCIe endpoint controller to
+the primary interface and the other PCIe endpoint controller to the
+secondary interface::
+
+ # ln -s controllers/2900000.pcie-ep/ functions/pci_epf_ntb/func1/primary
+ # ln -s controllers/2910000.pcie-ep/ functions/pci_epf_ntb/func1/secondary
+
+Once the above step is completed, both PCI endpoint controllers are ready to
+establish a link with their respective hosts.
+
+
+Start the Link
+--------------
+
+In order for the endpoint device to establish a link with the host, the 'start'
+field should be populated with '1'. For NTB, both PCIe endpoint controllers
+should establish a link with their respective hosts::
+
+ # echo 1 > controllers/2900000.pcie-ep/start
+ # echo 1 > controllers/2910000.pcie-ep/start
+
+
+RootComplex Device
+==================
+
+lspci Output
+------------
+
+Note that the devices listed here correspond to the values populated in the
+"Configuring pci-epf-ntb Device" section above::
+
+ # lspci
+ 0000:00:00.0 PCI bridge: Texas Instruments Device b00d
+ 0000:01:00.0 RAM memory: Texas Instruments Device b00d
+
+
+Using ntb_hw_epf Device
+-----------------------
+
+The host side software follows the standard NTB software architecture in Linux.
+All the existing client side NTB utilities, such as NTB Transport Client, NTB
+Netdev, NTB Ping Pong Test Client, and NTB Tool Test Client, can be used with
+the NTB function device.
+
+For more information on NTB, see
+:doc:`Non-Transparent Bridge <../../driver-api/ntb>`
--
2.17.1
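
Since ntb_hw_epf plugs into the common NTB core, a host side client binds to
it exactly as it would to any other NTB hardware driver. A minimal sketch of
such a client (names are illustrative, not part of this series):

	#include <linux/module.h>
	#include <linux/ntb.h>

	static int sketch_probe(struct ntb_client *client, struct ntb_dev *ntb)
	{
		/* Negotiate memory windows, enable the link, etc. */
		dev_info(&ntb->dev, "NTB device bound\n");
		return 0;
	}

	static void sketch_remove(struct ntb_client *client, struct ntb_dev *ntb)
	{
		/* Tear down whatever probe set up. */
	}

	static struct ntb_client sketch_client = {
		.ops = {
			.probe = sketch_probe,
			.remove = sketch_remove,
		},
	};

	static int __init sketch_init(void)
	{
		return ntb_register_client(&sketch_client);
	}
	module_init(sketch_init);

	static void __exit sketch_exit(void)
	{
		ntb_unregister_client(&sketch_client);
	}
	module_exit(sketch_exit);

	MODULE_LICENSE("GPL");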

2020-10-05 05:58:53

by Kishon Vijay Abraham I

[permalink] [raw]
Subject: Re: [PATCH v7 00/18] Implement NTB Controller using multiple PCI EP

Hi Jon Mason, Allen Hubbe, Dave Jiang,

On 30/09/20 9:05 pm, Kishon Vijay Abraham I wrote:
> This series is about implementing SW defined Non-Transparent Bridge (NTB)
> using multiple endpoint (EP) instances. This series has been tested using
> 2 endpoint instances in J7 connected to J7 board on one end and DRA7 board
> on the other end. However there is nothing platform specific for the NTB
> functionality.

This series has two patches that add to the drivers/ntb/ directory:
[PATCH v7 15/18] NTB: Add support for EPF PCI-Express Non-Transparent
Bridge and [PATCH v7 16/18] NTB: tool: Enable the NTB/PCIe link on the
local or remote side of bridge.

If you can review and Ack the above patches, Lorenzo can queue them along
with the rest of the series.

Thanks for your help in advance.

Best Regards,
Kishon

2020-10-20 13:20:40

by Lorenzo Pieralisi

[permalink] [raw]
Subject: Re: [PATCH v7 00/18] Implement NTB Controller using multiple PCI EP

On Tue, Oct 20, 2020 at 01:45:45PM +0530, Kishon Vijay Abraham I wrote:
> Hi,
>
> On 05/10/20 11:27 am, Kishon Vijay Abraham I wrote:
> > Hi Jon Mason, Allen Hubbe, Dave Jiang,
> >
> > On 30/09/20 9:05 pm, Kishon Vijay Abraham I wrote:
> >> This series is about implementing SW defined Non-Transparent Bridge (NTB)
> >> using multiple endpoint (EP) instances. This series has been tested using
> >> 2 endpoint instances in J7 connected to J7 board on one end and DRA7 board
> >> on the other end. However there is nothing platform specific for the NTB
> >> functionality.
> >
> > This series has two patches that add to the drivers/ntb/ directory:
> > [PATCH v7 15/18] NTB: Add support for EPF PCI-Express Non-Transparent
> > Bridge and [PATCH v7 16/18] NTB: tool: Enable the NTB/PCIe link on the
> > local or remote side of bridge.
> >
> > If you can review and Ack the above patches, Lorenzo can queue them along
> > with the rest of the series.
> >
> > Thanks for your help in advance.
>
> Gentle ping on this series.

I am not queueing any more patches for this merge window - we postpone
this series to v5.11 and in the interim it would be good to define some
possible users.

Thanks,
Lorenzo


2020-10-20 21:24:53

by Kishon Vijay Abraham I

[permalink] [raw]
Subject: Re: [PATCH v7 00/18] Implement NTB Controller using multiple PCI EP

Hi,

On 05/10/20 11:27 am, Kishon Vijay Abraham I wrote:
> Hi Jon Mason, Allen Hubbe, Dave Jiang,
>
> On 30/09/20 9:05 pm, Kishon Vijay Abraham I wrote:
>> This series is about implementing SW defined Non-Transparent Bridge (NTB)
>> using multiple endpoint (EP) instances. This series has been tested using
>> 2 endpoint instances in J7 connected to J7 board on one end and DRA7 board
>> on the other end. However there is nothing platform specific for the NTB
>> functionality.
>
> This series has two patches that add to the drivers/ntb/ directory:
> [PATCH v7 15/18] NTB: Add support for EPF PCI-Express Non-Transparent
> Bridge and [PATCH v7 16/18] NTB: tool: Enable the NTB/PCIe link on the
> local or remote side of bridge.
>
> If you can review and Ack the above patches, Lorenzo can queue them along
> with the rest of the series.
>
> Thanks for your help in advance.

Gentle ping on this series.

Thanks
Kishon

2020-11-03 08:00:38

by Kishon Vijay Abraham I

[permalink] [raw]
Subject: Re: [PATCH v7 00/18] Implement NTB Controller using multiple PCI EP

+Alan

Hi Jon Mason, Allen Hubbe, Dave Jiang,

On 20/10/20 6:48 pm, Lorenzo Pieralisi wrote:
> On Tue, Oct 20, 2020 at 01:45:45PM +0530, Kishon Vijay Abraham I wrote:
>> Hi,
>>
>> On 05/10/20 11:27 am, Kishon Vijay Abraham I wrote:
>>> Hi Jon Mason, Allen Hubbe, Dave Jiang,
>>>
>>> On 30/09/20 9:05 pm, Kishon Vijay Abraham I wrote:
>>>> This series is about implementing SW defined Non-Transparent Bridge (NTB)
>>>> using multiple endpoint (EP) instances. This series has been tested using
>>>> 2 endpoint instances in J7 connected to J7 board on one end and DRA7 board
>>>> on the other end. However there is nothing platform specific for the NTB
>>>> functionality.
>>>
>>> This series has two patches that add to the drivers/ntb/ directory:
>>> [PATCH v7 15/18] NTB: Add support for EPF PCI-Express Non-Transparent
>>> Bridge and [PATCH v7 16/18] NTB: tool: Enable the NTB/PCIe link on the
>>> local or remote side of bridge.
>>>
>>> If you can review and Ack the above patches, Lorenzo can queue them along
>>> with the rest of the series.

Would you be able to review and Ack the NTB parts of this series?
>>>
>>> Thanks for your help in advance.
>>
>> Gentle ping on this series.
>
> I am not queueing any more patches for this merge window - we postpone
> this series to v5.11 and in the interim it would be good to define some
> possible users.

Alan, do you have a system where you can test this series? It only needs
two endpoint instances on a single system.

Thanks
Kishon


2020-11-09 09:39:00

by Sherry Sun

[permalink] [raw]
Subject: RE: [PATCH v7 15/18] NTB: Add support for EPF PCI-Express Non-Transparent Bridge

Hi Kishon,

> Subject: [PATCH v7 15/18] NTB: Add support for EPF PCI-Express Non-
> Transparent Bridge
>
> From: Kishon Vijay Abraham I <[email protected]>
>
> Add support for EPF PCI-Express Non-Transparent Bridge (NTB) device.
> This driver is platform independent and could be used by any platform which
> has multiple PCIe endpoint instances configured using the pci-epf-ntb driver.
> The driver connects to the standard NTB sub-system interface. The EPF NTB
> device has a configurable number of memory windows (Max 4), a configurable
> number of doorbells (Max 32), and a configurable number of scratch-pad
> registers.
>
> Signed-off-by: Kishon Vijay Abraham I <[email protected]>
> ---
>  drivers/ntb/hw/Kconfig          |   1 +
>  drivers/ntb/hw/Makefile         |   1 +
>  drivers/ntb/hw/epf/Kconfig      |   6 +
>  drivers/ntb/hw/epf/Makefile     |   1 +
>  drivers/ntb/hw/epf/ntb_hw_epf.c | 755 ++++++++++++++++++++++++++++++++
>  5 files changed, 764 insertions(+)
>  create mode 100644 drivers/ntb/hw/epf/Kconfig
>  create mode 100644 drivers/ntb/hw/epf/Makefile
>  create mode 100644 drivers/ntb/hw/epf/ntb_hw_epf.c
>
> diff --git a/drivers/ntb/hw/Kconfig b/drivers/ntb/hw/Kconfig
> index e77c587060ff..c325be526b80 100644
> --- a/drivers/ntb/hw/Kconfig
> +++ b/drivers/ntb/hw/Kconfig
> @@ -2,4 +2,5 @@
>  source "drivers/ntb/hw/amd/Kconfig"
>  source "drivers/ntb/hw/idt/Kconfig"
>  source "drivers/ntb/hw/intel/Kconfig"
> +source "drivers/ntb/hw/epf/Kconfig"
>  source "drivers/ntb/hw/mscc/Kconfig"
> diff --git a/drivers/ntb/hw/Makefile b/drivers/ntb/hw/Makefile
> index 4714d6238845..223ca592b5f9 100644
> --- a/drivers/ntb/hw/Makefile
> +++ b/drivers/ntb/hw/Makefile
> @@ -2,4 +2,5 @@
>  obj-$(CONFIG_NTB_AMD) += amd/
>  obj-$(CONFIG_NTB_IDT) += idt/
>  obj-$(CONFIG_NTB_INTEL) += intel/
> +obj-$(CONFIG_NTB_EPF) += epf/
>  obj-$(CONFIG_NTB_SWITCHTEC) += mscc/
> diff --git a/drivers/ntb/hw/epf/Kconfig b/drivers/ntb/hw/epf/Kconfig
> new file mode 100644
> index 000000000000..6197d1aab344
> --- /dev/null
> +++ b/drivers/ntb/hw/epf/Kconfig
> @@ -0,0 +1,6 @@
> +config NTB_EPF
> +	tristate "Generic EPF Non-Transparent Bridge support"
> +	depends on m
> +	help
> +	  This driver supports EPF NTB on configurable endpoint.
> +	  If unsure, say N.
> diff --git a/drivers/ntb/hw/epf/Makefile b/drivers/ntb/hw/epf/Makefile
> new file mode 100644
> index 000000000000..2f560a422bc6
> --- /dev/null
> +++ b/drivers/ntb/hw/epf/Makefile
> @@ -0,0 +1 @@
> +obj-$(CONFIG_NTB_EPF) += ntb_hw_epf.o
> diff --git a/drivers/ntb/hw/epf/ntb_hw_epf.c b/drivers/ntb/hw/epf/ntb_hw_epf.c
> new file mode 100644
> index 000000000000..0a144987851a
> --- /dev/null
> +++ b/drivers/ntb/hw/epf/ntb_hw_epf.c
> @@ -0,0 +1,755 @@
......
> +static int ntb_epf_init_pci(struct ntb_epf_dev *ndev,
> +			    struct pci_dev *pdev)
> +{
> +	struct device *dev = ndev->dev;
> +	int ret;
> +
> +	pci_set_drvdata(pdev, ndev);
> +
> +	ret = pci_enable_device(pdev);
> +	if (ret) {
> +		dev_err(dev, "Cannot enable PCI device\n");
> +		goto err_pci_enable;
> +	}
> +
> +	ret = pci_request_regions(pdev, "ntb");
> +	if (ret) {
> +		dev_err(dev, "Cannot obtain PCI resources\n");
> +		goto err_pci_regions;
> +	}
> +
> +	pci_set_master(pdev);
> +
> +	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
> +	if (ret) {
> +		ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
> +		if (ret) {
> +			dev_err(dev, "Cannot set DMA mask\n");
> +			goto err_dma_mask;
> +		}
> +		dev_warn(&pdev->dev, "Cannot DMA highmem\n");
> +	}
> +
> +	ndev->ctrl_reg = pci_iomap(pdev, 0, 0);

The second parameter of pci_iomap should be ndev->ctrl_reg_bar instead of the hardcoded value 0, right?

> +	if (!ndev->ctrl_reg) {
> +		ret = -EIO;
> +		goto err_dma_mask;
> +	}
> +
> +	ndev->peer_spad_reg = pci_iomap(pdev, 1, 0);

pci_iomap(pdev, ndev->peer_spad_reg_bar, 0);

> +	if (!ndev->peer_spad_reg) {
> +		ret = -EIO;
> +		goto err_dma_mask;
> +	}
> +
> +	ndev->db_reg = pci_iomap(pdev, 2, 0);

pci_iomap(pdev, ndev->db_reg_bar, 0);

Best Regards
Sherry
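
Applied to the function quoted above, the suggested fix would look roughly as
follows, assuming struct ntb_epf_dev already carries the BAR numbers
(ctrl_reg_bar, peer_spad_reg_bar, db_reg_bar) from its device data:

	/* Sketch of the suggested fix: use the BAR numbers carried in
	 * ntb_epf_dev instead of hardcoded indices (field names taken
	 * from the review comments above). */
	ndev->ctrl_reg = pci_iomap(pdev, ndev->ctrl_reg_bar, 0);
	if (!ndev->ctrl_reg) {
		ret = -EIO;
		goto err_dma_mask;
	}

	ndev->peer_spad_reg = pci_iomap(pdev, ndev->peer_spad_reg_bar, 0);
	if (!ndev->peer_spad_reg) {
		ret = -EIO;
		goto err_dma_mask;
	}

	ndev->db_reg = pci_iomap(pdev, ndev->db_reg_bar, 0);
	if (!ndev->db_reg) {
		ret = -EIO;
		goto err_dma_mask;
	}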

2020-11-09 14:24:12

by Kishon Vijay Abraham I

[permalink] [raw]
Subject: Re: [PATCH v7 15/18] NTB: Add support for EPF PCI-Express Non-Transparent Bridge

Hi Sherry,

On 09/11/20 3:07 pm, Sherry Sun wrote:
> Hi Kishon,
>
>> Subject: [PATCH v7 15/18] NTB: Add support for EPF PCI-Express Non-
>> Transparent Bridge
>>
> ......
>> +	ndev->ctrl_reg = pci_iomap(pdev, 0, 0);
>
> The second parameter of pci_iomap should be ndev->ctrl_reg_bar instead
> of the hardcoded value 0, right?
>
>> +	if (!ndev->ctrl_reg) {
>> +		ret = -EIO;
>> +		goto err_dma_mask;
>> +	}
>> +
>> +	ndev->peer_spad_reg = pci_iomap(pdev, 1, 0);
>
> pci_iomap(pdev, ndev->peer_spad_reg_bar, 0);
>
>> +	if (!ndev->peer_spad_reg) {
>> +		ret = -EIO;
>> +		goto err_dma_mask;
>> +	}
>> +
>> +	ndev->db_reg = pci_iomap(pdev, 2, 0);
>
> pci_iomap(pdev, ndev->db_reg_bar, 0);

Good catch. Will fix it and send. Thank you for reviewing.

Regards,
Kishon

2020-11-10 02:27:28

by Sherry Sun

[permalink] [raw]
Subject: RE: [PATCH v7 15/18] NTB: Add support for EPF PCI-Express Non-Transparent Bridge

Hi Kishon,

> Subject: Re: [PATCH v7 15/18] NTB: Add support for EPF PCI-Express Non-
> Transparent Bridge
>
> Hi Sherry,
>
> Good catch. Will fix it and send. Thank you for reviewing.

You're welcome.
By the way, since I've studied VOP (virtio over PCIe) before and only recently
learned about the NTB code, I have some questions about NTB.

For NTB, in order to make two (or more) different systems communicate with each
other, it seems at least three boards are required (two host boards and one
board with multiple EP instances acting as the bridge), right?
But for VOP, only two boards are needed (one board as host and one board as
card) to realize the communication between the two systems, so my question is:
what are the advantages of using NTB?
Because I think the architecture of NTB seems more complicated. Many thanks!

Best regards
Sherry

>
> Regards,
> Kishon

2020-11-10 14:23:12

by Kishon Vijay Abraham I

[permalink] [raw]
Subject: Re: [PATCH v7 15/18] NTB: Add support for EPF PCI-Express Non-Transparent Bridge

Hi Sherry,

On 10/11/20 7:55 am, Sherry Sun wrote:
> Hi Kishon,
>
> You're welcome.
> By the way, since I've studied VOP (virtio over PCIe) before and only recently
> learned about the NTB code, I have some questions about NTB.
>
> For NTB, in order to make two (or more) different systems communicate with each
> other, it seems at least three boards are required (two host boards and one
> board with multiple EP instances acting as the bridge), right?

Right, this series is about creating an NTB bridge by configuring multiple
EP instances in an SoC; however, there are also dedicated HW NTB switches
(internally they might as well use multiple EP instances).

> But for VOP, only two boards are needed (one board as host and one board as
> card) to realize the communication between the two systems, so my question is:
> what are the advantages of using NTB?

NTB is a bridge that facilitates communication between two different
systems. So by itself it will not be the source or sink of any data, unlike a
normal EP-to-RP system (or VOP), which will be the source or sink of data.
> Because I think the architecture of NTB seems more complicated. Many thanks!

Yeah, I think it enables a different use case altogether. Consider you
have two x86 HOST PCs (each having an RP) that have to communicate using
PCIe. NTB can be used in such cases for the two x86 PCs to communicate
with each other over PCIe, which wouldn't be possible without NTB.

Regards,
Kishon
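
To make the host-to-host case concrete: once ntb_hw_epf has bound on both
hosts, either side can signal the other purely through the common NTB API.
A hedged sketch ('ntb' being the ntb_dev passed to a client's probe; error
handling omitted):

	static void sketch_ping_peer(struct ntb_dev *ntb)
	{
		int pidx = 0;	/* single peer in a two-host EPF NTB setup */

		/* Publish a value in peer scratchpad 0 ... */
		ntb_peer_spad_write(ntb, pidx, 0, 0xfeedbeef);

		/* ... and ring peer doorbell 0; the NTB function translates
		 * this into an MSI/MSI-X interrupt on the other host. */
		ntb_peer_db_set(ntb, BIT_ULL(0));
	}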

2020-11-10 15:01:26

by Arnd Bergmann

[permalink] [raw]
Subject: Re: [PATCH v7 15/18] NTB: Add support for EPF PCI-Express Non-Transparent Bridge

On Tue, Nov 10, 2020 at 3:20 PM Kishon Vijay Abraham I <[email protected]> wrote:
> On 10/11/20 7:55 am, Sherry Sun wrote:

> > But for VOP, only two boards are needed (one board as host and one board as
> > card) to realize the communication between the two systems, so my question
> > is: what are the advantages of using NTB?
>
> NTB is a bridge that facilitates communication between two different
> systems. So by itself it will not be the source or sink of any data, unlike a
> normal EP-to-RP system (or VOP), which will be the source or sink of data.
>
> > Because I think the architecture of NTB seems more complicated. Many thanks!
>
> Yeah, I think it enables a different use case altogether. Consider you
> have two x86 HOST PCs (each having an RP) that have to communicate using
> PCIe. NTB can be used in such cases for the two x86 PCs to communicate
> with each other over PCIe, which wouldn't be possible without NTB.

I think for VOP, we should have an abstraction that can work on either NTB
or directly on the endpoint framework but provide an interface that then
lets you create logical devices the same way.

Doing VOP based on NTB plus the new NTB_EPF driver would also
work and just move the abstraction somewhere else, but I guess it
would complicate setting it up for those users that only care about the
simpler endpoint case.

Arnd

2020-11-10 15:45:15

by Kishon Vijay Abraham I

[permalink] [raw]
Subject: Re: [PATCH v7 15/18] NTB: Add support for EPF PCI-Express Non-Transparent Bridge

Hi Sherry, Arnd,

On 10/11/20 8:29 pm, Arnd Bergmann wrote:
> On Tue, Nov 10, 2020 at 3:20 PM Kishon Vijay Abraham I <[email protected]> wrote:
>> On 10/11/20 7:55 am, Sherry Sun wrote:
>
> I think for VOP, we should have an abstraction that can work on either NTB
> or directly on the endpoint framework but provide an interface that then
> lets you create logical devices the same way.
>
> Doing VOP based on NTB plus the new NTB_EPF driver would also
> work and just move the abstraction somewhere else, but I guess it
> would complicate setting it up for those users that only care about the
> simpler endpoint case.

I'm not sure if you've got a chance to look at [1], where I added
support for RP<->EP system both running Linux, with EP configured using
Linux EP framework (as well as HOST ports connected to NTB switch,
patches 20 and 21, that uses the Linux NTB framework) to communicate
using virtio over PCIe.

The cover-letter [1] shows a picture of the two use cases supported in
that series.

[1] -> http://lore.kernel.org/r/[email protected]

Thank You,
Kishon

2020-11-11 02:51:47

by Sherry Sun

[permalink] [raw]
Subject: RE: [PATCH v7 15/18] NTB: Add support for EPF PCI-Express Non-Transparent Bridge

Hi Kishon,

> Subject: Re: [PATCH v7 15/18] NTB: Add support for EPF PCI-Express Non-
> Transparent Bridge
>
> Hi Sherry, Arnd,
>
> On 10/11/20 8:29 pm, Arnd Bergmann wrote:
> > On Tue, Nov 10, 2020 at 3:20 PM Kishon Vijay Abraham I <[email protected]> wrote:
> >
> > I think for VOP, we should have an abstraction that can work on either
> > NTB or directly on the endpoint framework but provide an interface
> > that then lets you create logical devices the same way.
> >
> > Doing VOP based on NTB plus the new NTB_EPF driver would also work and
> > just move the abstraction somewhere else, but I guess it would
> > complicate setting it up for those users that only care about the
> > simpler endpoint case.
>
> I'm not sure if you've had a chance to look at [1], where I added support
> for an RP<->EP system, both sides running Linux, with the EP configured
> using the Linux EP framework (as well as HOST ports connected to an NTB
> switch, patches 20 and 21, which use the Linux NTB framework), to
> communicate using virtio over PCIe.
>

I saw your patches at [1]; there you take rpmsg as an example of communicating
between two SoCs using PCIe RC<->EP and HOST1-NTB-HOST2 for the different use cases.
The VOP code works under the PCIe RC<->EP framework, which means that we can also
make VOP work under the Linux NTB framework, just like the rpmsg approach you used here, right?

Best regards
Sherry

> The cover-letter [1] shows a picture of the two use cases supported in that
> series.
>
> [1] -> http://lore.kernel.org/r/[email protected]
>
> Thank You,
> Kishon

2020-11-12 11:15:03

by Kishon Vijay Abraham I

[permalink] [raw]
Subject: Re: [PATCH v7 15/18] NTB: Add support for EPF PCI-Express Non-Transparent Bridge

Hi Sherry,

On 11/11/20 8:19 am, Sherry Sun wrote:
> Hi Kishon,
>
>> Subject: Re: [PATCH v7 15/18] NTB: Add support for EPF PCI-Express Non-
>> Transparent Bridge
>>
>> Hi Sherry, Arnd,
>>
>> On 10/11/20 8:29 pm, Arnd Bergmann wrote:
>>> On Tue, Nov 10, 2020 at 3:20 PM Kishon Vijay Abraham I <[email protected]>
>> wrote:
>>>> On 10/11/20 7:55 am, Sherry Sun wrote:
>>>
>>>>> But for VOP, only two boards are needed (one board as host and one
>>>>> board as card) to realize the communication between the two systems,
>>>>> so my question is what are the advantages of using NTB?
>>>>
>>>> NTB is a bridge that facilitates communication between two different
>>>> systems. So by itself it will not be a source or sink of any data,
>>>> unlike a normal EP to RP system (or the VOP), which will be a source
>>>> or sink of data.
>>>>
>>>>> Because I think the architecture of NTB seems more complicated. Many
>>>>> thanks!
>>>>
>>>> Yeah, I think it enables a different use case altogether. Consider
>>>> you have two x86 HOST PCs (having RP) and they have to communicate
>>>> using PCIe. NTB can be used in such cases for the two x86 PCs to
>>>> communicate with each other over PCIe, which wouldn't be possible
>>>> without NTB.
>>>
>>> I think for VOP, we should have an abstraction that can work on either
>>> NTB or directly on the endpoint framework but provide an interface
>>> that then lets you create logical devices the same way.
>>>
>>> Doing VOP based on NTB plus the new NTB_EPF driver would also work and
>>> just move the abstraction somewhere else, but I guess it would
>>> complicate setting it up for those users that only care about the
>>> simpler endpoint case.
>>
>> I'm not sure if you've had a chance to look at [1], where I added support
>> for an RP<->EP system, both sides running Linux, with the EP configured
>> using the Linux EP framework (as well as HOST ports connected to an NTB
>> switch, patches 20 and 21, which use the Linux NTB framework), to
>> communicate using virtio over PCIe.
>>
>
> I saw your patches at [1]; there you take rpmsg as an example of communicating
> between two SoCs using PCIe RC<->EP and HOST1-NTB-HOST2 for the different use cases.
> The VOP code works under the PCIe RC<->EP framework, which means that we can also
> make VOP work under the Linux NTB framework, just like the rpmsg approach you used here, right?

Does VOP really work with the EP framework? At least whatever is in
upstream doesn't seem to indicate so.

The NTB framework lets one host with an RP port communicate with another
host with an RP port.

The EP framework lets one device with an EP port communicate with a host
with an RP port.

The rest of the trick is how you tie them together.

The PCIe framework creates a "pci_dev" for each of the devices it
enumerates. The NTB framework works on this pci_dev to communicate with
the remote host through the PCIe bridge. The remote host will use the
NTB framework as well.

So depending on what interfaces the VOP device provides, you can use
either the NTB framework or the EP framework. If it's going to connect
two different devices, in turn creating a pci_dev on each of the
systems, then you can use the NTB framework.
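
To make that split concrete, here is a rough sketch (illustrative only,
not code from this series; the my_* names are placeholders) of how an
NTB client registers with the NTB core. The core calls ->probe() with
an ntb_dev that wraps the pci_dev the host enumerated:

#include <linux/module.h>
#include <linux/ntb.h>

static int my_ntb_probe(struct ntb_client *client, struct ntb_dev *ntb)
{
	/* set up memory windows, doorbells and scratchpads here */
	return 0;
}

static void my_ntb_remove(struct ntb_client *client, struct ntb_dev *ntb)
{
	/* tear down whatever my_ntb_probe() configured */
}

static struct ntb_client my_ntb_client = {
	.ops = {
		.probe = my_ntb_probe,
		.remove = my_ntb_remove,
	},
};

static int __init my_ntb_init(void)
{
	return ntb_register_client(&my_ntb_client);
}
module_init(my_ntb_init);

static void __exit my_ntb_exit(void)
{
	ntb_unregister_client(&my_ntb_client);
}
module_exit(my_ntb_exit);

MODULE_LICENSE("GPL v2");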

Regards
Kishon

2020-11-12 12:38:06

by Sherry Sun

[permalink] [raw]
Subject: RE: [PATCH v7 15/18] NTB: Add support for EPF PCI-Express Non-Transparent Bridge

Hi Kishon,

> Subject: Re: [PATCH v7 15/18] NTB: Add support for EPF PCI-Express Non-
> Transparent Bridge
>
> Hi Sherry,
>
> On 11/11/20 8:19 am, Sherry Sun wrote:
> > Hi Kishon,
> >
> >> Subject: Re: [PATCH v7 15/18] NTB: Add support for EPF PCI-Express
> >> Non- Transparent Bridge
> >>
> >> Hi Sherry, Arnd,
> >>
> >> On 10/11/20 8:29 pm, Arnd Bergmann wrote:
> >>> On Tue, Nov 10, 2020 at 3:20 PM Kishon Vijay Abraham I
> >>> <[email protected]>
> >> wrote:
> >>>> On 10/11/20 7:55 am, Sherry Sun wrote:
> >>>
> >>>>> But for VOP, only two boards are needed (one board as host and one
> >>>>> board as card) to realize the communication between the two
> >>>>> systems,
> >>>>> so my question is what are the advantages of using NTB?
> >>>>
> >>>> NTB is a bridge that facilitates communication between two
> >>>> different systems. So by itself it will not be a source or sink of
> >>>> any data, unlike a normal EP to RP system (or the VOP), which will
> >>>> be a source or sink
> >>>> of data.
> >>>>
> >>>>> Because I think the architecture of NTB seems more complicated.
> >>>>> Many
> >>>>> thanks!
> >>>>
> >>>> Yeah, I think it enables a different use case altogether.
> >>>> Consider you have two x86 HOST PCs (having RP) and they have to
> >>>> communicate using PCIe. NTB can be used in such cases for the two
> >>>> x86 PCs to communicate with each other over PCIe, which wouldn't
> >>>> be possible without NTB.
> >>>
> >>> I think for VOP, we should have an abstraction that can work on
> >>> either NTB or directly on the endpoint framework but provide an
> >>> interface that then lets you create logical devices the same way.
> >>>
> >>> Doing VOP based on NTB plus the new NTB_EPF driver would also work
> >>> and just move the abstraction somewhere else, but I guess it would
> >>> complicate setting it up for those users that only care about the
> >>> simpler endpoint case.
> >>
> >> I'm not sure if you've had a chance to look at [1], where I added
> >> support for an RP<->EP system, both sides running Linux, with the EP
> >> configured using the Linux EP framework (as well as HOST ports
> >> connected to an NTB switch, patches 20 and 21, which use the Linux
> >> NTB framework), to communicate using virtio over PCIe.
> >>
> >
> > I saw your patches at [1]; there you take rpmsg as an example of
> > communicating between two SoCs using PCIe RC<->EP and HOST1-NTB-
> > HOST2 for the different use cases.
> > The VOP code works under the PCIe RC<->EP framework, which means that
> > we can also make VOP work under the Linux NTB framework, just like
> > the rpmsg approach you used here, right?
>
> Does VOP really work with the EP framework? At least whatever is in upstream
> doesn't seem to indicate so.
>

We did write a pci_epf driver to support VOP, similar to pci-epf-test.c, and it works well.
So VOP can certainly work with the EP framework.
But it's a pity that the VOP-related code was deleted before we could send the pci_epf_vop driver patches upstream.
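
For reference, a minimal EPF driver skeleton in the style of
pci-epf-test.c would look roughly like the sketch below (illustrative
only; the pci_epf_vop names are placeholders for the driver mentioned
above, and the APIs are as in mainline around v5.9):

#include <linux/module.h>
#include <linux/pci-epc.h>
#include <linux/pci-epf.h>

static struct pci_epf_header vop_header = {
	.vendorid	= PCI_ANY_ID,
	.deviceid	= PCI_ANY_ID,
	.baseclass_code	= PCI_CLASS_OTHERS,
};

static int pci_epf_vop_bind(struct pci_epf *epf)
{
	struct pci_epc *epc = epf->epc;
	int ret;

	/* program the standard configuration header of the function */
	ret = pci_epc_write_header(epc, epf->func_no, epf->header);
	if (ret)
		return ret;

	/*
	 * Allocate BAR memory with pci_epf_alloc_space(), expose it with
	 * pci_epc_set_bar() and place the shared descriptors/rings there.
	 */
	return 0;
}

static void pci_epf_vop_unbind(struct pci_epf *epf)
{
	/* clear the BARs and free the space allocated in bind */
}

static struct pci_epf_ops vop_epf_ops = {
	.bind	= pci_epf_vop_bind,
	.unbind	= pci_epf_vop_unbind,
};

static int pci_epf_vop_probe(struct pci_epf *epf)
{
	epf->header = &vop_header;
	return 0;
}

static const struct pci_epf_device_id pci_epf_vop_ids[] = {
	{ .name = "pci_epf_vop" },
	{},
};

static struct pci_epf_driver vop_driver = {
	.driver.name	= "pci_epf_vop",
	.probe		= pci_epf_vop_probe,
	.id_table	= pci_epf_vop_ids,
	.ops		= &vop_epf_ops,
	.owner		= THIS_MODULE,
};

static int __init pci_epf_vop_init(void)
{
	return pci_epf_register_driver(&vop_driver);
}
module_init(pci_epf_vop_init);

static void __exit pci_epf_vop_exit(void)
{
	pci_epf_unregister_driver(&vop_driver);
}
module_exit(pci_epf_vop_exit);

MODULE_LICENSE("GPL v2");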

> The NTB framework lets one host with an RP port communicate with another
> host with an RP port.
>
> The EP framework lets one device with an EP port communicate with a host
> with an RP port.
>
> The rest of the trick is how you tie them together.
>
> The PCIe framework creates a "pci_dev" for each of the devices it
> enumerates. The NTB framework works on this pci_dev to communicate with
> the remote host through the PCIe bridge. The remote host will use the
> NTB framework as well.
>
> So depending on what interfaces the VOP device provides, you can use
> either the NTB framework or the EP framework. If it's going to connect
> two different devices, in turn creating a pci_dev on each of the
> systems, then you can use the NTB framework.
>

Thanks for your detailed explanation! That is clear.
I think VOP is probably better suited to the basic PCIe framework than to NTB.

Best regards
Sherry

> Regards
> Kishon

2020-11-12 13:29:14

by Arnd Bergmann

[permalink] [raw]
Subject: Re: [PATCH v7 15/18] NTB: Add support for EPF PCI-Express Non-Transparent Bridge

On Tue, Nov 10, 2020 at 4:42 PM Kishon Vijay Abraham I <[email protected]> wrote:
> On 10/11/20 8:29 pm, Arnd Bergmann wrote:
> > On Tue, Nov 10, 2020 at 3:20 PM Kishon Vijay Abraham I <[email protected]> wrote:
> >> On 10/11/20 7:55 am, Sherry Sun wrote:
> >
> >>> But for VOP, only two boards are needed (one board as host and one board as card) to realize the
> >>> communication between the two systems, so my question is what are the advantages of using NTB?
> >>
> >> NTB is a bridge that facilitates communication between two different
> >> systems. So by itself it will not be a source or sink of any data, unlike
> >> a normal EP to RP system (or the VOP), which will be a source or sink of data.
> >>
> >>> Because I think the architecture of NTB seems more complicated. Many thanks!
> >>
> >> Yeah, I think it enables a different use case altogether. Consider you
> >> have two x86 HOST PCs (having RP) and they have to communicate using
> >> PCIe. NTB can be used in such cases for the two x86 PCs to communicate
> >> with each other over PCIe, which wouldn't be possible without NTB.
> >
> > I think for VOP, we should have an abstraction that can work on either NTB
> > or directly on the endpoint framework but provide an interface that then
> > lets you create logical devices the same way.
> >
> > Doing VOP based on NTB plus the new NTB_EPF driver would also
> > work and just move the abstraction somewhere else, but I guess it
> > would complicate setting it up for those users that only care about the
> > simpler endpoint case.
>
> I'm not sure if you've had a chance to look at [1], where I added
> support for an RP<->EP system, both sides running Linux, with the EP
> configured using the Linux EP framework (as well as HOST ports connected
> to an NTB switch, patches 20 and 21, which use the Linux NTB framework),
> to communicate using virtio over PCIe.
>
> The cover-letter [1] shows a picture of the two use cases supported in
> that series.
>
> [1] -> http://lore.kernel.org/r/[email protected]

No, I missed that, thanks for pointing me to it!

This looks very promising indeed; I need to read up on the whole
discussion there. I also see your slides at [1], which help explain some
of it. I have one fundamental question that I can't figure out from
the description; maybe you can help me here:

How is the configuration managed, taking the EP case as an
example? Your UseCase1 example sounds like the system that owns
the EP hardware is the one that turns the EP into a vhost device,
and creates a vhost-rpmsg device on top, while the RC side would
probe the pci-vhost and then detect a virtio-rpmsg device to talk to.
Can it also do the opposite, so you end up with e.g. a virtio-net
device on the EP side and vhost-net on the RC?

Arnd

[1] https://linuxplumbersconf.org/event/7/contributions/849/attachments/642/1175/Virtio_for_PCIe_RC_EP_NTB.pdf

2020-11-16 05:33:48

by Kishon Vijay Abraham I

[permalink] [raw]
Subject: Re: [PATCH v7 15/18] NTB: Add support for EPF PCI-Express Non-Transparent Bridge

Hi Arnd,

On 12/11/20 6:54 pm, Arnd Bergmann wrote:
> On Tue, Nov 10, 2020 at 4:42 PM Kishon Vijay Abraham I <[email protected]> wrote:
>> On 10/11/20 8:29 pm, Arnd Bergmann wrote:
>>> On Tue, Nov 10, 2020 at 3:20 PM Kishon Vijay Abraham I <[email protected]> wrote:
>>>> On 10/11/20 7:55 am, Sherry Sun wrote:
>>>
>>>>> But for VOP, only two boards are needed (one board as host and one board as card) to realize the
>>>>> communication between the two systems, so my question is what are the advantages of using NTB?
>>>>
>>>> NTB is a bridge that facilitates communication between two different
>>>> systems. So by itself it will not be a source or sink of any data, unlike
>>>> a normal EP to RP system (or the VOP), which will be a source or sink of data.
>>>>
>>>>> Because I think the architecture of NTB seems more complicated. Many thanks!
>>>>
>>>> Yeah, I think it enables a different use case altogether. Consider you
>>>> have two x86 HOST PCs (having RP) and they have to communicate using
>>>> PCIe. NTB can be used in such cases for the two x86 PCs to communicate
>>>> with each other over PCIe, which wouldn't be possible without NTB.
>>>
>>> I think for VOP, we should have an abstraction that can work on either NTB
>>> or directly on the endpoint framework but provide an interface that then
>>> lets you create logical devices the same way.
>>>
>>> Doing VOP based on NTB plus the new NTB_EPF driver would also
>>> work and just move the abstraction somewhere else, but I guess it
>>> would complicate setting it up for those users that only care about the
>>> simpler endpoint case.
>>
>> I'm not sure if you've had a chance to look at [1], where I added
>> support for an RP<->EP system, both sides running Linux, with the EP
>> configured using the Linux EP framework (as well as HOST ports connected
>> to an NTB switch, patches 20 and 21, which use the Linux NTB framework),
>> to communicate using virtio over PCIe.
>>
>> The cover-letter [1] shows a picture of the two use cases supported in
>> that series.
>>
>> [1] -> http://lore.kernel.org/r/[email protected]
>
> No, I missed that, thanks for pointing me to it!
>
> This looks very promising indeed; I need to read up on the whole
> discussion there. I also see your slides at [1], which help explain some
> of it. I have one fundamental question that I can't figure out from
> the description; maybe you can help me here:
>
> How is the configuration managed, taking the EP case as an
> example? Your UseCase1 example sounds like the system that owns
> the EP hardware is the one that turns the EP into a vhost device,
> and creates a vhost-rpmsg device on top, while the RC side would
> probe the pci-vhost and then detect a virtio-rpmsg device to talk to.

That's correct. Slide 9 in [1] should give the layering details.

> Can it also do the opposite, so you end up with e.g. a virtio-net
> device on the EP side and vhost-net on the RC?

Unfortunately no. Again referring to slide 9 in [1], we only have
vhost-pci-epf on the EP side, which only creates a "vhost_dev" to deal
with the vhost side of things. To do the opposite, we'd need to create a
virtio-pci-epf for the EP side that interacts with core virtio (and also
the corresponding vhost back end on the PCI host).
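
Very roughly, such a (hypothetical, not yet written) virtio-pci-epf
would implement virtio_config_ops backed by the BAR shared with the
host and register a virtio_device with core virtio, along these lines
(sketch only, names are placeholders):

#include <linux/slab.h>
#include <linux/virtio.h>
#include <linux/virtio_config.h>

/*
 * All ops would be backed by the window shared with the PCI host:
 * .get/.set access the device config area, .find_vqs places the vrings
 * in host memory mapped through the endpoint's outbound region, and
 * status/features live in the shared window as well.
 */
static const struct virtio_config_ops epf_virtio_config_ops = {
	/* .get = ..., .set = ..., .find_vqs = ..., etc. */
};

static int epf_virtio_register(struct device *parent, u32 device_id)
{
	struct virtio_device *vdev;

	vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
	if (!vdev)
		return -ENOMEM;

	vdev->dev.parent = parent;
	vdev->config = &epf_virtio_config_ops;
	vdev->id.device = device_id;	/* e.g. VIRTIO_ID_NET */

	/* core virtio then binds a regular driver such as virtio-net */
	return register_virtio_device(vdev);
}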

Thanks
Kishon

>
> Arnd
>
> [1] https://linuxplumbersconf.org/event/7/contributions/849/attachments/642/1175/Virtio_for_PCIe_RC_EP_NTB.pdf
>

2020-11-16 15:50:45

by Kishon Vijay Abraham I

[permalink] [raw]
Subject: Re: [PATCH v7 15/18] NTB: Add support for EPF PCI-Express Non-Transparent Bridge

Hi Arnd,

On 16/11/20 9:07 pm, Arnd Bergmann wrote:
> On Mon, Nov 16, 2020 at 6:19 AM Kishon Vijay Abraham I <[email protected]> wrote:
>> On 12/11/20 6:54 pm, Arnd Bergmann wrote:
>>>
>>> This looks very promising indeed; I need to read up on the whole
>>> discussion there. I also see your slides at [1], which help explain some
>>> of it. I have one fundamental question that I can't figure out from
>>> the description; maybe you can help me here:
>>>
>>> How is the configuration managed, taking the EP case as an
>>> example? Your UseCase1 example sounds like the system that owns
>>> the EP hardware is the one that turns the EP into a vhost device,
>>> and creates a vhost-rpmsg device on top, while the RC side would
>>> probe the pci-vhost and then detect a virtio-rpmsg device to talk to.
>>
>> That's correct. Slide 9 in [1] should give the layering details.
>>
>>> Can it also do the opposite, so you end up with e.g. a virtio-net
>>> device on the EP side and vhost-net on the RC?
>>
>> Unfortunately no. Again referring to slide 9 in [1], we only have
>> vhost-pci-epf on the EP side, which only creates a "vhost_dev" to deal
>> with the vhost side of things. To do the opposite, we'd need to create a
>> virtio-pci-epf for the EP side that interacts with core virtio (and also
>> the corresponding vhost back end on the PCI host).
>
> Ok, I see. So I think this is the opposite of what drivers/misc/mic and
> the bluefield driver were using, so we would probably end up
> needing both.
>
> Then again, I guess the NTB driver would give us the functionality
> for free, if it shows a symmetric link?

Right, the NTB driver would need a "pci_dev" on both sides of the link.
But that would also mean we cannot use the PCI EP framework, which
instead uses a "pci_epf".

Thanks
Kishon

2020-11-17 01:54:25

by Arnd Bergmann

[permalink] [raw]
Subject: Re: [PATCH v7 15/18] NTB: Add support for EPF PCI-Express Non-Transparent Bridge

On Mon, Nov 16, 2020 at 6:19 AM Kishon Vijay Abraham I <[email protected]> wrote:
> On 12/11/20 6:54 pm, Arnd Bergmann wrote:
> >
> > This looks very promising indeed; I need to read up on the whole
> > discussion there. I also see your slides at [1], which help explain some
> > of it. I have one fundamental question that I can't figure out from
> > the description; maybe you can help me here:
> >
> > How is the configuration managed, taking the EP case as an
> > example? Your UseCase1 example sounds like the system that owns
> > the EP hardware is the one that turns the EP into a vhost device,
> > and creates a vhost-rpmsg device on top, while the RC side would
> > probe the pci-vhost and then detect a virtio-rpmsg device to talk to.
>
> That's correct. Slide 9 in [1] should give the layering details.
>
> > Can it also do the opposite, so you end up with e.g. a virtio-net
> > device on the EP side and vhost-net on the RC?
>
> Unfortunately no. Again referring to slide 9 in [1], we only have
> vhost-pci-epf on the EP side, which only creates a "vhost_dev" to deal
> with the vhost side of things. To do the opposite, we'd need to create a
> virtio-pci-epf for the EP side that interacts with core virtio (and also
> the corresponding vhost back end on the PCI host).

Ok, I see. So I think this is the opposite of what drivers/misc/mic and
the bluefield driver were using, so we would probably end up
needing both.

Then again, I guess the NTB driver would give us the functionality
for free, if it shows a symmetric link?

Arnd