This patch set adds support for Rx/Tx capabilities on switch port interfaces.
Also, control traffic is redirected through ACLs to the CPU in order to
enable proper STP protocol handling.
The control interface comprises 3 queues in total: Rx, Rx error and
Tx confirmation. In this patch set we only enable the Rx and Tx conf
queues. All switch ports share the same queues when frames are
redirected to the CPU. Information regarding the ingress switch port is
passed through frame metadata - the flow context field of the frame
descriptor. NAPI instances are also shared between switch net_devices;
they are enabled when the first switch port calls .dev_open() and
disabled again only when no switch port is up anymore.
The new feature is enabled only on MC versions 10.19.0 and newer
(10.19.0 is soon to be released).
Ioana Ciornei (12):
staging: dpaa2-ethsw: get control interface attributes
staging: dpaa2-ethsw: setup buffer pool for control traffic
staging: dpaa2-ethsw: setup RX path rings
staging: dpaa2-ethsw: setup dpio
staging: dpaa2-ethsw: add ACL table at port probe
staging: dpaa2-ethsw: add ACL entry to redirect STP to CPU
staging: dpaa2-ethsw: seed the buffer pool
staging: dpaa2-ethsw: handle Rx path on control interface
staging: dpaa2-ethsw: add .ndo_start_xmit() callback
staging: dpaa2-ethsw: enable the CTRL_IF based on the FW version
staging: dpaa2-ethsw: enable the control interface
staging: dpaa2-ethsw: remove control traffic from TODO file
drivers/staging/fsl-dpaa2/ethsw/TODO | 8 -
drivers/staging/fsl-dpaa2/ethsw/dpsw-cmd.h | 141 ++++-
drivers/staging/fsl-dpaa2/ethsw/dpsw.c | 365 +++++++++++
drivers/staging/fsl-dpaa2/ethsw/dpsw.h | 226 +++++++
drivers/staging/fsl-dpaa2/ethsw/ethsw.c | 964 ++++++++++++++++++++++++++++-
drivers/staging/fsl-dpaa2/ethsw/ethsw.h | 83 +++
6 files changed, 1763 insertions(+), 24 deletions(-)
--
1.9.1
The dpaa2-ethsw supports only one Rx queue that is shared by all switch
ports. This means that information about which port was the ingress port
for a specific frame needs to be passed in metadata. In our case, the
Flow Context (FLC) field from the frame descriptor holds this
information. Besides the interface ID of the ingress port, we also
receive the virtual QDID of the port. Below is a visual description of
the 64-bit FLC field.
 63          48 47          32 31          16 15           0
+--------------+--------------+--------------+--------------+
|              |              |              |              |
|   RESERVED   |    IF_ID     |   RESERVED   |   IF QDID    |
|              |              |              |              |
+--------------+--------------+--------------+--------------+
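For reference, here is a minimal sketch of how the two fields could be
extracted from the FLC value, following the layout above. The helper
names are illustrative only and not part of the driver; the actual code
reads the interface ID inline as
upper_32_bits(dpaa2_fd_get_flc(fd)) & 0x0000FFFF.

static inline u16 ethsw_flc_if_id(u64 flc)
{
	/* IF_ID occupies bits 47..32 of the FLC word */
	return (flc >> 32) & 0xFFFF;
}

static inline u16 ethsw_flc_if_qdid(u64 flc)
{
	/* the virtual QDID of the ingress port occupies bits 15..0 */
	return flc & 0xFFFF;
}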
Because all switch ports share the same Rx and Tx conf queues, the NAPI
management logic keeps track of how many switch interfaces are open and
keeps the NAPI instances enabled only while at least one interface is
up.
The Rx path is, for the most part, common to both the Rx and Tx conf
queues; each of them only provides its own consume function for a frame
descriptor. Dequeueing from an FQ, draining the dequeue store and the
NAPI poll function are shared between both queues.
Signed-off-by: Ioana Ciornei <[email protected]>
---
drivers/staging/fsl-dpaa2/ethsw/ethsw.c | 311 +++++++++++++++++++++++++++++++-
drivers/staging/fsl-dpaa2/ethsw/ethsw.h | 4 +
2 files changed, 312 insertions(+), 3 deletions(-)
diff --git a/drivers/staging/fsl-dpaa2/ethsw/ethsw.c b/drivers/staging/fsl-dpaa2/ethsw/ethsw.c
index 53d651209feb..75e4b3b8c84c 100644
--- a/drivers/staging/fsl-dpaa2/ethsw/ethsw.c
+++ b/drivers/staging/fsl-dpaa2/ethsw/ethsw.c
@@ -478,9 +478,51 @@ static int port_carrier_state_sync(struct net_device *netdev)
return 0;
}
+/* Manage all NAPI instances for the control interface.
+ *
+ * We only have one Rx queue and one Tx conf queue for all
+ * switch ports. Therefore, we only need to enable the NAPI instance once, the
+ * first time one of the switch ports runs .dev_open().
+ */
+
+static void ethsw_enable_ctrl_if_napi(struct ethsw_core *ethsw)
+{
+ int i;
+
+ /* a new interface is using the NAPI instance */
+ ethsw->napi_users++;
+
+ /* if there is already a user of the instance, return */
+ if (ethsw->napi_users > 1)
+ return;
+
+ if (!ethsw_has_ctrl_if(ethsw))
+ return;
+
+ for (i = 0; i < ETHSW_RX_NUM_FQS; i++)
+ napi_enable(ðsw->fq[i].napi);
+}
+
+static void ethsw_disable_ctrl_if_napi(struct ethsw_core *ethsw)
+{
+ int i;
+
+ /* If we are not the last interface using the NAPI, return */
+ ethsw->napi_users--;
+ if (ethsw->napi_users)
+ return;
+
+ if (!ethsw_has_ctrl_if(ethsw))
+ return;
+
+ for (i = 0; i < ETHSW_RX_NUM_FQS; i++)
+ napi_disable(ðsw->fq[i].napi);
+}
+
static int port_open(struct net_device *netdev)
{
struct ethsw_port_priv *port_priv = netdev_priv(netdev);
+ struct ethsw_core *ethsw = port_priv->ethsw_data;
int err;
/* No need to allow Tx as control interface is disabled */
@@ -502,6 +544,8 @@ static int port_open(struct net_device *netdev)
goto err_carrier_sync;
}
+ ethsw_enable_ctrl_if_napi(ethsw);
+
return 0;
err_carrier_sync:
@@ -514,6 +558,7 @@ static int port_open(struct net_device *netdev)
static int port_stop(struct net_device *netdev)
{
struct ethsw_port_priv *port_priv = netdev_priv(netdev);
+ struct ethsw_core *ethsw = port_priv->ethsw_data;
int err;
err = dpsw_if_disable(port_priv->ethsw_data->mc_io, 0,
@@ -524,6 +569,8 @@ static int port_stop(struct net_device *netdev)
return err;
}
+ ethsw_disable_ctrl_if_napi(ethsw);
+
return 0;
}
@@ -690,6 +737,28 @@ static int port_fdb_dump(struct sk_buff *skb, struct netlink_callback *cb,
return err;
}
+static void ethsw_free_fd(const struct ethsw_core *ethsw,
+ const struct dpaa2_fd *fd)
+{
+ struct device *dev = ethsw->dev;
+ unsigned char *buffer_start;
+ struct sk_buff **skbh, *skb;
+ dma_addr_t fd_addr;
+
+ fd_addr = dpaa2_fd_get_addr(fd);
+ skbh = dpaa2_iova_to_virt(ethsw->iommu_domain, fd_addr);
+
+ skb = *skbh;
+ buffer_start = (unsigned char *)skbh;
+
+ dma_unmap_single(dev, fd_addr,
+ skb_tail_pointer(skb) - buffer_start,
+ DMA_TO_DEVICE);
+
+ /* Move on with skb release */
+ dev_kfree_skb(skb);
+}
+
static const struct net_device_ops ethsw_port_ops = {
.ndo_open = port_open,
.ndo_stop = port_stop,
@@ -1368,6 +1437,104 @@ static int ethsw_register_notifier(struct device *dev)
return err;
}
+/* Build a linear skb based on a single-buffer frame descriptor */
+static struct sk_buff *ethsw_build_linear_skb(struct ethsw_core *ethsw,
+ const struct dpaa2_fd *fd)
+{
+ u16 fd_offset = dpaa2_fd_get_offset(fd);
+ u32 fd_length = dpaa2_fd_get_len(fd);
+ struct device *dev = ethsw->dev;
+ struct sk_buff *skb = NULL;
+ dma_addr_t addr;
+ void *fd_vaddr;
+
+ addr = dpaa2_fd_get_addr(fd);
+ dma_unmap_single(dev, addr, DPAA2_ETHSW_RX_BUF_SIZE,
+ DMA_FROM_DEVICE);
+ fd_vaddr = dpaa2_iova_to_virt(ethsw->iommu_domain, addr);
+ prefetch(fd_vaddr + fd_offset);
+
+ skb = build_skb(fd_vaddr, DPAA2_ETHSW_RX_BUF_SIZE +
+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
+ if (unlikely(!skb)) {
+ dev_err(dev, "build_skb() failed\n");
+ return NULL;
+ }
+
+ skb_reserve(skb, fd_offset);
+ skb_put(skb, fd_length);
+
+ ethsw->buf_count--;
+
+ return skb;
+}
+
+static void ethsw_tx_conf(struct ethsw_fq *fq,
+ const struct dpaa2_fd *fd)
+{
+ ethsw_free_fd(fq->ethsw, fd);
+}
+
+static void ethsw_rx(struct ethsw_fq *fq,
+ const struct dpaa2_fd *fd)
+{
+ struct ethsw_core *ethsw = fq->ethsw;
+ struct ethsw_port_priv *port_priv;
+ struct net_device *netdev;
+ struct vlan_ethhdr *hdr;
+ struct sk_buff *skb;
+ u16 vlan_tci, vid;
+ int if_id = -1;
+ int err;
+
+ /* prefetch the frame descriptor */
+ prefetch(fd);
+
+ /* get switch ingress interface ID */
+ if_id = upper_32_bits(dpaa2_fd_get_flc(fd)) & 0x0000FFFF;
+
+ if (if_id < 0 || if_id >= ethsw->sw_attr.num_ifs) {
+ dev_err(ethsw->dev, "Frame received from unknown interface!\n");
+ goto err_free_fd;
+ }
+ port_priv = ethsw->ports[if_id];
+ netdev = port_priv->netdev;
+
+ /* build the SKB based on the FD received */
+ if (dpaa2_fd_get_format(fd) == dpaa2_fd_single) {
+ skb = ethsw_build_linear_skb(ethsw, fd);
+ } else {
+ netdev_err(netdev, "Received invalid frame format\n");
+ goto err_free_fd;
+ }
+
+ if (unlikely(!skb))
+ goto err_free_fd;
+
+ skb_reset_mac_header(skb);
+
+ /* Remove PVID from received frame */
+ hdr = vlan_eth_hdr(skb);
+ vid = ntohs(hdr->h_vlan_TCI) & VLAN_VID_MASK;
+ if (vid == port_priv->pvid) {
+ err = __skb_vlan_pop(skb, &vlan_tci);
+ if (err) {
+ dev_info(ethsw->dev, "skb_vlan_pop() failed %d", err);
+ goto err_free_fd;
+ }
+ }
+
+ skb->dev = netdev;
+ skb->protocol = eth_type_trans(skb, skb->dev);
+
+ netif_receive_skb(skb);
+
+ return;
+
+err_free_fd:
+ ethsw_free_fd(ethsw, fd);
+}
+
static int ethsw_setup_fqs(struct ethsw_core *ethsw)
{
struct dpsw_ctrl_if_attr ctrl_if_attr;
@@ -1384,11 +1551,13 @@ static int ethsw_setup_fqs(struct ethsw_core *ethsw)
ethsw->fq[i].fqid = ctrl_if_attr.rx_fqid;
ethsw->fq[i].ethsw = ethsw;
- ethsw->fq[i++].type = DPSW_QUEUE_RX;
+ ethsw->fq[i].type = DPSW_QUEUE_RX;
+ ethsw->fq[i++].consume = ethsw_rx;
ethsw->fq[i].fqid = ctrl_if_attr.tx_err_conf_fqid;
ethsw->fq[i].ethsw = ethsw;
- ethsw->fq[i++].type = DPSW_QUEUE_TX_ERR_CONF;
+ ethsw->fq[i].type = DPSW_QUEUE_TX_ERR_CONF;
+ ethsw->fq[i++].consume = ethsw_tx_conf;
return 0;
}
@@ -1476,6 +1645,31 @@ static int ethsw_add_bufs(struct ethsw_core *ethsw, u16 bpid)
return 0;
}
+static int ethsw_refill_bp(struct ethsw_core *ethsw)
+{
+ int *count = ðsw->buf_count;
+ int new_count;
+ int err = 0;
+
+ if (unlikely(*count < DPAA2_ETHSW_REFILL_THRESH)) {
+ do {
+ new_count = ethsw_add_bufs(ethsw, ethsw->bpid);
+ if (unlikely(!new_count)) {
+ /* Out of memory; abort for now, we'll
+ * try later on
+ */
+ break;
+ }
+ *count += new_count;
+ } while (*count < DPAA2_ETHSW_NUM_BUFS);
+
+ if (unlikely(*count < DPAA2_ETHSW_NUM_BUFS))
+ err = -ENOMEM;
+ }
+
+ return err;
+}
+
static int ethsw_seed_bp(struct ethsw_core *ethsw)
{
int *count, i;
@@ -1613,6 +1807,106 @@ static void ethsw_destroy_rings(struct ethsw_core *ethsw)
dpaa2_io_store_destroy(ethsw->fq[i].store);
}
+static int ethsw_pull_fq(struct ethsw_fq *fq)
+{
+ int err, retries = 0;
+
+ /* Try to pull from the FQ while the portal is busy and we didn't hit
+ * the maximum number of retries
+ */
+ do {
+ err = dpaa2_io_service_pull_fq(NULL,
+ fq->fqid,
+ fq->store);
+ cpu_relax();
+ } while (err == -EBUSY && retries++ < DPAA2_ETHSW_SWP_BUSY_RETRIES);
+
+ if (unlikely(err))
+ dev_err(fq->ethsw->dev, "dpaa2_io_service_pull err %d", err);
+
+ return err;
+}
+
+/* Consume all frames pull-dequeued into the store */
+static int ethsw_store_consume(struct ethsw_fq *fq)
+{
+ struct ethsw_core *ethsw = fq->ethsw;
+ int cleaned = 0, is_last;
+ struct dpaa2_dq *dq;
+ int retries = 0;
+
+ do {
+ /* Get the next available FD from the store */
+ dq = dpaa2_io_store_next(fq->store, &is_last);
+ if (unlikely(!dq)) {
+ if (retries++ >= DPAA2_ETHSW_SWP_BUSY_RETRIES) {
+ dev_err_once(ethsw->dev,
+ "No valid dequeue response\n");
+ return -ETIMEDOUT;
+ }
+ continue;
+ }
+
+ /* Process the FD */
+ fq->consume(fq, dpaa2_dq_fd(dq));
+ cleaned++;
+
+ } while (!is_last);
+
+ return cleaned;
+}
+
+/* NAPI poll routine */
+static int ethsw_poll(struct napi_struct *napi, int budget)
+{
+ int err, cleaned = 0, store_cleaned, work_done;
+ struct ethsw_fq *fq;
+ int retries = 0;
+
+ fq = container_of(napi, struct ethsw_fq, napi);
+
+ do {
+ err = ethsw_pull_fq(fq);
+ if (unlikely(err))
+ break;
+
+ /* Refill pool if appropriate */
+ ethsw_refill_bp(fq->ethsw);
+
+ store_cleaned = ethsw_store_consume(fq);
+ cleaned += store_cleaned;
+
+ if (cleaned >= budget) {
+ work_done = budget;
+ goto out;
+ }
+
+ } while (store_cleaned);
+
+ /* We didn't consume the entire budget, so finish NAPI and
+ * re-enable data availability notifications
+ */
+ napi_complete_done(napi, cleaned);
+ do {
+ err = dpaa2_io_service_rearm(NULL, &fq->nctx);
+ cpu_relax();
+ } while (err == -EBUSY && retries++ < DPAA2_ETHSW_SWP_BUSY_RETRIES);
+
+ work_done = max(cleaned, 1);
+out:
+
+ return work_done;
+}
+
+static void ethsw_fqdan_cb(struct dpaa2_io_notification_ctx *nctx)
+{
+ struct ethsw_fq *fq;
+
+ fq = container_of(nctx, struct ethsw_fq, nctx);
+
+ napi_schedule_irqoff(&fq->napi);
+}
+
static int ethsw_setup_dpio(struct ethsw_core *ethsw)
{
struct dpsw_ctrl_if_queue_cfg queue_cfg;
@@ -1629,6 +1923,7 @@ static int ethsw_setup_dpio(struct ethsw_core *ethsw)
nctx->is_cdan = 0;
nctx->id = ethsw->fq[i].fqid;
nctx->desired_cpu = DPAA2_IO_ANY_CPU;
+ nctx->cb = ethsw_fqdan_cb;
err = dpaa2_io_service_register(NULL, nctx, ethsw->dev);
if (err) {
err = -EPROBE_DEFER;
@@ -1850,7 +2145,6 @@ static int ethsw_acl_mac_to_ctr_if(struct ethsw_port_priv *port_priv,
memset(&acl_entry_cfg, 0, sizeof(acl_entry_cfg));
acl_entry_cfg.precedence = port_priv->acl_cnt;
acl_entry_cfg.result.action = DPSW_ACL_ACTION_REDIRECT_TO_CTRL_IF;
-
acl_entry_cfg.key_iova = dma_map_single(dev, cmd_buff,
DPAA2_ETHSW_PORT_ACL_KEY_SIZE,
DMA_TO_DEVICE);
@@ -2147,6 +2441,17 @@ static int ethsw_probe(struct fsl_mc_device *sw_dev)
goto err_free_ports;
}
+ /* Add a NAPI instance for each of the Rx queues. The first port's
+ * net_device will be associated with the instances since we do not have
+ * different queues for each switch port.
+ */
+ if (ethsw_has_ctrl_if(ethsw)) {
+ for (i = 0; i < ETHSW_RX_NUM_FQS; i++)
+ netif_napi_add(ethsw->ports[0]->netdev,
+ ðsw->fq[i].napi, ethsw_poll,
+ NAPI_POLL_WEIGHT);
+ }
+
err = dpsw_enable(ethsw->mc_io, 0, ethsw->dpsw_handle);
if (err) {
dev_err(ethsw->dev, "dpsw_enable err %d\n", err);
diff --git a/drivers/staging/fsl-dpaa2/ethsw/ethsw.h b/drivers/staging/fsl-dpaa2/ethsw/ethsw.h
index a118cb87b1c8..b585d06be105 100644
--- a/drivers/staging/fsl-dpaa2/ethsw/ethsw.h
+++ b/drivers/staging/fsl-dpaa2/ethsw/ethsw.h
@@ -57,6 +57,7 @@
/* Buffer management */
#define BUFS_PER_CMD 7
#define DPAA2_ETHSW_NUM_BUFS (1024 * BUFS_PER_CMD)
+#define DPAA2_ETHSW_REFILL_THRESH (DPAA2_ETHSW_NUM_BUFS * 5 / 6)
/* ACL related configuration points */
#define DPAA2_ETHSW_PORT_MAX_ACL_ENTRIES 16
@@ -76,10 +77,12 @@
struct ethsw_core;
struct ethsw_fq {
+ void (*consume)(struct ethsw_fq *fq, const struct dpaa2_fd *fd);
struct ethsw_core *ethsw;
enum dpsw_queue_type type;
struct dpaa2_io_notification_ctx nctx;
struct dpaa2_io_store *store;
+ struct napi_struct napi;
u32 fqid;
};
@@ -116,6 +119,7 @@ struct ethsw_core {
struct fsl_mc_device *dpbp_dev;
int buf_count;
u16 bpid;
+ int napi_users;
};
static inline bool ethsw_has_ctrl_if(struct ethsw_core *ethsw)
--
1.9.1
For each of the switch ports, an ACL table is created and initialized at
port probe. These tables will be used to add ACL entries to redirect
control traffic to the CPU.
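To illustrate how these tables are used later in the series, below is a
minimal sketch of trapping STP BPDUs (destination MAC 01:80:c2:00:00:00)
to the control interface. The dpsw_acl_key match/mask layout, the
dpsw_acl_prepare_entry_cfg()/dpsw_acl_add_entry() helpers and the
DPAA2_ETHSW_PORT_ACL_KEY_SIZE define are assumed from the follow-up
"add ACL entry to redirect STP to CPU" patch; error handling is kept to
a minimum.

static int port_trap_stp_sketch(struct ethsw_port_priv *port_priv)
{
	/* needs <linux/etherdevice.h> and <linux/dma-mapping.h> */
	const u8 stp_mac[ETH_ALEN] = { 0x01, 0x80, 0xc2, 0x00, 0x00, 0x00 };
	struct ethsw_core *ethsw = port_priv->ethsw_data;
	struct dpsw_acl_entry_cfg entry_cfg = { 0 };
	struct dpsw_acl_key key = { 0 };	/* assumed match/mask layout */
	u8 *cmd_buff;
	int err;

	/* match the full 48 bits of the STP destination MAC */
	ether_addr_copy(key.match.l2_dest_mac, stp_mac);
	eth_broadcast_addr(key.mask.l2_dest_mac);

	cmd_buff = kzalloc(DPAA2_ETHSW_PORT_ACL_KEY_SIZE, GFP_KERNEL);
	if (!cmd_buff)
		return -ENOMEM;
	/* assumed helper: serializes the key into the MC format */
	dpsw_acl_prepare_entry_cfg(&key, cmd_buff);

	entry_cfg.precedence = 0;
	entry_cfg.result.action = DPSW_ACL_ACTION_REDIRECT_TO_CTRL_IF;
	entry_cfg.key_iova = dma_map_single(ethsw->dev, cmd_buff,
					    DPAA2_ETHSW_PORT_ACL_KEY_SIZE,
					    DMA_TO_DEVICE);
	if (dma_mapping_error(ethsw->dev, entry_cfg.key_iova)) {
		kfree(cmd_buff);
		return -ENOMEM;
	}

	err = dpsw_acl_add_entry(ethsw->mc_io, 0, ethsw->dpsw_handle,
				 port_priv->acl_id, &entry_cfg);

	dma_unmap_single(ethsw->dev, entry_cfg.key_iova,
			 DPAA2_ETHSW_PORT_ACL_KEY_SIZE, DMA_TO_DEVICE);
	kfree(cmd_buff);
	return err;
}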
Signed-off-by: Ioana Ciornei <[email protected]>
---
drivers/staging/fsl-dpaa2/ethsw/dpsw-cmd.h | 27 +++++++
drivers/staging/fsl-dpaa2/ethsw/dpsw.c | 111 +++++++++++++++++++++++++++++
drivers/staging/fsl-dpaa2/ethsw/dpsw.h | 30 ++++++++
drivers/staging/fsl-dpaa2/ethsw/ethsw.c | 44 +++++++++++-
drivers/staging/fsl-dpaa2/ethsw/ethsw.h | 3 +
5 files changed, 214 insertions(+), 1 deletion(-)
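For context (not part of this diff): the build_if_id_bitmap() helper
called by dpsw_acl_add_if()/dpsw_acl_remove_if() below is pre-existing
dpsw.c code. It packs the list of interface IDs into the little-endian
bitmap words of the command, roughly:

static void build_if_id_bitmap(__le64 *bmap, const u16 *id,
			       const u16 num_ifs)
{
	int i;

	for (i = 0; i < num_ifs; i++) {
		if (id[i] < DPSW_MAX_IF)
			bmap[id[i] / 64] |= cpu_to_le64(BIT_ULL(id[i] % 64));
	}
}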
diff --git a/drivers/staging/fsl-dpaa2/ethsw/dpsw-cmd.h b/drivers/staging/fsl-dpaa2/ethsw/dpsw-cmd.h
index ad25872a10b7..00244c6d39c9 100644
--- a/drivers/staging/fsl-dpaa2/ethsw/dpsw-cmd.h
+++ b/drivers/staging/fsl-dpaa2/ethsw/dpsw-cmd.h
@@ -71,6 +71,11 @@
#define DPSW_CMDID_FDB_SET_LEARNING_MODE DPSW_CMD_ID(0x088)
#define DPSW_CMDID_FDB_DUMP DPSW_CMD_ID(0x08A)
+#define DPSW_CMDID_ACL_ADD DPSW_CMD_ID(0x090)
+#define DPSW_CMDID_ACL_REMOVE DPSW_CMD_ID(0x091)
+#define DPSW_CMDID_ACL_ADD_IF DPSW_CMD_ID(0x094)
+#define DPSW_CMDID_ACL_REMOVE_IF DPSW_CMD_ID(0x095)
+
#define DPSW_CMDID_CTRL_IF_GET_ATTR DPSW_CMD_ID(0x0A0)
#define DPSW_CMDID_CTRL_IF_SET_POOLS DPSW_CMD_ID(0x0A1)
#define DPSW_CMDID_CTRL_IF_SET_QUEUE DPSW_CMD_ID(0x0A6)
@@ -400,6 +405,28 @@ struct dpsw_cmd_ctrl_if_set_queue {
__le32 options;
};
+struct dpsw_cmd_acl_add {
+ __le16 pad;
+ __le16 max_entries;
+};
+
+struct dpsw_rsp_acl_add {
+ __le16 acl_id;
+};
+
+struct dpsw_cmd_acl_remove {
+ __le16 acl_id;
+};
+
+struct dpsw_cmd_acl_if {
+ /* cmd word 0 */
+ __le16 acl_id;
+ __le16 num_ifs;
+ __le32 pad;
+ /* cmd word 1 */
+ __le64 if_id[4];
+};
+
struct dpsw_rsp_get_api_version {
__le16 version_major;
__le16 version_minor;
diff --git a/drivers/staging/fsl-dpaa2/ethsw/dpsw.c b/drivers/staging/fsl-dpaa2/ethsw/dpsw.c
index 6f4d63b9f02e..d9e27f0e9edb 100644
--- a/drivers/staging/fsl-dpaa2/ethsw/dpsw.c
+++ b/drivers/staging/fsl-dpaa2/ethsw/dpsw.c
@@ -1278,6 +1278,117 @@ int dpsw_ctrl_if_set_queue(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
}
/**
+ * dpsw_acl_add() - Adds ACL to L2 switch.
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPSW object
+ * @acl_id: Returned ACL ID, for future reference
+ * @cfg: ACL configuration
+ *
+ * Create an Access Control List. Multiple ACLs can be created and
+ * co-exist in the L2 switch.
+ *
+ * Return: '0' on Success; Error code otherwise.
+ */
+int dpsw_acl_add(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
+ u16 *acl_id, const struct dpsw_acl_cfg *cfg)
+{
+ struct dpsw_cmd_acl_add *cmd_params;
+ struct dpsw_rsp_acl_add *rsp_params;
+ struct fsl_mc_command cmd = { 0 };
+ int err;
+
+ cmd.header = mc_encode_cmd_header(DPSW_CMDID_ACL_ADD, cmd_flags, token);
+ cmd_params = (struct dpsw_cmd_acl_add *)cmd.params;
+ cmd_params->max_entries = cpu_to_le16(cfg->max_entries);
+
+ err = mc_send_command(mc_io, &cmd);
+ if (err)
+ return err;
+
+ rsp_params = (struct dpsw_rsp_acl_add *)cmd.params;
+ *acl_id = le16_to_cpu(rsp_params->acl_id);
+
+ return 0;
+}
+
+/**
+ * dpsw_acl_remove() - Removes ACL from L2 switch.
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPSW object
+ * @acl_id: ACL ID
+ *
+ * Return: '0' on Success; Error code otherwise.
+ */
+int dpsw_acl_remove(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
+ u16 acl_id)
+{
+ struct dpsw_cmd_acl_remove *cmd_params;
+ struct fsl_mc_command cmd = { 0 };
+
+ cmd.header = mc_encode_cmd_header(DPSW_CMDID_ACL_REMOVE, cmd_flags,
+ token);
+ cmd_params = (struct dpsw_cmd_acl_remove *)cmd.params;
+ cmd_params->acl_id = cpu_to_le16(acl_id);
+
+ return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpsw_acl_add_if() - Associate interface/interfaces with ACL.
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPSW object
+ * @acl_id: ACL ID
+ * @cfg: Interfaces list
+ *
+ * Return: '0' on Success; Error code otherwise.
+ */
+int dpsw_acl_add_if(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
+ u16 acl_id, const struct dpsw_acl_if_cfg *cfg)
+{
+ struct dpsw_cmd_acl_if *cmd_params;
+ struct fsl_mc_command cmd = { 0 };
+
+ cmd.header = mc_encode_cmd_header(DPSW_CMDID_ACL_ADD_IF, cmd_flags,
+ token);
+ cmd_params = (struct dpsw_cmd_acl_if *)cmd.params;
+ cmd_params->acl_id = cpu_to_le16(acl_id);
+ cmd_params->num_ifs = cpu_to_le16(cfg->num_ifs);
+ build_if_id_bitmap(cmd_params->if_id, cfg->if_id, cfg->num_ifs);
+
+ return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpsw_acl_remove_if() - De-associate interface/interfaces from ACL.
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPSW object
+ * @acl_id: ACL ID
+ * @cfg: Interfaces list
+ *
+ * Return: '0' on Success; Error code otherwise.
+ */
+int dpsw_acl_remove_if(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
+ u16 acl_id, const struct dpsw_acl_if_cfg *cfg)
+{
+ struct fsl_mc_command cmd = { 0 };
+ struct dpsw_cmd_acl_if *cmd_params;
+
+ cmd.header = mc_encode_cmd_header(DPSW_CMDID_ACL_REMOVE_IF,
+ cmd_flags,
+ token);
+ cmd_params = (struct dpsw_cmd_acl_if *)cmd.params;
+ cmd_params->acl_id = cpu_to_le16(acl_id);
+ cmd_params->num_ifs = cpu_to_le16(cfg->num_ifs);
+ build_if_id_bitmap(cmd_params->if_id, cfg->if_id, cfg->num_ifs);
+
+ return mc_send_command(mc_io, &cmd);
+}
+
+/**
* dpsw_get_api_version() - Get Data Path Switch API version
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
diff --git a/drivers/staging/fsl-dpaa2/ethsw/dpsw.h b/drivers/staging/fsl-dpaa2/ethsw/dpsw.h
index 55b5bbac9fbd..0726a5313c4e 100644
--- a/drivers/staging/fsl-dpaa2/ethsw/dpsw.h
+++ b/drivers/staging/fsl-dpaa2/ethsw/dpsw.h
@@ -645,6 +645,36 @@ struct dpsw_fdb_attr {
u16 max_fdb_mc_groups;
};
+/**
+ * struct dpsw_acl_cfg - ACL Configuration
+ * @max_entries: Number of ACL entries
+ */
+struct dpsw_acl_cfg {
+ u16 max_entries;
+};
+
+int dpsw_acl_add(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
+ u16 *acl_id, const struct dpsw_acl_cfg *cfg);
+
+int dpsw_acl_remove(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
+ u16 acl_id);
+
+/**
+ * struct dpsw_acl_if_cfg - List of interfaces to associate with ACL table
+ * @num_ifs: Number of interfaces
+ * @if_id: List of interfaces
+ */
+struct dpsw_acl_if_cfg {
+ u16 num_ifs;
+ u16 if_id[DPSW_MAX_IF];
+};
+
+int dpsw_acl_add_if(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
+ u16 acl_id, const struct dpsw_acl_if_cfg *cfg);
+
+int dpsw_acl_remove_if(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
+ u16 acl_id, const struct dpsw_acl_if_cfg *cfg);
+
int dpsw_get_api_version(struct fsl_mc_io *mc_io,
u32 cmd_flags,
u16 *major_ver,
diff --git a/drivers/staging/fsl-dpaa2/ethsw/ethsw.c b/drivers/staging/fsl-dpaa2/ethsw/ethsw.c
index 125fd6ce669e..f3e339c5e9a1 100644
--- a/drivers/staging/fsl-dpaa2/ethsw/ethsw.c
+++ b/drivers/staging/fsl-dpaa2/ethsw/ethsw.c
@@ -1683,9 +1683,11 @@ static int ethsw_init(struct fsl_mc_device *sw_dev)
static int ethsw_port_init(struct ethsw_port_priv *port_priv, u16 port)
{
- struct net_device *netdev = port_priv->netdev;
struct ethsw_core *ethsw = port_priv->ethsw_data;
+ struct net_device *netdev = port_priv->netdev;
+ struct dpsw_acl_if_cfg acl_if_cfg;
struct dpsw_vlan_if_cfg vcfg;
+ struct dpsw_acl_cfg acl_cfg;
int err;
/* Switch starts with all ports configured to VLAN 1. Need to
@@ -1711,6 +1713,30 @@ static int ethsw_port_init(struct ethsw_port_priv *port_priv, u16 port)
if (err)
netdev_err(netdev, "dpsw_vlan_remove_if err %d\n", err);
+ /* create the ACL table for this particular interface */
+ acl_cfg.max_entries = DPAA2_ETHSW_PORT_MAX_ACL_ENTRIES;
+ err = dpsw_acl_add(ethsw->mc_io, 0, ethsw->dpsw_handle,
+ &port_priv->acl_id, &acl_cfg);
+ if (err) {
+ netdev_err(netdev, "dpsw_acl_add err %d\n", err);
+ return err;
+ }
+
+ acl_if_cfg.num_ifs = 1;
+ acl_if_cfg.if_id[0] = port_priv->idx;
+ err = dpsw_acl_add_if(ethsw->mc_io, 0, ethsw->dpsw_handle,
+ port_priv->acl_id, &acl_if_cfg);
+ if (err) {
+ netdev_err(netdev, "dpsw_acl_add_if err %d\n", err);
+ goto err_acl_add;
+ }
+
+ return 0;
+
+err_acl_add:
+ dpsw_acl_remove(ethsw->mc_io, 0, ethsw->dpsw_handle,
+ port_priv->acl_id);
+
return err;
}
@@ -1756,6 +1782,21 @@ static void ethsw_ctrl_if_teardown(struct ethsw_core *ethsw)
ethsw_free_dpbp(ethsw);
}
+static void ethsw_port_takedown(struct ethsw_port_priv *port_priv)
+{
+ struct dpsw_acl_if_cfg acl_if_cfg;
+
+ acl_if_cfg.num_ifs = 1;
+ acl_if_cfg.if_id[0] = port_priv->idx;
+ dpsw_acl_remove_if(port_priv->ethsw_data->mc_io, 0,
+ port_priv->ethsw_data->dpsw_handle,
+ port_priv->acl_id, &acl_if_cfg);
+
+ dpsw_acl_remove(port_priv->ethsw_data->mc_io, 0,
+ port_priv->ethsw_data->dpsw_handle,
+ port_priv->acl_id);
+}
+
static int ethsw_remove(struct fsl_mc_device *sw_dev)
{
struct ethsw_port_priv *port_priv;
@@ -1778,6 +1819,7 @@ static int ethsw_remove(struct fsl_mc_device *sw_dev)
for (i = 0; i < ethsw->sw_attr.num_ifs; i++) {
port_priv = ethsw->ports[i];
unregister_netdev(port_priv->netdev);
+ ethsw_port_takedown(port_priv);
free_netdev(port_priv->netdev);
}
kfree(ethsw->ports);
diff --git a/drivers/staging/fsl-dpaa2/ethsw/ethsw.h b/drivers/staging/fsl-dpaa2/ethsw/ethsw.h
index e9a80cf185d7..3e1da3de0ca6 100644
--- a/drivers/staging/fsl-dpaa2/ethsw/ethsw.h
+++ b/drivers/staging/fsl-dpaa2/ethsw/ethsw.h
@@ -53,6 +53,8 @@
/* Dequeue store size */
#define DPAA2_ETHSW_STORE_SIZE 16
+#define DPAA2_ETHSW_PORT_MAX_ACL_ENTRIES 16
+
extern const struct ethtool_ops ethsw_port_ethtool_ops;
struct ethsw_core;
@@ -77,6 +79,7 @@ struct ethsw_port_priv {
u8 vlans[VLAN_VID_MASK + 1];
u16 pvid;
struct net_device *bridge_dev;
+ u16 acl_id;
};
/* Switch data */
--
1.9.1
Set up interrupts on the control interface queues. We do not force an
exact affinity between the interrupts received for a specific queue and
a CPU.
Also, the DPSW object version is incremented since the
dpsw_ctrl_if_set_queue() API was introduced in the v8.4 object
(first seen in the MC 10.19.0 firmware).
Signed-off-by: Ioana Ciornei <[email protected]>
---
drivers/staging/fsl-dpaa2/ethsw/dpsw-cmd.h | 17 +++++++-
drivers/staging/fsl-dpaa2/ethsw/dpsw.c | 33 +++++++++++++++
drivers/staging/fsl-dpaa2/ethsw/dpsw.h | 23 +++++++++++
drivers/staging/fsl-dpaa2/ethsw/ethsw.c | 65 ++++++++++++++++++++++++++++++
drivers/staging/fsl-dpaa2/ethsw/ethsw.h | 1 +
5 files changed, 138 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/fsl-dpaa2/ethsw/dpsw-cmd.h b/drivers/staging/fsl-dpaa2/ethsw/dpsw-cmd.h
index 0f1f2c787e99..ad25872a10b7 100644
--- a/drivers/staging/fsl-dpaa2/ethsw/dpsw-cmd.h
+++ b/drivers/staging/fsl-dpaa2/ethsw/dpsw-cmd.h
@@ -12,7 +12,7 @@
/* DPSW Version */
#define DPSW_VER_MAJOR 8
-#define DPSW_VER_MINOR 1
+#define DPSW_VER_MINOR 4
#define DPSW_CMD_BASE_VERSION 1
#define DPSW_CMD_ID_OFFSET 4
@@ -73,6 +73,7 @@
#define DPSW_CMDID_CTRL_IF_GET_ATTR DPSW_CMD_ID(0x0A0)
#define DPSW_CMDID_CTRL_IF_SET_POOLS DPSW_CMD_ID(0x0A1)
+#define DPSW_CMDID_CTRL_IF_SET_QUEUE DPSW_CMD_ID(0x0A6)
/* Macros for accessing command fields smaller than 1byte */
#define DPSW_MASK(field) \
@@ -385,6 +386,20 @@ struct dpsw_cmd_ctrl_if_set_pools {
__le16 buffer_size[DPSW_MAX_DPBP];
};
+#define DPSW_DEST_TYPE_SHIFT 0
+#define DPSW_DEST_TYPE_SIZE 4
+
+struct dpsw_cmd_ctrl_if_set_queue {
+ __le32 dest_id;
+ u8 dest_priority;
+ u8 pad;
+ /* from LSB: dest_type:4 */
+ u8 dest_type;
+ u8 qtype;
+ __le64 user_ctx;
+ __le32 options;
+};
+
struct dpsw_rsp_get_api_version {
__le16 version_major;
__le16 version_minor;
diff --git a/drivers/staging/fsl-dpaa2/ethsw/dpsw.c b/drivers/staging/fsl-dpaa2/ethsw/dpsw.c
index bed7537efee5..6f4d63b9f02e 100644
--- a/drivers/staging/fsl-dpaa2/ethsw/dpsw.c
+++ b/drivers/staging/fsl-dpaa2/ethsw/dpsw.c
@@ -1245,6 +1245,39 @@ int dpsw_ctrl_if_set_pools(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
}
/**
+ * dpsw_ctrl_if_set_queue() - Set Rx queue configuration
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of dpsw object
+ * @qtype: dpsw_queue_type of the targeted queue
+ * @cfg: Rx queue configuration
+ *
+ * Return: '0' on Success; Error code otherwise.
+ */
+int dpsw_ctrl_if_set_queue(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
+ enum dpsw_queue_type qtype,
+ const struct dpsw_ctrl_if_queue_cfg *cfg)
+{
+ struct dpsw_cmd_ctrl_if_set_queue *cmd_params;
+ struct fsl_mc_command cmd = { 0 };
+
+ cmd.header = mc_encode_cmd_header(DPSW_CMDID_CTRL_IF_SET_QUEUE,
+ cmd_flags,
+ token);
+ cmd_params = (struct dpsw_cmd_ctrl_if_set_queue *)cmd.params;
+ cmd_params->dest_id = cpu_to_le32(cfg->dest_cfg.dest_id);
+ cmd_params->dest_priority = cfg->dest_cfg.priority;
+ cmd_params->qtype = qtype;
+ cmd_params->user_ctx = cpu_to_le64(cfg->user_ctx);
+ cmd_params->options = cpu_to_le32(cfg->options);
+ dpsw_set_field(cmd_params->dest_type,
+ DEST_TYPE,
+ cfg->dest_cfg.dest_type);
+
+ return mc_send_command(mc_io, &cmd);
+}
+
+/**
* dpsw_get_api_version() - Get Data Path Switch API version
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
diff --git a/drivers/staging/fsl-dpaa2/ethsw/dpsw.h b/drivers/staging/fsl-dpaa2/ethsw/dpsw.h
index 9596d0ebe921..55b5bbac9fbd 100644
--- a/drivers/staging/fsl-dpaa2/ethsw/dpsw.h
+++ b/drivers/staging/fsl-dpaa2/ethsw/dpsw.h
@@ -222,6 +222,29 @@ struct dpsw_ctrl_if_pools_cfg {
int dpsw_ctrl_if_set_pools(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
const struct dpsw_ctrl_if_pools_cfg *cfg);
+#define DPSW_CTRL_IF_QUEUE_OPT_USER_CTX 0x00000001
+#define DPSW_CTRL_IF_QUEUE_OPT_DEST 0x00000002
+
+enum dpsw_ctrl_if_dest {
+ DPSW_CTRL_IF_DEST_NONE = 0,
+ DPSW_CTRL_IF_DEST_DPIO = 1,
+};
+
+struct dpsw_ctrl_if_dest_cfg {
+ enum dpsw_ctrl_if_dest dest_type;
+ int dest_id;
+ u8 priority;
+};
+
+struct dpsw_ctrl_if_queue_cfg {
+ u32 options;
+ u64 user_ctx;
+ struct dpsw_ctrl_if_dest_cfg dest_cfg;
+};
+
+int dpsw_ctrl_if_set_queue(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
+ enum dpsw_queue_type qtype,
+ const struct dpsw_ctrl_if_queue_cfg *cfg);
/**
* enum dpsw_action - Action selection for special/control frames
* @DPSW_ACTION_DROP: Drop frame
diff --git a/drivers/staging/fsl-dpaa2/ethsw/ethsw.c b/drivers/staging/fsl-dpaa2/ethsw/ethsw.c
index 7b68eb22a951..125fd6ce669e 100644
--- a/drivers/staging/fsl-dpaa2/ethsw/ethsw.c
+++ b/drivers/staging/fsl-dpaa2/ethsw/ethsw.c
@@ -1486,6 +1486,64 @@ static void ethsw_destroy_rings(struct ethsw_core *ethsw)
dpaa2_io_store_destroy(ethsw->fq[i].store);
}
+static int ethsw_setup_dpio(struct ethsw_core *ethsw)
+{
+ struct dpsw_ctrl_if_queue_cfg queue_cfg;
+ struct dpaa2_io_notification_ctx *nctx;
+ int err, i, j;
+
+ for (i = 0; i < ETHSW_RX_NUM_FQS; i++) {
+ nctx = ðsw->fq[i].nctx;
+
+ /* Register a new software context for the FQID.
+ * By using NULL as the first parameter, we specify that we do
+ * not care on which CPU the interrupts for this queue are received
+ */
+ nctx->is_cdan = 0;
+ nctx->id = ethsw->fq[i].fqid;
+ nctx->desired_cpu = DPAA2_IO_ANY_CPU;
+ err = dpaa2_io_service_register(NULL, nctx, ethsw->dev);
+ if (err) {
+ err = -EPROBE_DEFER;
+ goto err_register;
+ }
+
+ queue_cfg.options = DPSW_CTRL_IF_QUEUE_OPT_DEST |
+ DPSW_CTRL_IF_QUEUE_OPT_USER_CTX;
+ queue_cfg.dest_cfg.dest_type = DPSW_CTRL_IF_DEST_DPIO;
+ queue_cfg.dest_cfg.dest_id = nctx->dpio_id;
+ queue_cfg.dest_cfg.priority = 0;
+ queue_cfg.user_ctx = nctx->qman64;
+
+ err = dpsw_ctrl_if_set_queue(ethsw->mc_io, 0,
+ ethsw->dpsw_handle,
+ ethsw->fq[i].type,
+ &queue_cfg);
+ if (err)
+ goto err_set_queue;
+ }
+
+ return 0;
+
+err_set_queue:
+ dpaa2_io_service_deregister(NULL, nctx, ethsw->dev);
+err_register:
+ for (j = 0; j < i; j++)
+ dpaa2_io_service_deregister(NULL, ðsw->fq[j].nctx,
+ ethsw->dev);
+
+ return err;
+}
+
+static void ethsw_free_dpio(struct ethsw_core *ethsw)
+{
+ int i;
+
+ for (i = 0; i < ETHSW_RX_NUM_FQS; i++)
+ dpaa2_io_service_deregister(NULL, ðsw->fq[i].nctx,
+ ethsw->dev);
+}
+
static int ethsw_ctrl_if_setup(struct ethsw_core *ethsw)
{
int err;
@@ -1504,8 +1562,14 @@ static int ethsw_ctrl_if_setup(struct ethsw_core *ethsw)
if (err)
goto err_free_dpbp;
+ err = ethsw_setup_dpio(ethsw);
+ if (err)
+ goto err_destroy_rings;
+
return 0;
+err_destroy_rings:
+ ethsw_destroy_rings(ethsw);
err_free_dpbp:
ethsw_free_dpbp(ethsw);
@@ -1687,6 +1751,7 @@ static void ethsw_takedown(struct fsl_mc_device *sw_dev)
static void ethsw_ctrl_if_teardown(struct ethsw_core *ethsw)
{
+ ethsw_free_dpio(ethsw);
ethsw_destroy_rings(ethsw);
ethsw_free_dpbp(ethsw);
}
diff --git a/drivers/staging/fsl-dpaa2/ethsw/ethsw.h b/drivers/staging/fsl-dpaa2/ethsw/ethsw.h
index a1ca30c615d5..e9a80cf185d7 100644
--- a/drivers/staging/fsl-dpaa2/ethsw/ethsw.h
+++ b/drivers/staging/fsl-dpaa2/ethsw/ethsw.h
@@ -60,6 +60,7 @@
struct ethsw_fq {
struct ethsw_core *ethsw;
enum dpsw_queue_type type;
+ struct dpaa2_io_notification_ctx nctx;
struct dpaa2_io_store *store;
u32 fqid;
};
--
1.9.1
Allocate and set up a buffer pool, needed on the Rx path of the control
interface. Also, define the Rx buffer size seen by the WRIOP, derived
from the PAGE_SIZE buffers that are seeded.
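To make the sizing concrete, a back-of-the-envelope example, assuming
4 KiB pages and a struct skb_shared_info of roughly 320 bytes (both are
configuration dependent):

	DPAA2_ETHSW_RX_BUF_RAW_SIZE = PAGE_SIZE            = 4096
	DPAA2_ETHSW_RX_BUF_TAILROOM = SKB_DATA_ALIGN(~320) = 320
	DPAA2_ETHSW_RX_BUF_SIZE     = 4096 - 320           = 3776

so the WRIOP is told to use roughly 3776-byte buffers, leaving tailroom
in each PAGE_SIZE buffer for build_skb() to place the shared info area.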
Signed-off-by: Ioana Ciornei <[email protected]>
---
drivers/staging/fsl-dpaa2/ethsw/dpsw-cmd.h | 12 ++++
drivers/staging/fsl-dpaa2/ethsw/dpsw.c | 31 ++++++++++
drivers/staging/fsl-dpaa2/ethsw/dpsw.h | 26 +++++++++
drivers/staging/fsl-dpaa2/ethsw/ethsw.c | 90 ++++++++++++++++++++++++++++++
drivers/staging/fsl-dpaa2/ethsw/ethsw.h | 10 ++++
5 files changed, 169 insertions(+)
diff --git a/drivers/staging/fsl-dpaa2/ethsw/dpsw-cmd.h b/drivers/staging/fsl-dpaa2/ethsw/dpsw-cmd.h
index 07a8178f4b37..0f1f2c787e99 100644
--- a/drivers/staging/fsl-dpaa2/ethsw/dpsw-cmd.h
+++ b/drivers/staging/fsl-dpaa2/ethsw/dpsw-cmd.h
@@ -8,6 +8,8 @@
#ifndef __FSL_DPSW_CMD_H
#define __FSL_DPSW_CMD_H
+#include "dpsw.h"
+
/* DPSW Version */
#define DPSW_VER_MAJOR 8
#define DPSW_VER_MINOR 1
@@ -70,6 +72,7 @@
#define DPSW_CMDID_FDB_DUMP DPSW_CMD_ID(0x08A)
#define DPSW_CMDID_CTRL_IF_GET_ATTR DPSW_CMD_ID(0x0A0)
+#define DPSW_CMDID_CTRL_IF_SET_POOLS DPSW_CMD_ID(0x0A1)
/* Macros for accessing command fields smaller than 1byte */
#define DPSW_MASK(field) \
@@ -373,6 +376,15 @@ struct dpsw_rsp_ctrl_if_get_attr {
__le32 tx_err_conf_fqid;
};
+#define DPSW_BACKUP_POOL(val, order) (((val) & 0x1) << (order))
+struct dpsw_cmd_ctrl_if_set_pools {
+ u8 num_dpbp;
+ u8 backup_pool_mask;
+ __le16 pad;
+ __le32 dpbp_id[DPSW_MAX_DPBP];
+ __le16 buffer_size[DPSW_MAX_DPBP];
+};
+
struct dpsw_rsp_get_api_version {
__le16 version_major;
__le16 version_minor;
diff --git a/drivers/staging/fsl-dpaa2/ethsw/dpsw.c b/drivers/staging/fsl-dpaa2/ethsw/dpsw.c
index f093d622a0de..bed7537efee5 100644
--- a/drivers/staging/fsl-dpaa2/ethsw/dpsw.c
+++ b/drivers/staging/fsl-dpaa2/ethsw/dpsw.c
@@ -1214,6 +1214,37 @@ int dpsw_ctrl_if_get_attributes(struct fsl_mc_io *mc_io, u32 cmd_flags,
}
/**
+ * dpsw_ctrl_if_set_pools() - Set control interface buffer pools
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPSW object
+ * @cfg: Buffer pools configuration
+ *
+ * Return: '0' on Success; Error code otherwise.
+ */
+int dpsw_ctrl_if_set_pools(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
+ const struct dpsw_ctrl_if_pools_cfg *cfg)
+{
+ struct dpsw_cmd_ctrl_if_set_pools *cmd_params;
+ struct fsl_mc_command cmd = { 0 };
+ int i;
+
+ cmd.header = mc_encode_cmd_header(DPSW_CMDID_CTRL_IF_SET_POOLS,
+ cmd_flags, token);
+ cmd_params = (struct dpsw_cmd_ctrl_if_set_pools *)cmd.params;
+ cmd_params->num_dpbp = cfg->num_dpbp;
+ for (i = 0; i < DPSW_MAX_DPBP; i++) {
+ cmd_params->dpbp_id[i] = cpu_to_le32(cfg->pools[i].dpbp_id);
+ cmd_params->buffer_size[i] =
+ cpu_to_le16(cfg->pools[i].buffer_size);
+ cmd_params->backup_pool_mask |=
+ DPSW_BACKUP_POOL(cfg->pools[i].backup_pool, i);
+ }
+
+ return mc_send_command(mc_io, &cmd);
+}
+
+/**
* dpsw_get_api_version() - Get Data Path Switch API version
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
diff --git a/drivers/staging/fsl-dpaa2/ethsw/dpsw.h b/drivers/staging/fsl-dpaa2/ethsw/dpsw.h
index cae69e20490c..9596d0ebe921 100644
--- a/drivers/staging/fsl-dpaa2/ethsw/dpsw.h
+++ b/drivers/staging/fsl-dpaa2/ethsw/dpsw.h
@@ -197,6 +197,32 @@ enum dpsw_queue_type {
};
/**
+ * Maximum number of DPBP
+ */
+#define DPSW_MAX_DPBP 8
+
+/**
+ * struct dpsw_ctrl_if_pools_cfg - Control interface buffer pools configuration
+ * @num_dpbp: Number of DPBPs
+ * @pools: Array of buffer pool parameters; the number of valid entries
+ * must match the 'num_dpbp' value
+ * @pools.dpbp_id: DPBP object ID
+ * @pools.buffer_size: Buffer size
+ * @pools.backup_pool: Backup pool
+ */
+struct dpsw_ctrl_if_pools_cfg {
+ u8 num_dpbp;
+ struct {
+ int dpbp_id;
+ u16 buffer_size;
+ int backup_pool;
+ } pools[DPSW_MAX_DPBP];
+};
+
+int dpsw_ctrl_if_set_pools(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
+ const struct dpsw_ctrl_if_pools_cfg *cfg);
+
+/**
* enum dpsw_action - Action selection for special/control frames
* @DPSW_ACTION_DROP: Drop frame
* @DPSW_ACTION_REDIRECT: Redirect frame to control port
diff --git a/drivers/staging/fsl-dpaa2/ethsw/ethsw.c b/drivers/staging/fsl-dpaa2/ethsw/ethsw.c
index 633eff42b996..4ed335af2cc8 100644
--- a/drivers/staging/fsl-dpaa2/ethsw/ethsw.c
+++ b/drivers/staging/fsl-dpaa2/ethsw/ethsw.c
@@ -1382,6 +1382,83 @@ static int ethsw_setup_fqs(struct ethsw_core *ethsw)
return 0;
}
+static int ethsw_setup_dpbp(struct ethsw_core *ethsw)
+{
+ struct dpsw_ctrl_if_pools_cfg dpsw_ctrl_if_pools_cfg = { 0 };
+ struct device *dev = ethsw->dev;
+ struct fsl_mc_device *dpbp_dev;
+ struct dpbp_attr dpbp_attrs;
+ int err;
+
+ err = fsl_mc_object_allocate(to_fsl_mc_device(dev), FSL_MC_POOL_DPBP,
+ &dpbp_dev);
+ if (err) {
+ if (err == -ENXIO)
+ err = -EPROBE_DEFER;
+ else
+ dev_err(dev, "DPBP device allocation failed\n");
+ return err;
+ }
+ ethsw->dpbp_dev = dpbp_dev;
+
+ err = dpbp_open(ethsw->mc_io, 0, dpbp_dev->obj_desc.id,
+ &dpbp_dev->mc_handle);
+ if (err) {
+ dev_err(dev, "dpbp_open() failed\n");
+ goto err_open;
+ }
+
+ err = dpbp_reset(ethsw->mc_io, 0, dpbp_dev->mc_handle);
+ if (err) {
+ dev_err(dev, "dpbp_reset() failed\n");
+ goto err_reset;
+ }
+
+ err = dpbp_enable(ethsw->mc_io, 0, dpbp_dev->mc_handle);
+ if (err) {
+ dev_err(dev, "dpbp_enable() failed\n");
+ goto err_enable;
+ }
+
+ err = dpbp_get_attributes(ethsw->mc_io, 0, dpbp_dev->mc_handle,
+ &dpbp_attrs);
+ if (err) {
+ dev_err(dev, "dpbp_get_attributes() failed\n");
+ goto err_get_attr;
+ }
+
+ dpsw_ctrl_if_pools_cfg.num_dpbp = 1;
+ dpsw_ctrl_if_pools_cfg.pools[0].dpbp_id = dpbp_attrs.id;
+ dpsw_ctrl_if_pools_cfg.pools[0].buffer_size = DPAA2_ETHSW_RX_BUF_SIZE;
+ dpsw_ctrl_if_pools_cfg.pools[0].backup_pool = 0;
+
+ err = dpsw_ctrl_if_set_pools(ethsw->mc_io, 0, ethsw->dpsw_handle,
+ &dpsw_ctrl_if_pools_cfg);
+ if (err) {
+ dev_err(dev, "dpsw_ctrl_if_set_pools() failed\n");
+ goto err_get_attr;
+ }
+ ethsw->bpid = dpbp_attrs.id;
+
+ return 0;
+
+err_get_attr:
+ dpbp_disable(ethsw->mc_io, 0, dpbp_dev->mc_handle);
+err_enable:
+err_reset:
+ dpbp_close(ethsw->mc_io, 0, dpbp_dev->mc_handle);
+err_open:
+ fsl_mc_object_free(dpbp_dev);
+ return err;
+}
+
+static void ethsw_free_dpbp(struct ethsw_core *ethsw)
+{
+ dpbp_disable(ethsw->mc_io, 0, ethsw->dpbp_dev->mc_handle);
+ dpbp_close(ethsw->mc_io, 0, ethsw->dpbp_dev->mc_handle);
+ fsl_mc_object_free(ethsw->dpbp_dev);
+}
+
static int ethsw_ctrl_if_setup(struct ethsw_core *ethsw)
{
int err;
@@ -1391,6 +1468,11 @@ static int ethsw_ctrl_if_setup(struct ethsw_core *ethsw)
if (err)
return err;
+ /* setup the buffer pool needed on the Rx path */
+ err = ethsw_setup_dpbp(ethsw);
+ if (err)
+ return err;
+
return 0;
}
@@ -1567,6 +1649,11 @@ static void ethsw_takedown(struct fsl_mc_device *sw_dev)
dev_warn(dev, "dpsw_close err %d\n", err);
}
+static void ethsw_ctrl_if_teardown(struct ethsw_core *ethsw)
+{
+ ethsw_free_dpbp(ethsw);
+}
+
static int ethsw_remove(struct fsl_mc_device *sw_dev)
{
struct ethsw_port_priv *port_priv;
@@ -1577,6 +1664,9 @@ static int ethsw_remove(struct fsl_mc_device *sw_dev)
dev = &sw_dev->dev;
ethsw = dev_get_drvdata(dev);
+ if (ethsw_has_ctrl_if(ethsw))
+ ethsw_ctrl_if_teardown(ethsw);
+
ethsw_teardown_irqs(sw_dev);
destroy_workqueue(ethsw_owq);
diff --git a/drivers/staging/fsl-dpaa2/ethsw/ethsw.h b/drivers/staging/fsl-dpaa2/ethsw/ethsw.h
index 3ce4ac4e84fc..c3cfd08c21c7 100644
--- a/drivers/staging/fsl-dpaa2/ethsw/ethsw.h
+++ b/drivers/staging/fsl-dpaa2/ethsw/ethsw.h
@@ -17,6 +17,7 @@
#include <uapi/linux/if_bridge.h>
#include <net/switchdev.h>
#include <linux/if_bridge.h>
+#include <linux/fsl/mc.h>
#include "dpsw.h"
@@ -40,6 +41,13 @@
/* Number of receive queues (one RX and one TX_CONF) */
#define ETHSW_RX_NUM_FQS 2
+/* Hardware requires alignment for ingress/egress buffer addresses */
+#define DPAA2_ETHSW_RX_BUF_RAW_SIZE PAGE_SIZE
+#define DPAA2_ETHSW_RX_BUF_TAILROOM \
+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info))
+#define DPAA2_ETHSW_RX_BUF_SIZE \
+ (DPAA2_ETHSW_RX_BUF_RAW_SIZE - DPAA2_ETHSW_RX_BUF_TAILROOM)
+
extern const struct ethtool_ops ethsw_port_ethtool_ops;
struct ethsw_core;
@@ -77,6 +85,8 @@ struct ethsw_core {
bool learning;
struct ethsw_fq fq[ETHSW_RX_NUM_FQS];
+ struct fsl_mc_device *dpbp_dev;
+ u16 bpid;
};
static inline bool ethsw_has_ctrl_if(struct ethsw_core *ethsw)
--
1.9.1
On Tue, Nov 05, 2019 at 02:34:23PM +0200, Ioana Ciornei wrote:
> This patch set adds support for Rx/Tx capabilities on switch port interfaces.
> Also, control traffic is redirected through ACLs to the CPU in order to
> enable proper STP protocol handling.
>
> The control interface comprises 3 queues in total: Rx, Rx error and
> Tx confirmation. In this patch set we only enable the Rx and Tx conf
> queues. All switch ports share the same queues when frames are
> redirected to the CPU. Information regarding the ingress switch port is
> passed through frame metadata - the flow context field of the frame
> descriptor. NAPI instances are also shared between switch net_devices;
> they are enabled when the first switch port calls .dev_open() and
> disabled again only when no switch port is up anymore.
>
> The new feature is enabled only on MC versions 10.19.0 and newer
> (10.19.0 is soon to be released).
I thought I asked for no new features until this code gets out of
staging? Only then can you add new stuff. Please work to make that
happen first.
thanks,
greg k-h
On 05.11.2019 15:24, Greg KH wrote:
> On Tue, Nov 05, 2019 at 02:34:23PM +0200, Ioana Ciornei wrote:
>> This patch set adds support for Rx/Tx capabilities on switch port interfaces.
>> Also, control traffic is redirected through ACLs to the CPU in order to
>> enable proper STP protocol handling.
>>
>> The control interface comprises 3 queues in total: Rx, Rx error and
>> Tx confirmation. In this patch set we only enable the Rx and Tx conf
>> queues. All switch ports share the same queues when frames are
>> redirected to the CPU. Information regarding the ingress switch port is
>> passed through frame metadata - the flow context field of the frame
>> descriptor. NAPI instances are also shared between switch net_devices;
>> they are enabled when the first switch port calls .dev_open() and
>> disabled again only when no switch port is up anymore.
>>
>> The new feature is enabled only on MC versions 10.19.0 and newer
>> (10.19.0 is soon to be released).
>
> I thought I asked for no new features until this code gets out of
> staging? Only then can you add new stuff. Please work to make that
> happen first.
>
> thanks,
>
> greg k-h
>
Sorry, but I do not remember a suggestion to first move the driver out
of staging.
Anyhow, I submitted against staging because in an earlier discussion[1]
it was suggested to first add Rx/Tx capabilities before moving it:
"Ah. Does this also mean it cannot receive?
That makes some of this code pointless and untested.
I'm not sure we would be willing to move this out of staging until it
can transmit and receive."
I'm not sure how I should proceed here.
Thanks,
Ioana
[1] https://lkml.org/lkml/2019/8/9/892
On Tue, Nov 05, 2019 at 02:24:35PM +0100, Greg KH wrote:
> On Tue, Nov 05, 2019 at 02:34:23PM +0200, Ioana Ciornei wrote:
> > This patch set adds support for Rx/Tx capabilities on switch port interfaces.
> > Also, control traffic is redirected through ACLs to the CPU in order to
> > enable proper STP protocol handling.
> I thought I asked for no new features until this code gets out of
> staging? Only then can you add new stuff. Please work to make that
> happen first.
Hi Greg
This is in response to my review of the code in staging. The current
code is missing a core feature for an Ethernet switch driver, being
able to send/receive frames from the host. At the moment it can only
control the hardware for how it switches Ethernet frames coming
into/going out of external ports.
One of the core ideas behind how Linux handles Ethernet switches is
that they are just a bunch of network interfaces. And currently, these
network interfaces cannot send/receive. We would never move an
Ethernet driver out of staging which cannot send/receive, so I don't
see why we should move an Ethernet switch driver out of staging which
also cannot send/receive.
Maybe this patchset could be minimised. The STP handling is just nice
to have, and could wait until the driver has moved into the main tree.
Andrew
> +static void ethsw_rx(struct ethsw_fq *fq,
> + const struct dpaa2_fd *fd)
> +{
> + struct ethsw_core *ethsw = fq->ethsw;
> + struct ethsw_port_priv *port_priv;
> + struct net_device *netdev;
> + struct vlan_ethhdr *hdr;
> + struct sk_buff *skb;
> + u16 vlan_tci, vid;
> + int if_id = -1;
> + int err;
> +
> + /* prefetch the frame descriptor */
> + prefetch(fd);
> +
> + /* get switch ingress interface ID */
> + if_id = upper_32_bits(dpaa2_fd_get_flc(fd)) & 0x0000FFFF;
Does the prefetch do any good, since the very next thing you do is
access the frame descriptor? Ideally you want to issue the prefetch,
do something else for a while, and then make use of what the prefetch
got.
> +
> + if (if_id < 0 || if_id >= ethsw->sw_attr.num_ifs) {
Is if_id signed? Seems odd.
Andrew
> Hi Andrew,
>
> Just to clarify: if the STP handling is removed, then we'll have a
> receive code path but no frame will reach it.
> This is because the only way we can direct traffic to the CPU is by
> adding an ACL rule.
Ah, that is not good.
As I said in one of my reviews, you need to receive all multicast and
broadcast traffic by default. Without that, ARP will not work. Does
the switch perform learning on frames sent from the CPU? Does the
switch learn the CPUs MAC address and forward frames to it? Can i add
an IP address to the interface and use ping?
Andrew
On Tue, Nov 05, 2019 at 03:02:56PM +0100, Andrew Lunn wrote:
> On Tue, Nov 05, 2019 at 02:24:35PM +0100, Greg KH wrote:
> > On Tue, Nov 05, 2019 at 02:34:23PM +0200, Ioana Ciornei wrote:
> > > This patch set adds support for Rx/Tx capabilities on switch port interfaces.
> > > Also, control traffic is redirected through ACLs to the CPU in order to
> > > enable proper STP protocol handling.
>
> > I thought I asked for no new features until this code gets out of
> > staging? Only then can you add new stuff. Please work to make that
> > happen first.
>
> Hi Greg
>
> This is in response to my review of the code in staging. The current
> code is missing a core feature for an Ethernet switch driver, being
> able to send/receive frames from the host. At the moment it can only
> control the hardware for how it switches Ethernet frames coming
> into/going out of external ports.
>
> One of the core ideas behind how Linux handles Ethernet switches is
> that they are just a bunch of network interfaces. And currently, these
> network interfaces cannot send/receive. We would never move an
> Ethernet driver out of staging which cannot send/receive, so I don't
> see why we should move an Ethernet switch driver out of staging which
> also cannot send/receive.
>
> Maybe this patchset could be minimised. The STP handling is just nice
> to have, and could wait until the driver has moved into the main tree.
Ok, if the netdev developers say that this is needed before it can move
out of staging, then that's fine; I didn't get that from this submission,
submission, sorry.
greg k-h
On 05.11.2019 16:03, Andrew Lunn wrote:
> On Tue, Nov 05, 2019 at 02:24:35PM +0100, Greg KH wrote:
>> On Tue, Nov 05, 2019 at 02:34:23PM +0200, Ioana Ciornei wrote:
>>> This patch set adds support for Rx/Tx capabilities on switch port interfaces.
>>> Also, control traffic is redirected through ACLs to the CPU in order to
>>> enable proper STP protocol handling.
>
>> I thought I asked for no new features until this code gets out of
>> staging? Only then can you add new stuff. Please work to make that
>> happen first.
>
> Hi Greg
>
> This is in response to my review of the code in staging. The current
> code is missing a core feature for an Ethernet switch driver, being
> able to send/receive frames from the host. At the moment it can only
> control the hardware for how it switches Ethernet frames coming
> into/going out of external ports.
>
> One of the core ideas behind how Linux handles Ethernet switches is
> that they are just a bunch of network interfaces. And currently, these
> network interfaces cannot send/receive. We would never move an
> Ethernet driver out of staging which cannot send/receive, so I don't
> see why we should move an Ethernet switch driver out of staging which
> also cannot send/receive.
>
> Maybe this patchset could be minimised. The STP handling is just nice
> to have, and could wait until the driver has moved into the main tree.
>
> Andrew
>
Hi Andrew,
Just to clarify: if the STP handling is removed, then we'll have a
receive code path but no frame will reach it.
This is because the only way we can direct traffic to the CPU is by
adding an ACL rule.
Is that something that the netdev community would maybe accept?
Ioana