2012-10-03 19:25:39

by Bounine, Alexandre

Subject: [PATCH 0/5] rapidio: patches to support multiple master ports

The following set of patches provides modifications targeting support of multiple
RapidIO master port (mport) devices on the CPU side of a RapidIO-capable board.
While the RapidIO subsystem code has definitions suitable for
multi-controller/multi-net support, the existing implementation cannot be
considered ready for multiple-mport configurations.

=========== NOTES: =============
a) The patches below do not address the RapidIO-side view of multiport processing
elements defined in Part 6 of the RapidIO spec Rev. 2.1 (section 6.4.1).
These devices have a Base Device ID CSR (0x60) and Component Tag CSR (0x6C) shared
by all SRIO ports. For example, Freescale's P4080, P3041 and P5020 have a
dual-port SRIO controller implemented according to the specification.
Enumeration/discovery of such devices from the RapidIO side may require
device-specific fixups.
b) The devices referenced above may also require implementation-specific code to
set up a host device ID for the mport device. These operations are not addressed
by the patches in this package.
=================================

Details about provided patches:

1. Fix blocking wait for discovery ready

While it does not happen on PowerPC-based platforms, a stalled-CPU warning dump
may occur on x86-based platforms that run the RapidIO discovery process if they
wait too long to be enumerated.
Currently users can avoid it by disabling the soft-lockup detector with the
"nosoftlockup" kernel parameter OR by ensuring that enumeration completes before
a soft lockup is detected.

This patch eliminates the blocking wait and keeps the scheduler running.
It is also required by patch 3 below, which introduces an asynchronous discovery
process.
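
In short, the blocking mdelay() loop becomes a jiffies-bounded poll that yields
the CPU between checks (condensed from the patch below):

	unsigned long to_end = jiffies + CONFIG_RAPIDIO_DISC_TIMEOUT * HZ;

	while (time_before(jiffies, to_end)) {
		if (rio_enum_complete(mport))
			goto enum_done;
		/* sleep instead of mdelay() so the soft-lockup detector stays quiet */
		schedule_timeout_uninterruptible(msecs_to_jiffies(10));
	}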

2. Use device lists handling on per-net basis

This patch adds correct support for multiple RapidIO nets and resolves possible
issues caused by using a single global list of devices during RapidIO system
enumeration/discovery. The most common symptom of the existing issue is incorrect
contents of switch routing tables in systems with multiple mport controllers,
while a single-port configuration performs as expected.

The patch does not eliminate the global RapidIO device list but changes some
enumeration/discovery routines to use per-net device lists instead. This way
compatibility with upper-layer RIO routines is preserved.
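
For example, route table updates now walk only the lists owned by the net being
processed (reduced from rio_update_route_tables() in the patch):

	struct rio_dev *rdev;
	struct rio_switch *rswitch;

	/* was: list_for_each_entry(rdev, &rio_devices, global_list) */
	list_for_each_entry(rdev, &net->devices, net_list) {
		/* was: list_for_each_entry(rswitch, &rio_switches, node) */
		list_for_each_entry(rswitch, &net->switches, node) {
			/* devices from other nets can no longer leak into
			 * this net's switch routing tables */
		}
	}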

3. Run discovery as an asynchronous process

This patch modifies the RapidIO initialization routine to run the discovery
process asynchronously for each corresponding mport. This allows an arbitrary
order of enumerating and discovering mports without creating a deadlock if an
enumerator port is registered after a discovering one.

On boards with multiple discovering mports it also eliminates the order dependency
between mports and may reduce the total time of RapidIO subsystem initialization.

Matching the netID to the mportID ensures consistent netID assignment in
multiport RapidIO systems with an asynchronous discovery process (the global
counter implementation is subject to a race between threads).
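
Discovery is pushed onto a dedicated workqueue, one work item per discovering
mport, while enumeration still runs inline (condensed from the patch):

struct rio_disc_work {
	struct work_struct	work;
	struct rio_mport	*mport;
};

static void disc_work_handler(struct work_struct *_work)
{
	struct rio_disc_work *work;

	work = container_of(_work, struct rio_disc_work, work);
	rio_disc_mport(work->mport);	/* may block waiting for the enumerator */
	kfree(work);
}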

4. Rework RIONET to support multiple RIO master ports

In the current version of the driver, rionet_probe() carries the comment
"XXX Make multi-net safe". Now is a good time to address it.

This patch makes the RIONET driver multi-net safe/capable by introducing per-net
lists of RapidIO network peers. It also registers network adapters for all
available mport devices.
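
The driver's global state moves into a small per-net table indexed by the mport
ID (as introduced by the patch):

#define RIONET_MAX_NETS	8

struct rionet_net {
	struct net_device *ndev;
	struct list_head peers;		/* RIONET-capable peers on this net */
	struct rio_dev **active;	/* destID -> active peer lookup */
	int nact;			/* number of active peers */
};

static struct rionet_net nets[RIONET_MAX_NETS];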

5. Add destination ID allocation mechanism

The patch replaces the single global destination ID counter with a per-net
allocation mechanism to allow independent destID management for each available
RapidIO network. Using a bitmap-based mechanism instead of counters allows
destination IDs to be released and reused in systems that support hot-swap.
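
The per-net allocator state added by this patch is a simple bitmap guarded by a
spinlock:

struct rio_id_table {
	u16 start;		/* logical minimal id */
	u16 next;		/* allocation hint */
	u32 max;		/* max number of IDs in table */
	spinlock_t lock;
	unsigned long *table;	/* one bit per destID */
};

destIDs are handed out with find_next_zero_bit()/set_bit() and returned with
clear_bit(), which is what makes release and reuse after hot-swap possible.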


Alexandre Bounine (5):
rapidio: fix blocking wait for discovery ready
rapidio: use device lists handling on per-net basis
rapidio: run discovery as an asynchronous process
rapidio/rionet: rework to support multiple RIO master ports
rapidio: add destination ID allocation mechanism

drivers/net/rionet.c | 133 ++++++++++---------
drivers/rapidio/rio-scan.c | 326 +++++++++++++++++++++++++++++--------------
drivers/rapidio/rio.c | 51 +++++++-
include/linux/rio.h | 10 ++
4 files changed, 349 insertions(+), 171 deletions(-)

--
1.7.8.4


2012-10-03 19:25:44

by Bounine, Alexandre

Subject: [PATCH 5/5] rapidio: add destination ID allocation mechanism

Replace the single global destination ID counter with a per-net allocation
mechanism to allow independent destID management for each available RapidIO
network. Using a bitmap-based mechanism instead of counters allows
destination IDs to be released and reused in systems that support hot-swap.

Signed-off-by: Alexandre Bounine <[email protected]>
Cc: Matt Porter <[email protected]>
Cc: Li Yang <[email protected]>
---
drivers/rapidio/rio-scan.c | 205 ++++++++++++++++++++++++++++++++++++--------
include/linux/rio.h | 9 ++
2 files changed, 179 insertions(+), 35 deletions(-)

diff --git a/drivers/rapidio/rio-scan.c b/drivers/rapidio/rio-scan.c
index 745670f..48e9041 100644
--- a/drivers/rapidio/rio-scan.c
+++ b/drivers/rapidio/rio-scan.c
@@ -54,6 +54,114 @@ static int rio_mport_phys_table[] = {
-1,
};

+
+/*
+ * rio_destid_alloc - Allocate next available destID for given network
+ * net: RIO network
+ *
+ * Returns next available device destination ID for the specified RIO network.
+ * Marks allocated ID as one in use.
+ * Returns RIO_INVALID_DESTID if new destID is not available.
+ */
+static u16 rio_destid_alloc(struct rio_net *net)
+{
+ int destid;
+ struct rio_id_table *idtab = &net->destid_table;
+
+ spin_lock(&idtab->lock);
+ destid = find_next_zero_bit(idtab->table, idtab->max, idtab->next);
+ if (destid >= idtab->max)
+ destid = find_first_zero_bit(idtab->table, idtab->max);
+
+ if (destid < idtab->max) {
+ idtab->next = destid + 1;
+ if (idtab->next >= idtab->max)
+ idtab->next = 0;
+ set_bit(destid, idtab->table);
+ destid += idtab->start;
+ } else
+ destid = RIO_INVALID_DESTID;
+
+ spin_unlock(&idtab->lock);
+ return (u16)destid;
+}
+
+/*
+ * rio_destid_reserve - Reserve the specified destID
+ * net: RIO network
+ * destid: destID to reserve
+ *
+ * Tries to reserve the specified destID.
+ * Returns 0 if successful.
+ */
+static int rio_destid_reserve(struct rio_net *net, u16 destid)
+{
+ int oldbit;
+ struct rio_id_table *idtab = &net->destid_table;
+
+ destid -= idtab->start;
+ spin_lock(&idtab->lock);
+ oldbit = test_and_set_bit(destid, idtab->table);
+ spin_unlock(&idtab->lock);
+ return oldbit;
+}
+
+/*
+ * rio_destid_free - free a previously allocated destID
+ * net: RIO network
+ * destid: destID to free
+ *
+ * Makes the specified destID available for use.
+ */
+static void rio_destid_free(struct rio_net *net, u16 destid)
+{
+ struct rio_id_table *idtab = &net->destid_table;
+
+ destid -= idtab->start;
+ spin_lock(&idtab->lock);
+ clear_bit(destid, idtab->table);
+ spin_unlock(&idtab->lock);
+}
+
+/*
+ * rio_destid_first - return first destID in use
+ * net: RIO network
+ */
+static u16 rio_destid_first(struct rio_net *net)
+{
+ int destid;
+ struct rio_id_table *idtab = &net->destid_table;
+
+ spin_lock(&idtab->lock);
+ destid = find_first_bit(idtab->table, idtab->max);
+ if (destid >= idtab->max)
+ destid = RIO_INVALID_DESTID;
+ else
+ destid += idtab->start;
+ spin_unlock(&idtab->lock);
+ return (u16)destid;
+}
+
+/*
+ * rio_destid_next - return next destID in use
+ * net: RIO network
+ * from: destination ID from which search shall continue
+ */
+static u16 rio_destid_next(struct rio_net *net, u16 from)
+{
+ int destid;
+ struct rio_id_table *idtab = &net->destid_table;
+
+ spin_lock(&idtab->lock);
+ destid = find_next_bit(idtab->table, idtab->max, from);
+ if (destid >= idtab->max)
+ destid = RIO_INVALID_DESTID;
+ else
+ destid += idtab->start;
+ spin_unlock(&idtab->lock);
+ return (u16)destid;
+}
+
/**
* rio_get_device_id - Get the base/extended device id for a device
* @port: RIO master port
@@ -171,10 +279,6 @@ static int rio_enum_host(struct rio_mport *port)

/* Set master port destid and init destid ctr */
rio_local_set_device_id(port, port->host_deviceid);
-
- if (next_destid == port->host_deviceid)
- next_destid++;
-
return 0;
}

@@ -441,9 +545,8 @@ static struct rio_dev __devinit *rio_setup_device(struct rio_net *net,
if (rio_device_has_destid(port, rdev->src_ops, rdev->dst_ops)) {
if (do_enum) {
rio_set_device_id(port, destid, hopcount, next_destid);
- rdev->destid = next_destid++;
- if (next_destid == port->host_deviceid)
- next_destid++;
+ rdev->destid = next_destid;
+ next_destid = rio_destid_alloc(net);
} else
rdev->destid = rio_get_device_id(port, destid, hopcount);

@@ -742,12 +845,7 @@ static u16 rio_get_host_deviceid_lock(struct rio_mport *port, u8 hopcount)
static int __devinit rio_enum_peer(struct rio_net *net, struct rio_mport *port,
u8 hopcount, struct rio_dev *prev, int prev_port)
{
- int port_num;
- int cur_destid;
- int sw_destid;
- int sw_inport;
struct rio_dev *rdev;
- u16 destid;
u32 regval;
int tmp;

@@ -813,19 +911,26 @@ static int __devinit rio_enum_peer(struct rio_net *net, struct rio_mport *port,
return -1;

if (rio_is_switch(rdev)) {
+ int sw_destid;
+ int cur_destid;
+ int sw_inport;
+ u16 destid;
+ int port_num;
+
sw_inport = RIO_GET_PORT_NUM(rdev->swpinfo);
rio_route_add_entry(rdev, RIO_GLOBAL_TABLE,
port->host_deviceid, sw_inport, 0);
rdev->rswitch->route_table[port->host_deviceid] = sw_inport;

- for (destid = 0; destid < next_destid; destid++) {
- if (destid == port->host_deviceid)
- continue;
- rio_route_add_entry(rdev, RIO_GLOBAL_TABLE,
- destid, sw_inport, 0);
- rdev->rswitch->route_table[destid] = sw_inport;
+ destid = rio_destid_first(net);
+ while (destid != RIO_INVALID_DESTID && destid < next_destid) {
+ if (destid != port->host_deviceid) {
+ rio_route_add_entry(rdev, RIO_GLOBAL_TABLE,
+ destid, sw_inport, 0);
+ rdev->rswitch->route_table[destid] = sw_inport;
+ }
+ destid = rio_destid_next(net, destid + 1);
}
-
pr_debug(
"RIO: found %s (vid %4.4x did %4.4x) with %d ports\n",
rio_name(rdev), rdev->vid, rdev->did,
@@ -863,19 +968,22 @@ static int __devinit rio_enum_peer(struct rio_net *net, struct rio_mport *port,
return -1;

/* Update routing tables */
- if (next_destid > cur_destid) {
+ destid = rio_destid_next(net, cur_destid + 1);
+ if (destid != RIO_INVALID_DESTID) {
for (destid = cur_destid;
- destid < next_destid; destid++) {
- if (destid == port->host_deviceid)
- continue;
- rio_route_add_entry(rdev,
+ destid < next_destid;) {
+ if (destid != port->host_deviceid) {
+ rio_route_add_entry(rdev,
RIO_GLOBAL_TABLE,
destid,
port_num,
0);
- rdev->rswitch->
- route_table[destid] =
- port_num;
+ rdev->rswitch->
+ route_table[destid] =
+ port_num;
+ }
+ destid = rio_destid_next(net,
+ destid + 1);
}
}
} else {
@@ -901,11 +1009,8 @@ static int __devinit rio_enum_peer(struct rio_net *net, struct rio_mport *port,
rio_init_em(rdev);

/* Check for empty switch */
- if (next_destid == sw_destid) {
- next_destid++;
- if (next_destid == port->host_deviceid)
- next_destid++;
- }
+ if (next_destid == sw_destid)
+ next_destid = rio_destid_alloc(net);

rdev->destid = sw_destid;
} else
@@ -1043,17 +1148,39 @@ static int rio_mport_is_active(struct rio_mport *port)
/**
* rio_alloc_net- Allocate and configure a new RIO network
* @port: Master port associated with the RIO network
+ * @do_enum: Enumeration/Discovery mode flag
+ * @start: logical minimal start id for new net
*
* Allocates a RIO network structure, initializes per-network
* list heads, and adds the associated master port to the
* network list of associated master ports. Returns a
* RIO network pointer on success or %NULL on failure.
*/
-static struct rio_net __devinit *rio_alloc_net(struct rio_mport *port)
+static struct rio_net __devinit *rio_alloc_net(struct rio_mport *port,
+ int do_enum, u16 start)
{
struct rio_net *net;

net = kzalloc(sizeof(struct rio_net), GFP_KERNEL);
+ if (net && do_enum) {
+ net->destid_table.table = kzalloc(
+ BITS_TO_LONGS(RIO_MAX_ROUTE_ENTRIES(port->sys_size)) *
+ sizeof(long),
+ GFP_KERNEL);
+
+ if (net->destid_table.table == NULL) {
+ pr_err("RIO: failed to allocate destID table\n");
+ kfree(net);
+ net = NULL;
+ } else {
+ net->destid_table.start = start;
+ net->destid_table.next = 0;
+ net->destid_table.max =
+ RIO_MAX_ROUTE_ENTRIES(port->sys_size);
+ spin_lock_init(&net->destid_table.lock);
+ }
+ }
+
if (net) {
INIT_LIST_HEAD(&net->node);
INIT_LIST_HEAD(&net->devices);
@@ -1163,12 +1290,16 @@ int __devinit rio_enum_mport(struct rio_mport *mport)

/* If master port has an active link, allocate net and enum peers */
if (rio_mport_is_active(mport)) {
- if (!(net = rio_alloc_net(mport))) {
+ net = rio_alloc_net(mport, 1, 0);
+ if (!net) {
printk(KERN_ERR "RIO: failed to allocate new net\n");
rc = -ENOMEM;
goto out;
}

+ /* reserve mport destID in new net */
+ rio_destid_reserve(net, mport->host_deviceid);
+
/* Enable Input Output Port (transmitter reviever) */
rio_enable_rx_tx_port(mport, 1, 0, 0, 0);

@@ -1176,6 +1307,8 @@ int __devinit rio_enum_mport(struct rio_mport *mport)
rio_local_write_config_32(mport, RIO_COMPONENT_TAG_CSR,
next_comptag++);

+ next_destid = rio_destid_alloc(net);
+
if (rio_enum_peer(net, mport, 0, NULL, 0) < 0) {
/* A higher priority host won enumeration, bail. */
printk(KERN_INFO
@@ -1185,6 +1318,8 @@ int __devinit rio_enum_mport(struct rio_mport *mport)
rc = -EBUSY;
goto out;
}
+ /* free the last allocated destID (unused) */
+ rio_destid_free(net, next_destid);
rio_update_route_tables(net);
rio_clear_locks(net);
rio_pw_enable(mport, 1);
@@ -1265,7 +1400,7 @@ int __devinit rio_disc_mport(struct rio_mport *mport)
enum_done:
pr_debug("RIO: ... enumeration done\n");

- net = rio_alloc_net(mport);
+ net = rio_alloc_net(mport, 0, 0);
if (!net) {
printk(KERN_ERR "RIO: Failed to allocate new net\n");
goto bail;
diff --git a/include/linux/rio.h b/include/linux/rio.h
index 7ea02c4..d2dff22 100644
--- a/include/linux/rio.h
+++ b/include/linux/rio.h
@@ -264,6 +264,14 @@ struct rio_mport {
#endif
};

+struct rio_id_table {
+ u16 start; /* logical minimal id */
+ u16 next; /* hint for find */
+ u32 max; /* max number of IDs in table */
+ spinlock_t lock;
+ unsigned long *table;
+};
+
/**
* struct rio_net - RIO network info
* @node: Node in global list of RIO networks
@@ -279,6 +287,7 @@ struct rio_net {
struct list_head mports; /* list of ports accessing net */
struct rio_mport *hport; /* primary port for accessing net */
unsigned char id; /* RIO network ID */
+ struct rio_id_table destid_table; /* destID allocation table */
};

/* Definitions used by switch sysfs initialization callback */
--
1.7.8.4

2012-10-03 19:25:50

by Bounine, Alexandre

Subject: [PATCH 2/5] rapidio: use device lists handling on per-net basis

Modify handling of device lists to resolve issues caused by using a single global
list of RIO devices during enumeration/discovery. The most common symptom of the
existing issue is incorrect contents of switch routing tables in systems with
multiple mport controllers, while a single-port configuration performs as expected.

Signed-off-by: Alexandre Bounine <[email protected]>
Cc: Matt Porter <[email protected]>
Cc: Li Yang <[email protected]>
---
drivers/rapidio/rio-scan.c | 60 ++++++++++++++++++++++---------------------
include/linux/rio.h | 1 +
2 files changed, 32 insertions(+), 29 deletions(-)

diff --git a/drivers/rapidio/rio-scan.c b/drivers/rapidio/rio-scan.c
index 0a27253..8b7c4bc 100644
--- a/drivers/rapidio/rio-scan.c
+++ b/drivers/rapidio/rio-scan.c
@@ -38,7 +38,6 @@
#include "rio.h"

LIST_HEAD(rio_devices);
-static LIST_HEAD(rio_switches);

static void rio_init_em(struct rio_dev *rdev);

@@ -104,14 +103,15 @@ static void rio_local_set_device_id(struct rio_mport *port, u16 did)

/**
* rio_clear_locks- Release all host locks and signal enumeration complete
- * @port: Master port to issue transaction
+ * @net: RIO network to run on
*
* Marks the component tag CSR on each device with the enumeration
* complete flag. When complete, it then release the host locks on
* each device. Returns 0 on success or %-EINVAL on failure.
*/
-static int rio_clear_locks(struct rio_mport *port)
+static int rio_clear_locks(struct rio_net *net)
{
+ struct rio_mport *port = net->hport;
struct rio_dev *rdev;
u32 result;
int ret = 0;
@@ -126,7 +126,7 @@ static int rio_clear_locks(struct rio_mport *port)
result);
ret = -EINVAL;
}
- list_for_each_entry(rdev, &rio_devices, global_list) {
+ list_for_each_entry(rdev, &net->devices, net_list) {
rio_write_config_32(rdev, RIO_HOST_DID_LOCK_CSR,
port->host_deviceid);
rio_read_config_32(rdev, RIO_HOST_DID_LOCK_CSR, &result);
@@ -479,7 +479,7 @@ static struct rio_dev __devinit *rio_setup_device(struct rio_net *net,
rswitch->clr_table(port, destid, hopcount,
RIO_GLOBAL_TABLE);

- list_add_tail(&rswitch->node, &rio_switches);
+ list_add_tail(&rswitch->node, &net->switches);

} else {
if (do_enum)
@@ -1058,6 +1058,7 @@ static struct rio_net __devinit *rio_alloc_net(struct rio_mport *port)
if (net) {
INIT_LIST_HEAD(&net->node);
INIT_LIST_HEAD(&net->devices);
+ INIT_LIST_HEAD(&net->switches);
INIT_LIST_HEAD(&net->mports);
list_add_tail(&port->nnode, &net->mports);
net->hport = port;
@@ -1068,24 +1069,24 @@ static struct rio_net __devinit *rio_alloc_net(struct rio_mport *port)

/**
* rio_update_route_tables- Updates route tables in switches
- * @port: Master port associated with the RIO network
+ * @net: RIO network to run update on
*
* For each enumerated device, ensure that each switch in a system
* has correct routing entries. Add routes for devices that where
* unknown dirung the first enumeration pass through the switch.
*/
-static void rio_update_route_tables(struct rio_mport *port)
+static void rio_update_route_tables(struct rio_net *net)
{
struct rio_dev *rdev, *swrdev;
struct rio_switch *rswitch;
u8 sport;
u16 destid;

- list_for_each_entry(rdev, &rio_devices, global_list) {
+ list_for_each_entry(rdev, &net->devices, net_list) {

destid = rdev->destid;

- list_for_each_entry(rswitch, &rio_switches, node) {
+ list_for_each_entry(rswitch, &net->switches, node) {

if (rio_is_switch(rdev) && (rdev->rswitch == rswitch))
continue;
@@ -1181,12 +1182,12 @@ int __devinit rio_enum_mport(struct rio_mport *mport)
printk(KERN_INFO
"RIO: master port %d device has lost enumeration to a remote host\n",
mport->id);
- rio_clear_locks(mport);
+ rio_clear_locks(net);
rc = -EBUSY;
goto out;
}
- rio_update_route_tables(mport);
- rio_clear_locks(mport);
+ rio_update_route_tables(net);
+ rio_clear_locks(net);
rio_pw_enable(mport, 1);
} else {
printk(KERN_INFO "RIO: master port %d link inactive\n",
@@ -1200,33 +1201,34 @@ int __devinit rio_enum_mport(struct rio_mport *mport)

/**
* rio_build_route_tables- Generate route tables from switch route entries
+ * @net: RIO network to run route tables scan on
*
* For each switch device, generate a route table by copying existing
* route entries from the switch.
*/
-static void rio_build_route_tables(void)
+static void rio_build_route_tables(struct rio_net *net)
{
+ struct rio_switch *rswitch;
struct rio_dev *rdev;
int i;
u8 sport;

- list_for_each_entry(rdev, &rio_devices, global_list)
- if (rio_is_switch(rdev)) {
- rio_lock_device(rdev->net->hport, rdev->destid,
- rdev->hopcount, 1000);
- for (i = 0;
- i < RIO_MAX_ROUTE_ENTRIES(rdev->net->hport->sys_size);
- i++) {
- if (rio_route_get_entry(rdev,
- RIO_GLOBAL_TABLE, i, &sport, 0) < 0)
- continue;
- rdev->rswitch->route_table[i] = sport;
- }
+ list_for_each_entry(rswitch, &net->switches, node) {
+ rdev = sw_to_rio_dev(rswitch);

- rio_unlock_device(rdev->net->hport,
- rdev->destid,
- rdev->hopcount);
+ rio_lock_device(net->hport, rdev->destid,
+ rdev->hopcount, 1000);
+ for (i = 0;
+ i < RIO_MAX_ROUTE_ENTRIES(net->hport->sys_size);
+ i++) {
+ if (rio_route_get_entry(rdev, RIO_GLOBAL_TABLE,
+ i, &sport, 0) < 0)
+ continue;
+ rswitch->route_table[i] = sport;
}
+
+ rio_unlock_device(net->hport, rdev->destid, rdev->hopcount);
+ }
}

/**
@@ -1284,7 +1286,7 @@ enum_done:
goto bail;
}

- rio_build_route_tables();
+ rio_build_route_tables(net);
}

return 0;
diff --git a/include/linux/rio.h b/include/linux/rio.h
index 4d1a104..7ea02c4 100644
--- a/include/linux/rio.h
+++ b/include/linux/rio.h
@@ -275,6 +275,7 @@ struct rio_mport {
struct rio_net {
struct list_head node; /* node in list of networks */
struct list_head devices; /* list of devices in this net */
+ struct list_head switches; /* list of switches in this net */
struct list_head mports; /* list of ports accessing net */
struct rio_mport *hport; /* primary port for accessing net */
unsigned char id; /* RIO network ID */
--
1.7.8.4

2012-10-03 19:25:42

by Bounine, Alexandre

Subject: [PATCH 4/5] rapidio/rionet: rework to support multiple RIO master ports

Make the RIONET driver multi-net safe/capable by introducing per-net lists of
RapidIO network peers. Rework registration of network adapters to support
all available RIO master port devices.

Signed-off-by: Alexandre Bounine <[email protected]>
Cc: Matt Porter <[email protected]>
Cc: Li Yang <[email protected]>
Cc: David S. Miller <[email protected]>
---
drivers/net/rionet.c | 133 ++++++++++++++++++++++++++------------------------
1 files changed, 70 insertions(+), 63 deletions(-)

diff --git a/drivers/net/rionet.c b/drivers/net/rionet.c
index 1470d3e..d8b9b1e 100644
--- a/drivers/net/rionet.c
+++ b/drivers/net/rionet.c
@@ -26,7 +26,7 @@
#include <linux/ethtool.h>

#define DRV_NAME "rionet"
-#define DRV_VERSION "0.2"
+#define DRV_VERSION "0.3"
#define DRV_AUTHOR "Matt Porter <[email protected]>"
#define DRV_DESC "Ethernet over RapidIO"

@@ -47,8 +47,7 @@ MODULE_LICENSE("GPL");

#define RIONET_TX_RING_SIZE CONFIG_RIONET_TX_SIZE
#define RIONET_RX_RING_SIZE CONFIG_RIONET_RX_SIZE
-
-static LIST_HEAD(rionet_peers);
+#define RIONET_MAX_NETS 8

struct rionet_private {
struct rio_mport *mport;
@@ -69,17 +68,14 @@ struct rionet_peer {
struct resource *res;
};

-static int rionet_check = 0;
-static int rionet_capable = 1;
+struct rionet_net {
+ struct net_device *ndev;
+ struct list_head peers;
+ struct rio_dev **active;
+ int nact; /* number of active peers */
+};

-/*
- * This is a fast lookup table for translating TX
- * Ethernet packets into a destination RIO device. It
- * could be made into a hash table to save memory depending
- * on system trade-offs.
- */
-static struct rio_dev **rionet_active;
-static int nact; /* total number of active rionet peers */
+static struct rionet_net nets[RIONET_MAX_NETS];

#define is_rionet_capable(src_ops, dst_ops) \
((src_ops & RIO_SRC_OPS_DATA_MSG) && \
@@ -185,7 +181,7 @@ static int rionet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
}

if (is_multicast_ether_addr(eth->h_dest))
- add_num = nact;
+ add_num = nets[rnet->mport->id].nact;

if ((rnet->tx_cnt + add_num) > RIONET_TX_RING_SIZE) {
netif_stop_queue(ndev);
@@ -197,19 +193,21 @@ static int rionet_start_xmit(struct sk_buff *skb, struct net_device *ndev)

if (is_multicast_ether_addr(eth->h_dest)) {
int count = 0;
+
for (i = 0; i < RIO_MAX_ROUTE_ENTRIES(rnet->mport->sys_size);
i++)
- if (rionet_active[i]) {
+ if (nets[rnet->mport->id].active[i]) {
rionet_queue_tx_msg(skb, ndev,
- rionet_active[i]);
+ nets[rnet->mport->id].active[i]);
if (count)
atomic_inc(&skb->users);
count++;
}
} else if (RIONET_MAC_MATCH(eth->h_dest)) {
destid = RIONET_GET_DESTID(eth->h_dest);
- if (rionet_active[destid])
- rionet_queue_tx_msg(skb, ndev, rionet_active[destid]);
+ if (nets[rnet->mport->id].active[destid])
+ rionet_queue_tx_msg(skb, ndev,
+ nets[rnet->mport->id].active[destid]);
}

spin_unlock_irqrestore(&rnet->tx_lock, flags);
@@ -228,19 +226,21 @@ static void rionet_dbell_event(struct rio_mport *mport, void *dev_id, u16 sid, u
printk(KERN_INFO "%s: doorbell sid %4.4x tid %4.4x info %4.4x",
DRV_NAME, sid, tid, info);
if (info == RIONET_DOORBELL_JOIN) {
- if (!rionet_active[sid]) {
- list_for_each_entry(peer, &rionet_peers, node) {
+ if (!nets[rnet->mport->id].active[sid]) {
+ list_for_each_entry(peer,
+ &nets[rnet->mport->id].peers, node) {
if (peer->rdev->destid == sid) {
- rionet_active[sid] = peer->rdev;
- nact++;
+ nets[rnet->mport->id].active[sid] =
+ peer->rdev;
+ nets[rnet->mport->id].nact++;
}
}
rio_mport_send_doorbell(mport, sid,
RIONET_DOORBELL_JOIN);
}
} else if (info == RIONET_DOORBELL_LEAVE) {
- rionet_active[sid] = NULL;
- nact--;
+ nets[rnet->mport->id].active[sid] = NULL;
+ nets[rnet->mport->id].nact--;
} else {
if (netif_msg_intr(rnet))
printk(KERN_WARNING "%s: unhandled doorbell\n",
@@ -334,7 +334,8 @@ static int rionet_open(struct net_device *ndev)
netif_carrier_on(ndev);
netif_start_queue(ndev);

- list_for_each_entry_safe(peer, tmp, &rionet_peers, node) {
+ list_for_each_entry_safe(peer, tmp,
+ &nets[rnet->mport->id].peers, node) {
if (!(peer->res = rio_request_outb_dbell(peer->rdev,
RIONET_DOORBELL_JOIN,
RIONET_DOORBELL_LEAVE)))
@@ -359,7 +360,7 @@ static int rionet_close(struct net_device *ndev)
int i;

if (netif_msg_ifup(rnet))
- printk(KERN_INFO "%s: close\n", DRV_NAME);
+ printk(KERN_INFO "%s: close %s\n", DRV_NAME, ndev->name);

netif_stop_queue(ndev);
netif_carrier_off(ndev);
@@ -367,10 +368,11 @@ static int rionet_close(struct net_device *ndev)
for (i = 0; i < RIONET_RX_RING_SIZE; i++)
kfree_skb(rnet->rx_skb[i]);

- list_for_each_entry_safe(peer, tmp, &rionet_peers, node) {
- if (rionet_active[peer->rdev->destid]) {
+ list_for_each_entry_safe(peer, tmp,
+ &nets[rnet->mport->id].peers, node) {
+ if (nets[rnet->mport->id].active[peer->rdev->destid]) {
rio_send_doorbell(peer->rdev, RIONET_DOORBELL_LEAVE);
- rionet_active[peer->rdev->destid] = NULL;
+ nets[rnet->mport->id].active[peer->rdev->destid] = NULL;
}
rio_release_outb_dbell(peer->rdev, peer->res);
}
@@ -386,17 +388,21 @@ static int rionet_close(struct net_device *ndev)
static void rionet_remove(struct rio_dev *rdev)
{
struct net_device *ndev = rio_get_drvdata(rdev);
+ unsigned char netid = rdev->net->hport->id;
struct rionet_peer *peer, *tmp;

- free_pages((unsigned long)rionet_active, get_order(sizeof(void *) *
- RIO_MAX_ROUTE_ENTRIES(rdev->net->hport->sys_size)));
unregister_netdev(ndev);
- free_netdev(ndev);

- list_for_each_entry_safe(peer, tmp, &rionet_peers, node) {
+ free_pages((unsigned long)nets[netid].active, get_order(sizeof(void *) *
+ RIO_MAX_ROUTE_ENTRIES(rdev->net->hport->sys_size)));
+ nets[netid].active = NULL;
+
+ list_for_each_entry_safe(peer, tmp, &nets[netid].peers, node) {
list_del(&peer->node);
kfree(peer);
}
+
+ free_netdev(ndev);
}

static void rionet_get_drvinfo(struct net_device *ndev,
@@ -448,13 +454,13 @@ static int rionet_setup_netdev(struct rio_mport *mport, struct net_device *ndev)
const size_t rionet_active_bytes = sizeof(void *) *
RIO_MAX_ROUTE_ENTRIES(mport->sys_size);

- rionet_active = (struct rio_dev **)__get_free_pages(GFP_KERNEL,
- get_order(rionet_active_bytes));
- if (!rionet_active) {
+ nets[mport->id].active = (struct rio_dev **)__get_free_pages(GFP_KERNEL,
+ get_order(rionet_active_bytes));
+ if (!nets[mport->id].active) {
rc = -ENOMEM;
goto out;
}
- memset((void *)rionet_active, 0, rionet_active_bytes);
+ memset((void *)nets[mport->id].active, 0, rionet_active_bytes);

/* Set up private area */
rnet = netdev_priv(ndev);
@@ -483,61 +489,62 @@ static int rionet_setup_netdev(struct rio_mport *mport, struct net_device *ndev)
if (rc != 0)
goto out;

- printk("%s: %s %s Version %s, MAC %pM\n",
+ printk(KERN_INFO "%s: %s %s Version %s, MAC %pM, %s\n",
ndev->name,
DRV_NAME,
DRV_DESC,
DRV_VERSION,
- ndev->dev_addr);
+ ndev->dev_addr,
+ mport->name);

out:
return rc;
}

-/*
- * XXX Make multi-net safe
- */
+static unsigned long net_table[RIONET_MAX_NETS/sizeof(unsigned long) + 1];
+
static int rionet_probe(struct rio_dev *rdev, const struct rio_device_id *id)
{
int rc = -ENODEV;
u32 lsrc_ops, ldst_ops;
struct rionet_peer *peer;
struct net_device *ndev = NULL;
+ unsigned char netid = rdev->net->hport->id;
+ int oldnet;

- /* If local device is not rionet capable, give up quickly */
- if (!rionet_capable)
- goto out;
+ if (netid >= RIONET_MAX_NETS)
+ return rc;

- /* Allocate our net_device structure */
- ndev = alloc_etherdev(sizeof(struct rionet_private));
- if (ndev == NULL) {
- rc = -ENOMEM;
- goto out;
- }
+ oldnet = test_and_set_bit(netid, net_table);

/*
* First time through, make sure local device is rionet
- * capable, setup netdev, and set flags so this is skipped
- * on later probes
+ * capable, setup netdev (will be skipped on later probes)
*/
- if (!rionet_check) {
+ if (!oldnet) {
rio_local_read_config_32(rdev->net->hport, RIO_SRC_OPS_CAR,
&lsrc_ops);
rio_local_read_config_32(rdev->net->hport, RIO_DST_OPS_CAR,
&ldst_ops);
if (!is_rionet_capable(lsrc_ops, ldst_ops)) {
printk(KERN_ERR
- "%s: local device is not network capable\n",
- DRV_NAME);
- rionet_check = 1;
- rionet_capable = 0;
+ "%s: local device %s is not network capable\n",
+ DRV_NAME, rdev->net->hport->name);
goto out;
}

+ /* Allocate our net_device structure */
+ ndev = alloc_etherdev(sizeof(struct rionet_private));
+ if (ndev == NULL) {
+ rc = -ENOMEM;
+ goto out;
+ }
+ nets[netid].ndev = ndev;
rc = rionet_setup_netdev(rdev->net->hport, ndev);
- rionet_check = 1;
- nact = 0;
- }
+ INIT_LIST_HEAD(&nets[netid].peers);
+ nets[netid].nact = 0;
+ } else if (nets[netid].ndev == NULL)
+ goto out;

/*
* If the remote device has mailbox/doorbell capabilities,
@@ -549,10 +556,10 @@ static int rionet_probe(struct rio_dev *rdev, const struct rio_device_id *id)
goto out;
}
peer->rdev = rdev;
- list_add_tail(&peer->node, &rionet_peers);
+ list_add_tail(&peer->node, &nets[netid].peers);
}

- rio_set_drvdata(rdev, ndev);
+ rio_set_drvdata(rdev, nets[netid].ndev);

out:
return rc;
--
1.7.8.4

2012-10-03 19:25:41

by Bounine, Alexandre

Subject: [PATCH 1/5] rapidio: fix blocking wait for discovery ready

Fix blocking wait loop in the RapidIO discovery routine to avoid warning
dumps about stalled CPU on x86 platforms.

Signed-off-by: Alexandre Bounine <[email protected]>
Cc: Matt Porter <[email protected]>
Cc: Li Yang <[email protected]>
---
drivers/rapidio/rio-scan.c | 62 ++++++++++++++-----------------------------
1 files changed, 20 insertions(+), 42 deletions(-)

diff --git a/drivers/rapidio/rio-scan.c b/drivers/rapidio/rio-scan.c
index 02e686b..0a27253 100644
--- a/drivers/rapidio/rio-scan.c
+++ b/drivers/rapidio/rio-scan.c
@@ -31,6 +31,7 @@
#include <linux/module.h>
#include <linux/spinlock.h>
#include <linux/timer.h>
+#include <linux/sched.h>
#include <linux/jiffies.h>
#include <linux/slab.h>

@@ -39,8 +40,6 @@
LIST_HEAD(rio_devices);
static LIST_HEAD(rio_switches);

-static void rio_enum_timeout(unsigned long);
-
static void rio_init_em(struct rio_dev *rdev);

DEFINE_SPINLOCK(rio_global_list_lock);
@@ -49,9 +48,6 @@ static int next_destid = 0;
static int next_net = 0;
static int next_comptag = 1;

-static struct timer_list rio_enum_timer =
-TIMER_INITIALIZER(rio_enum_timeout, 0, 0);
-
static int rio_mport_phys_table[] = {
RIO_EFB_PAR_EP_ID,
RIO_EFB_PAR_EP_REC_ID,
@@ -1234,20 +1230,6 @@ static void rio_build_route_tables(void)
}

/**
- * rio_enum_timeout- Signal that enumeration timed out
- * @data: Address of timeout flag.
- *
- * When the enumeration complete timer expires, set a flag that
- * signals to the discovery process that enumeration did not
- * complete in a sane amount of time.
- */
-static void rio_enum_timeout(unsigned long data)
-{
- /* Enumeration timed out, set flag */
- *(int *)data = 1;
-}
-
-/**
* rio_disc_mport- Start discovery through a master port
* @mport: Master port to send transactions
*
@@ -1260,34 +1242,33 @@ static void rio_enum_timeout(unsigned long data)
int __devinit rio_disc_mport(struct rio_mport *mport)
{
struct rio_net *net = NULL;
- int enum_timeout_flag = 0;
+ unsigned long to_end;

printk(KERN_INFO "RIO: discover master port %d, %s\n", mport->id,
mport->name);

/* If master port has an active link, allocate net and discover peers */
if (rio_mport_is_active(mport)) {
- if (!(net = rio_alloc_net(mport))) {
- printk(KERN_ERR "RIO: Failed to allocate new net\n");
- goto bail;
- }
+ pr_debug("RIO: wait for enumeration to complete...\n");

- pr_debug("RIO: wait for enumeration complete...");
-
- rio_enum_timer.expires =
- jiffies + CONFIG_RAPIDIO_DISC_TIMEOUT * HZ;
- rio_enum_timer.data = (unsigned long)&enum_timeout_flag;
- add_timer(&rio_enum_timer);
- while (!rio_enum_complete(mport)) {
- mdelay(1);
- if (enum_timeout_flag) {
- del_timer_sync(&rio_enum_timer);
- goto timeout;
- }
+ to_end = jiffies + CONFIG_RAPIDIO_DISC_TIMEOUT * HZ;
+ while (time_before(jiffies, to_end)) {
+ if (rio_enum_complete(mport))
+ goto enum_done;
+ schedule_timeout_uninterruptible(msecs_to_jiffies(10));
}
- del_timer_sync(&rio_enum_timer);

- pr_debug("done\n");
+ pr_debug("RIO: discovery timeout on mport %d %s\n",
+ mport->id, mport->name);
+ goto bail;
+enum_done:
+ pr_debug("RIO: ... enumeration done\n");
+
+ net = rio_alloc_net(mport);
+ if (!net) {
+ printk(KERN_ERR "RIO: Failed to allocate new net\n");
+ goto bail;
+ }

/* Read DestID assigned by enumerator */
rio_local_read_config_32(mport, RIO_DID_CSR,
@@ -1307,9 +1288,6 @@ int __devinit rio_disc_mport(struct rio_mport *mport)
}

return 0;
-
- timeout:
- pr_debug("timeout\n");
- bail:
+bail:
return -EBUSY;
}
--
1.7.8.4

2012-10-03 19:26:44

by Bounine, Alexandre

Subject: [PATCH 3/5] rapidio: run discovery as an asynchronous process

Modify the mport initialization routine to run the RapidIO discovery process
asynchronously. This allows an arbitrary order of enumerating and
discovering ports in systems with multiple RapidIO controllers without
creating a deadlock if an enumerator port is registered after a
discovering one.

Matching the netID to the mportID ensures consistent net ID assignment in
multiport RapidIO systems with an asynchronous discovery process (the global
counter implementation is subject to a race between threads).

Signed-off-by: Alexandre Bounine <[email protected]>
Cc: Matt Porter <[email protected]>
Cc: Li Yang <[email protected]>
---
drivers/rapidio/rio-scan.c | 3 +-
drivers/rapidio/rio.c | 51 ++++++++++++++++++++++++++++++++++++++++++-
2 files changed, 50 insertions(+), 4 deletions(-)

diff --git a/drivers/rapidio/rio-scan.c b/drivers/rapidio/rio-scan.c
index 8b7c4bc..745670f 100644
--- a/drivers/rapidio/rio-scan.c
+++ b/drivers/rapidio/rio-scan.c
@@ -44,7 +44,6 @@ static void rio_init_em(struct rio_dev *rdev);
DEFINE_SPINLOCK(rio_global_list_lock);

static int next_destid = 0;
-static int next_net = 0;
static int next_comptag = 1;

static int rio_mport_phys_table[] = {
@@ -1062,7 +1061,7 @@ static struct rio_net __devinit *rio_alloc_net(struct rio_mport *port)
INIT_LIST_HEAD(&net->mports);
list_add_tail(&port->nnode, &net->mports);
net->hport = port;
- net->id = next_net++;
+ net->id = port->id;
}
return net;
}
diff --git a/drivers/rapidio/rio.c b/drivers/rapidio/rio.c
index d7b68cc..7cdc3e6d 100644
--- a/drivers/rapidio/rio.c
+++ b/drivers/rapidio/rio.c
@@ -1260,15 +1260,62 @@ static int __devinit rio_init(void)
return 0;
}

+static struct workqueue_struct *rio_wq;
+
+struct rio_disc_work {
+ struct work_struct work;
+ struct rio_mport *mport;
+};
+
+static void __devinit disc_work_handler(struct work_struct *_work)
+{
+ struct rio_disc_work *work = container_of(_work,
+ struct rio_disc_work, work);
+
+ pr_debug("RIO: discovery work for mport %d %s\n",
+ work->mport->id, work->mport->name);
+ rio_disc_mport(work->mport);
+
+ kfree(work);
+}
+
int __devinit rio_init_mports(void)
{
struct rio_mport *port;
+ struct rio_disc_work *work;
+ int no_disc = 0;

list_for_each_entry(port, &rio_mports, node) {
if (port->host_deviceid >= 0)
rio_enum_mport(port);
- else
- rio_disc_mport(port);
+ else if (!no_disc) {
+ if (!rio_wq) {
+ rio_wq = alloc_workqueue("riodisc", 0, 0);
+ if (!rio_wq) {
+ pr_err("RIO: unable to allocate rio_wq\n");
+ no_disc = 1;
+ continue;
+ }
+ }
+
+ work = kzalloc(sizeof *work, GFP_KERNEL);
+ if (!work) {
+ pr_err("RIO: no memory for work struct\n");
+ no_disc = 1;
+ continue;
+ }
+
+ work->mport = port;
+ INIT_WORK(&work->work, disc_work_handler);
+ queue_work(rio_wq, &work->work);
+ }
+ }
+
+ if (rio_wq) {
+ pr_debug("RIO: flush discovery workqueue\n");
+ flush_workqueue(rio_wq);
+ pr_debug("RIO: flush discovery workqueue finished\n");
+ destroy_workqueue(rio_wq);
}

rio_init();
--
1.7.8.4

2012-10-03 22:20:05

by Andrew Morton

Subject: Re: [PATCH 1/5] rapidio: fix blocking wait for discovery ready

On Wed, 3 Oct 2012 15:18:39 -0400
Alexandre Bounine <[email protected]> wrote:

> Fix blocking wait loop in the RapidIO discovery routine to avoid warning
> dumps about stalled CPU on x86 platforms.
>
> ...
>
> + to_end = jiffies + CONFIG_RAPIDIO_DISC_TIMEOUT * HZ;
> + while (time_before(jiffies, to_end)) {
> + if (rio_enum_complete(mport))
> + goto enum_done;
> + schedule_timeout_uninterruptible(msecs_to_jiffies(10));

I think a simple msleep(10) would suffice here?

2012-10-03 22:29:56

by Andrew Morton

Subject: Re: [PATCH 3/5] rapidio: run discovery as an asynchronous process

On Wed, 3 Oct 2012 15:18:41 -0400
Alexandre Bounine <[email protected]> wrote:

> Modify the mport initialization routine to run the RapidIO discovery process
> asynchronously. This allows an arbitrary order of enumerating and
> discovering ports in systems with multiple RapidIO controllers without
> creating a deadlock if an enumerator port is registered after a
> discovering one.
>
> Matching the netID to the mportID ensures consistent net ID assignment in
> multiport RapidIO systems with an asynchronous discovery process (the global
> counter implementation is subject to a race between threads).
>
>
> ...
>
> +static void __devinit disc_work_handler(struct work_struct *_work)
> +{
> + struct rio_disc_work *work = container_of(_work,
> + struct rio_disc_work, work);

There's a nice simple way to avoid such ugliness:

--- a/drivers/rapidio/rio.c~rapidio-run-discovery-as-an-asynchronous-process-fix
+++ a/drivers/rapidio/rio.c
@@ -1269,9 +1269,9 @@ struct rio_disc_work {

static void __devinit disc_work_handler(struct work_struct *_work)
{
- struct rio_disc_work *work = container_of(_work,
- struct rio_disc_work, work);
+ struct rio_disc_work *work;

+ work = container_of(_work, struct rio_disc_work, work);
pr_debug("RIO: discovery work for mport %d %s\n",
work->mport->id, work->mport->name);
rio_disc_mport(work->mport);
_

> + pr_debug("RIO: discovery work for mport %d %s\n",
> + work->mport->id, work->mport->name);
> + rio_disc_mport(work->mport);
> +
> + kfree(work);
> +}
> +
> int __devinit rio_init_mports(void)
> {
> struct rio_mport *port;
> + struct rio_disc_work *work;
> + int no_disc = 0;
>
> list_for_each_entry(port, &rio_mports, node) {
> if (port->host_deviceid >= 0)
> rio_enum_mport(port);
> - else
> - rio_disc_mport(port);
> + else if (!no_disc) {
> + if (!rio_wq) {
> + rio_wq = alloc_workqueue("riodisc", 0, 0);
> + if (!rio_wq) {
> + pr_err("RIO: unable to allocate rio_wq\n");
> + no_disc = 1;
> + continue;
> + }
> + }
> +
> + work = kzalloc(sizeof *work, GFP_KERNEL);
> + if (!work) {
> + pr_err("RIO: no memory for work struct\n");
> + no_disc = 1;
> + continue;
> + }
> +
> + work->mport = port;
> + INIT_WORK(&work->work, disc_work_handler);
> + queue_work(rio_wq, &work->work);
> + }
> + }

I'm having a lot of trouble with `no_disc'. afaict what it does is to
cease running async discovery for any remaining devices if the workqueue
allocation failed (vaguely reasonable) or if the allocation of a single
work item failed (incomprehensible).

But if we don't run discovery, the subsystem is permanently busted for
at least some devices, isn't it?

And this code is basically untestable unless the programmer does
deliberate fault injection, which makes it pretty much unmaintainable.

So... if I haven't totally misunderstood, I suggest a rethink is in
order?

> + if (rio_wq) {
> + pr_debug("RIO: flush discovery workqueue\n");
> + flush_workqueue(rio_wq);
> + pr_debug("RIO: flush discovery workqueue finished\n");
> + destroy_workqueue(rio_wq);
> }

2012-10-03 22:36:28

by Andrew Morton

Subject: Re: [PATCH 5/5] rapidio: add destination ID allocation mechanism

On Wed, 3 Oct 2012 15:18:43 -0400
Alexandre Bounine <[email protected]> wrote:

> Replace the single global destination ID counter with a per-net allocation
> mechanism to allow independent destID management for each available RapidIO
> network. Using a bitmap-based mechanism instead of counters allows
> destination IDs to be released and reused in systems that support hot-swap.
>
>
> ...
>
> +static u16 rio_destid_alloc(struct rio_net *net)
> +{
> + int destid;
> + struct rio_id_table *idtab = &net->destid_table;
> +
> + spin_lock(&idtab->lock);
> + destid = find_next_zero_bit(idtab->table, idtab->max, idtab->next);
> + if (destid >= idtab->max)
> + destid = find_first_zero_bit(idtab->table, idtab->max);
> +
> + if (destid < idtab->max) {
> + idtab->next = destid + 1;
> + if (idtab->next >= idtab->max)
> + idtab->next = 0;
> + set_bit(destid, idtab->table);
> + destid += idtab->start;
> + } else
> + destid = RIO_INVALID_DESTID;
> +
> + spin_unlock(&idtab->lock);
> + return (u16)destid;
> +}

This is round-robin rather than the simpler first-fit, and this reader
doesn't know why. Suggest the addition of a code comment explaining
this decision.

> +/*
> + * rio_destid_reserve - Reserve the specified destID
> + * net: RIO network
> + * destid: destID to reserve
> + *
> + * Tries to reserve the specified destID.
> + * Returns 0 if successful.
> + */
> +static int rio_destid_reserve(struct rio_net *net, u16 destid)
> +{
> + int oldbit;
> + struct rio_id_table *idtab = &net->destid_table;
> +
> + destid -= idtab->start;
> + spin_lock(&idtab->lock);
> + oldbit = test_and_set_bit(destid, idtab->table);
> + spin_unlock(&idtab->lock);
> + return oldbit;
> +}
> +
> +/*
> + * rio_destid_free - free a previously allocated destID
> + * net: RIO network
> + * destid: destID to free
> + *
> + * Makes the specified destID available for use.
> + */
> +static void rio_destid_free(struct rio_net *net, u16 destid)
> +{
> + struct rio_id_table *idtab = &net->destid_table;
> +
> + destid -= idtab->start;
> + spin_lock(&idtab->lock);
> + clear_bit(destid, idtab->table);
> + spin_unlock(&idtab->lock);
> +}
> +
> +/*
> + * rio_destid_first - return first destID in use
> + * net: RIO network
> + */
> +static u16 rio_destid_first(struct rio_net *net)
> +{
> + int destid;
> + struct rio_id_table *idtab = &net->destid_table;
> +
> + spin_lock(&idtab->lock);
> + destid = find_first_bit(idtab->table, idtab->max);
> + if (destid >= idtab->max)
> + destid = RIO_INVALID_DESTID;
> + else
> + destid += idtab->start;
> + spin_unlock(&idtab->lock);
> + return (u16)destid;
> +}
> +
> +/*
> + * rio_destid_next - return next destID in use
> + * net: RIO network
> + * from: destination ID from which search shall continue
> + */

All these code comments look like kerneldoc, but they aren't. kerneldoc
uses /** and identifiers have a leading `@'. And that's OK - one
doesn't *have* to use kerneldoc. But a lot of
drivers/rapidio/rio-scan.c is already using kerneldoc so the
inconsistency is odd.

>
> ...
>
> -static struct rio_net __devinit *rio_alloc_net(struct rio_mport *port)
> +static struct rio_net __devinit *rio_alloc_net(struct rio_mport *port,
> + int do_enum, u16 start)
> {
> struct rio_net *net;
>
> net = kzalloc(sizeof(struct rio_net), GFP_KERNEL);
> + if (net && do_enum) {
> + net->destid_table.table = kzalloc(
> + BITS_TO_LONGS(RIO_MAX_ROUTE_ENTRIES(port->sys_size)) *
> + sizeof(long),
> + GFP_KERNEL);

kcalloc() would be idiomatic here.

> + if (net->destid_table.table == NULL) {
> + pr_err("RIO: failed to allocate destID table\n");
> + kfree(net);
> + net = NULL;
> + } else {
> + net->destid_table.start = start;
> + net->destid_table.next = 0;
> + net->destid_table.max =
> + RIO_MAX_ROUTE_ENTRIES(port->sys_size);
> + spin_lock_init(&net->destid_table.lock);
> + }
> + }
> +
>
> ...
>

2012-10-04 17:20:40

by Bounine, Alexandre

Subject: RE: [PATCH 1/5] rapidio: fix blocking wait for discovery ready

On Wed, October 03, 2012 6:20 PM
Andrew Morton <[email protected]> wrote:
>
> On Wed, 3 Oct 2012 15:18:39 -0400
> Alexandre Bounine <[email protected]> wrote:
>
> > Fix blocking wait loop in the RapidIO discovery routine to avoid
> > warning dumps about stalled CPU on x86 platforms.
> >
> > ...
> >
> > + to_end = jiffies + CONFIG_RAPIDIO_DISC_TIMEOUT * HZ;
> > + while (time_before(jiffies, to_end)) {
> > + if (rio_enum_complete(mport))
> > + goto enum_done;
> > + schedule_timeout_uninterruptible(msecs_to_jiffies(10));
>
> I think a simple msleep(10) would suffice here?
>
Agree, same thing but looks simpler. Will update.
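
I.e. the wait loop in the next version will simply be (untested sketch):

	to_end = jiffies + CONFIG_RAPIDIO_DISC_TIMEOUT * HZ;
	while (time_before(jiffies, to_end)) {
		if (rio_enum_complete(mport))
			goto enum_done;
		msleep(10);
	}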

2012-10-04 19:09:41

by Bounine, Alexandre

Subject: RE: [PATCH 3/5] rapidio: run discovery as an asynchronous process

On Wed, October 03, 2012 6:30 PM
Andrew Morton <[email protected]> wrote:
>
> On Wed, 3 Oct 2012 15:18:41 -0400
> Alexandre Bounine <[email protected]> wrote:
>
> > ...
> >
> > +static void __devinit disc_work_handler(struct work_struct *_work)
> > +{
> > + struct rio_disc_work *work = container_of(_work,
> > + struct rio_disc_work, work);
>
> There's a nice simple way to avoid such ugliness:
>
> --- a/drivers/rapidio/rio.c~rapidio-run-discovery-as-an-asynchronous-process-fix
> +++ a/drivers/rapidio/rio.c
> @@ -1269,9 +1269,9 @@ struct rio_disc_work {
>
> static void __devinit disc_work_handler(struct work_struct *_work)
> {
> - struct rio_disc_work *work = container_of(_work,
> - struct rio_disc_work, work);
> + struct rio_disc_work *work;
>
> + work = container_of(_work, struct rio_disc_work, work);
> pr_debug("RIO: discovery work for mport %d %s\n",
> work->mport->id, work->mport->name);
> rio_disc_mport(work->mport);
> _
>

Thank you for the fix. Will avoid that ugliness in the future.

> > + pr_debug("RIO: discovery work for mport %d %s\n",
> > + work->mport->id, work->mport->name);
> > + rio_disc_mport(work->mport);
> > +
> > + kfree(work);
> > +}
> > +
> > int __devinit rio_init_mports(void)
> > {
> > struct rio_mport *port;
> > + struct rio_disc_work *work;
> > + int no_disc = 0;
> >
> > list_for_each_entry(port, &rio_mports, node) {
> > if (port->host_deviceid >= 0)
> > rio_enum_mport(port);
> > - else
> > - rio_disc_mport(port);
> > + else if (!no_disc) {
> > + if (!rio_wq) {
> > + rio_wq = alloc_workqueue("riodisc", 0, 0);
> > + if (!rio_wq) {
> > + pr_err("RIO: unable to allocate rio_wq\n");
> > + no_disc = 1;
> > + continue;
> > + }
> > + }
> > +
> > + work = kzalloc(sizeof *work, GFP_KERNEL);
> > + if (!work) {
> > + pr_err("RIO: no memory for work struct\n");
> > + no_disc = 1;
> > + continue;
> > + }
> > +
> > + work->mport = port;
> > + INIT_WORK(&work->work, disc_work_handler);
> > + queue_work(rio_wq, &work->work);
> > + }
> > + }
>
> I'm having a lot of trouble with `no_disc'. afacit what it does is to
> cease running async discovery for any remaining devices if the workqueue
> allocation failed (vaguely reasonable) or if the allocation of a single
> work item failed (incomprehensible).
>
> But if we don't run discovery, the subsystem is permanently busted for
> at least some devices, isn't it?

This is correct. We are considering ways to restart the discovery
process later, but that is not applicable now.

>
> And this code is basically untestable unless the programmer does
> deliberate fault injection, which makes it pretty much unmaintainable.
>
> So... if I haven't totally misunderstood, I suggest a rethink is in
> order?
>

I will review and simplify. Probably just allocate all required
resources ahead of the port list scan. Simple and safe.
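
Roughly along these lines (rough, untested sketch; the no_disc label and exact
flow are only illustrative):

	int n = 0;

	/* count the ports that will run discovery */
	list_for_each_entry(port, &rio_mports, node)
		if (port->host_deviceid < 0)
			n++;

	if (!n)
		goto no_disc;

	/* allocate the workqueue and all work items before queueing any */
	work = kcalloc(n, sizeof(*work), GFP_KERNEL);
	rio_wq = work ? alloc_workqueue("riodisc", 0, 0) : NULL;
	if (!rio_wq) {
		kfree(work);
		goto no_disc;	/* skip discovery entirely, as today */
	}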

> > + if (rio_wq) {
> > + pr_debug("RIO: flush discovery workqueue\n");
> > + flush_workqueue(rio_wq);
> > + pr_debug("RIO: flush discovery workqueue finished\n");
> > + destroy_workqueue(rio_wq);
> > }

2012-10-04 20:39:26

by Bounine, Alexandre

Subject: RE: [PATCH 5/5] rapidio: add destination ID allocation mechanism

On Wed, October 03, 2012 6:36 PM
Andrew Morton <[email protected]> wrote:
>
> On Wed, 3 Oct 2012 15:18:43 -0400
> Alexandre Bounine <[email protected]> wrote:
>
> > ...
> >
> > +static u16 rio_destid_alloc(struct rio_net *net)
> > +{
> > + int destid;
> > + struct rio_id_table *idtab = &net->destid_table;
> > +
> > + spin_lock(&idtab->lock);
> > + destid = find_next_zero_bit(idtab->table, idtab->max, idtab->next);
> > + if (destid >= idtab->max)
> > + destid = find_first_zero_bit(idtab->table, idtab->max);
> > +
> > + if (destid < idtab->max) {
> > + idtab->next = destid + 1;
> > + if (idtab->next >= idtab->max)
> > + idtab->next = 0;
> > + set_bit(destid, idtab->table);
> > + destid += idtab->start;
> > + } else
> > + destid = RIO_INVALID_DESTID;
> > +
> > + spin_unlock(&idtab->lock);
> > + return (u16)destid;
> > +}
>
> This is round-robin rather than the simpler first-fit, and this reader
> doesn't know why. Suggest the addition of a code comment explaining
> this decision.

This is to make debugging easier. Having a fresh destID assigned to
a device after insertion helps when analyzing switch routing table updates.
Yes, find-first is sufficient and easier to understand (I had it in an
early version).
I will switch to the find-first scenario to make things clear.
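
I.e. the allocator collapses to a plain first-fit lookup (sketch):

static u16 rio_destid_alloc(struct rio_net *net)
{
	int destid;
	struct rio_id_table *idtab = &net->destid_table;

	spin_lock(&idtab->lock);
	destid = find_first_zero_bit(idtab->table, idtab->max);

	if (destid < idtab->max) {
		set_bit(destid, idtab->table);
		destid += idtab->start;
	} else
		destid = RIO_INVALID_DESTID;

	spin_unlock(&idtab->lock);
	return (u16)destid;
}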

>
> > +/*
> > + * rio_destid_reserve - Reserve the specified destID
> > + * net: RIO network
> > + * destid: destID to reserve
> > + *
> > + * Tries to reserve the specified destID.
> > + * Returns 0 if successful.
> > + */
> > +static int rio_destid_reserve(struct rio_net *net, u16 destid)
> > +{
> > + int oldbit;
> > + struct rio_id_table *idtab = &net->destid_table;
> > +
> > + destid -= idtab->start;
> > + spin_lock(&idtab->lock);
> > + oldbit = test_and_set_bit(destid, idtab->table);
> > + spin_unlock(&idtab->lock);
> > + return oldbit;
> > +}
> > +
> > +/*
> > + * rio_destid_free - free a previously allocated destID
> > + * net: RIO network
> > + * destid: destID to free
> > + *
> > + * Makes the specified destID available for use.
> > + */
> > +static void rio_destid_free(struct rio_net *net, u16 destid)
> > +{
> > + struct rio_id_table *idtab = &net->destid_table;
> > +
> > + destid -= idtab->start;
> > + spin_lock(&idtab->lock);
> > + clear_bit(destid, idtab->table);
> > + spin_unlock(&idtab->lock);
> > +}
> > +
> > +/*
> > + * rio_destid_first - return first destID in use
> > + * net: RIO network
> > + */
> > +static u16 rio_destid_first(struct rio_net *net)
> > +{
> > + int destid;
> > + struct rio_id_table *idtab = &net->destid_table;
> > +
> > + spin_lock(&idtab->lock);
> > + destid = find_first_bit(idtab->table, idtab->max);
> > + if (destid >= idtab->max)
> > + destid = RIO_INVALID_DESTID;
> > + else
> > + destid += idtab->start;
> > + spin_unlock(&idtab->lock);
> > + return (u16)destid;
> > +}
> > +
> > +/*
> > + * rio_destid_next - return next destID in use
> > + * net: RIO network
> > + * from: destination ID from which search shall continue
> > + */
>
> All these code comments look like kerneldoc, but they aren't. kerneldoc
> uses /** and identifiers have a leading `@'. And that's OK - one
> doesn't *have* to use kerneldoc. But a lot of
> drivers/rapidio/rio-scan.c is already using kerneldoc so the
> inconsistency is odd.

The idea here was that keeping static functions out of kerneldoc may
make sense and result in cleaner doc output. This was my first attempt
to take that path. A kerneldoc adjustment patch for the entire
file (or even all RapidIO files) would probably be more appropriate than
changing style half-way.
As you noticed, these comments are similar to kerneldoc, so it is easy to go
back to the old style. I will restore kerneldoc style for the affected functions.
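
E.g. for the allocator it would read:

/**
 * rio_destid_alloc - Allocate next available destID for given network
 * @net: RIO network
 *
 * Returns next available device destination ID for the specified RIO
 * network or %RIO_INVALID_DESTID if a new destID is not available.
 * Marks the allocated ID as one in use.
 */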

>
> >
> > ...
> >
> > -static struct rio_net __devinit *rio_alloc_net(struct rio_mport *port)
> > +static struct rio_net __devinit *rio_alloc_net(struct rio_mport *port,
> > + int do_enum, u16 start)
> > {
> > struct rio_net *net;
> >
> > net = kzalloc(sizeof(struct rio_net), GFP_KERNEL);
> > + if (net && do_enum) {
> > + net->destid_table.table = kzalloc(
> > + BITS_TO_LONGS(RIO_MAX_ROUTE_ENTRIES(port->sys_size)) *
> > + sizeof(long),
> > + GFP_KERNEL);
>
> kcalloc() would be idiomatic here.

Agree. Will change.
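
I.e. something like:

	net->destid_table.table = kcalloc(
			BITS_TO_LONGS(RIO_MAX_ROUTE_ENTRIES(port->sys_size)),
			sizeof(long), GFP_KERNEL);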

>
> > + if (net->destid_table.table == NULL) {
> > + pr_err("RIO: failed to allocate destID table\n");
> > + kfree(net);
> > + net = NULL;
> > + } else {
> > + net->destid_table.start = start;
> > + net->destid_table.next = 0;
> > + net->destid_table.max =
> > + RIO_MAX_ROUTE_ENTRIES(port->sys_size);
> > + spin_lock_init(&net->destid_table.lock);
> > + }
> > + }
> > +
> >
> > ...
> >