Changelog
v5 - v6
- Added Reviewed-by from Andy.
- Synced with mainline
v4 - v5
- Addressed comments from Andy Shevchenko and Greg
Kroah-Hartman
- Split the patch into a series of 3 patches to allow better
bisection and to facilitate review.
v3 - v4
- Added device_shutdown_serial kernel parameter to disable
multi-threading as suggested by Greg Kroah-Hartman
v2 - v3
- Fixed warning from kbuild test.
- Moved device_lock/device_unlock inside device_shutdown_tree().
v1 - v2
- It turns out a single thread cannot hold more than MAX_LOCK_DEPTH
locks (48 by default); the limit is used to detect deadlocks. So, I
re-wrote the code to lock only one device per thread instead of
pre-locking all devices from the main thread.
- Addressed comments from Tobin C. Harding.
- As suggested by Alexander Duyck, removed the ixgbe changes. It can
be done as separate work scaling the RTNL mutex.
Do a faster shutdown by calling dev->*->shutdown(dev) in parallel.
device_shutdown() calls these functions for every single device, but
uses only one thread.
Since nothing else is running on the machine by the time
device_shutdown() is called, there is no reason not to utilize all the
available CPU resources.
Pavel Tatashin (3):
drivers core: refactor device_shutdown
drivers core: prepare device_shutdown for multi-threading
drivers core: multi-threading device shutdown
drivers/base/core.c | 290 +++++++++++++++++++++++++++++++++++++-------
1 file changed, 243 insertions(+), 47 deletions(-)
--
2.18.0
device_shutdown() traverses the list of devices and calls
dev->{bus/driver}->shutdown() for each entry in the list.
Refactor the function so that device_shutdown() keeps the logic of
traversing the list of devices, and device_shutdown_one() performs the
actual shutdown operation on one device.
Signed-off-by: Pavel Tatashin <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
---
drivers/base/core.c | 50 +++++++++++++++++++++++++++------------------
1 file changed, 30 insertions(+), 20 deletions(-)
diff --git a/drivers/base/core.c b/drivers/base/core.c
index df3e1a44707a..07380aa0683b 100644
--- a/drivers/base/core.c
+++ b/drivers/base/core.c
@@ -2802,6 +2802,35 @@ int device_move(struct device *dev, struct device *new_parent,
}
EXPORT_SYMBOL_GPL(device_move);
+/*
+ * device_shutdown_one - call ->shutdown() for the device passed as
+ * argument.
+ */
+static void device_shutdown_one(struct device *dev)
+{
+ /* Don't allow any more runtime suspends */
+ pm_runtime_get_noresume(dev);
+ pm_runtime_barrier(dev);
+
+ if (dev->class && dev->class->shutdown_pre) {
+ if (initcall_debug)
+ dev_info(dev, "shutdown_pre\n");
+ dev->class->shutdown_pre(dev);
+ }
+ if (dev->bus && dev->bus->shutdown) {
+ if (initcall_debug)
+ dev_info(dev, "shutdown\n");
+ dev->bus->shutdown(dev);
+ } else if (dev->driver && dev->driver->shutdown) {
+ if (initcall_debug)
+ dev_info(dev, "shutdown\n");
+ dev->driver->shutdown(dev);
+ }
+
+ /* decrement the reference counter */
+ put_device(dev);
+}
+
/**
* device_shutdown - call ->shutdown() on each device to shutdown.
*/
@@ -2838,30 +2867,11 @@ void device_shutdown(void)
device_lock(parent);
device_lock(dev);
- /* Don't allow any more runtime suspends */
- pm_runtime_get_noresume(dev);
- pm_runtime_barrier(dev);
-
- if (dev->class && dev->class->shutdown_pre) {
- if (initcall_debug)
- dev_info(dev, "shutdown_pre\n");
- dev->class->shutdown_pre(dev);
- }
- if (dev->bus && dev->bus->shutdown) {
- if (initcall_debug)
- dev_info(dev, "shutdown\n");
- dev->bus->shutdown(dev);
- } else if (dev->driver && dev->driver->shutdown) {
- if (initcall_debug)
- dev_info(dev, "shutdown\n");
- dev->driver->shutdown(dev);
- }
-
+ device_shutdown_one(dev);
device_unlock(dev);
if (parent)
device_unlock(parent);
- put_device(dev);
put_device(parent);
spin_lock(&devices_kset->list_lock);
--
2.18.0
When the system is rebooted, halted, or kexeced, device_shutdown() is
called.
This function shuts down every single device by calling either:
dev->bus->shutdown(dev)
dev->driver->shutdown(dev)
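For context, a driver opts into this path through the shutdown callback
in its struct device_driver (or struct bus_type). A minimal sketch, with
hypothetical foo_* names that are not part of this series:

static void foo_shutdown(struct device *dev)
{
	/* Quiesce the hardware: stop DMA, mask interrupts, etc. */
}

static struct device_driver foo_driver = {
	.name     = "foo",
	.shutdown = foo_shutdown,
};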
Even on a machine with just a moderate number of devices,
device_shutdown() may take multiple seconds to complete, because many
devices require specific delays to perform this operation.
Here is a sample analysis of the time it takes to call device_shutdown()
on a two-socket Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz machine.
device_shutdown 2.95s
-----------------------------
mlx4_shutdown 1.14s
megasas_shutdown 0.24s
ixgbe_shutdown 0.37s x 4 (four ixgbe devices on this machine).
the rest 0.09s
In mlx4 we spend the most time, but that is because of a 1 second
sleep, which is mandated by the hardware specification:
mlx4_shutdown
mlx4_unload_one
mlx4_free_ownership
msleep(1000)
With megasas we spend a quarter of a second, but sometimes longer (up
to 0.5s) in this path:
megasas_shutdown
megasas_flush_cache
megasas_issue_blocked_cmd
wait_event_timeout
Finally, ixgbe_shutdown() takes 0.37s for each device, but that time
is spread across many calls, with the bigger offenders being:
ixgbe_shutdown
__ixgbe_shutdown
ixgbe_close_suspend
ixgbe_down
ixgbe_init_hw_generic
ixgbe_reset_hw_X540
msleep(100); 0.104483472
ixgbe_get_san_mac_addr_generic 0.048414851
ixgbe_get_wwn_prefix_generic 0.048409893
ixgbe_start_hw_X540
ixgbe_start_hw_generic
ixgbe_clear_hw_cntrs_generic 0.048581502
ixgbe_setup_fc_generic 0.024225800
All the ixgbe_*generic functions end up calling:
ixgbe_read_eerd_X540()
ixgbe_acquire_swfw_sync_X540
usleep_range(5000, 6000);
ixgbe_release_swfw_sync_X540
usleep_range(5000, 6000);
While these are short sleeps, they end up being called over 24 times
per device:
    24 * 0.0055s = 0.132s per device
     4 * 0.132s  = 0.528s for four devices
Add the four msleep(100) calls (4 * 0.100s = 0.400s), and the total
comes to 0.928s.
While we should keep optimizing the individual device drivers, in some
cases this is simply a hardware property that forces a specific delay, and
we must wait.
So, the solution to this problem is to shutdown devices in parallel.
However, we must shutdown children before shutting down parents, so a
parent device must wait for its children to finish.
With this patch, device_shutdown() on the same machine takes 1.142s,
and without the mlx4 one-second delay, only 0.38s.
This feature can optionally be disabled via the kernel parameter
device_shutdown_serial. When booted with this parameter,
device_shutdown() will shutdown devices one by one.
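For example, the parameter is simply appended to the kernel command
line (an illustrative entry; the exact bootloader syntax varies):

    linux /boot/vmlinuz root=/dev/sda1 ro device_shutdown_serial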
Signed-off-by: Pavel Tatashin <[email protected]>
---
drivers/base/core.c | 70 +++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 68 insertions(+), 2 deletions(-)
diff --git a/drivers/base/core.c b/drivers/base/core.c
index 5d8623b38d4d..b0f9e70daf52 100644
--- a/drivers/base/core.c
+++ b/drivers/base/core.c
@@ -22,6 +22,7 @@
#include <linux/genhd.h>
#include <linux/mutex.h>
#include <linux/pm_runtime.h>
+#include <linux/kthread.h>
#include <linux/netdevice.h>
#include <linux/sched/signal.h>
#include <linux/sysfs.h>
@@ -2887,19 +2888,39 @@ static void device_shutdown_one(struct device *dev)
/*
* Passed as an argument to device_shutdown_child_task().
* child_next_index the next available child index.
+ * tasks_running number of tasks still running. Each task decrements it
+ * when its job is finished, and the last task signals
+ * that the whole job is complete.
+ * complete Used to signal job completion.
* parent Parent device.
*/
struct device_shutdown_task_data {
atomic_t child_next_index;
+ atomic_t tasks_running;
+ struct completion complete;
struct device *parent;
};
static int device_shutdown_child_task(void *data);
+static bool device_shutdown_serial;
+
+/*
+ * These globals are used by tasks that are started for root devices.
+ * device_root_tasks_finished Number of root devices finished shutting down.
+ * device_root_tasks_started Total number of root device tasks started.
+ * device_root_tasks_done The completion signal to the main thread.
+ */
+static atomic_t device_root_tasks_finished;
+static atomic_t device_root_tasks_started;
+static struct completion device_root_tasks_done;
/*
* Shutdown the device tree rooted at dev. If dev has no children,
* simply shutdown only this device. If dev has children, recursively
* shutdown the children first, and only then the parent.
+ * For performance reasons the children are shutdown in parallel using
+ * kernel threads. Because we lock dev, its children cannot be removed
+ * while this function is running.
*/
static void device_shutdown_tree(struct device *dev)
{
@@ -2913,11 +2934,20 @@ static void device_shutdown_tree(struct device *dev)
int i;
atomic_set(&tdata.child_next_index, 0);
+ atomic_set(&tdata.tasks_running, children_count);
+ init_completion(&tdata.complete);
tdata.parent = dev;
for (i = 0; i < children_count; i++) {
- device_shutdown_child_task(&tdata);
+ if (device_shutdown_serial) {
+ device_shutdown_child_task(&tdata);
+ } else {
+ kthread_run(device_shutdown_child_task,
+ &tdata, "device_shutdown.%s",
+ dev_name(dev));
+ }
}
+ wait_for_completion(&tdata.complete);
}
device_shutdown_one(dev);
device_unlock(dev);
@@ -2937,6 +2967,10 @@ static int device_shutdown_child_task(void *data)
/* ref. counter is going to be decremented in device_shutdown_one() */
get_device(dev);
device_shutdown_tree(dev);
+
+ /* If we are the last to exit, signal the completion */
+ if (atomic_dec_return(&tdata->tasks_running) == 0)
+ complete(&tdata->complete);
return 0;
}
@@ -2947,9 +2981,14 @@ static int device_shutdown_child_task(void *data)
static int device_shutdown_root_task(void *data)
{
struct device *dev = (struct device *)data;
+ int root_devices;
device_shutdown_tree(dev);
+ /* If we are the last to exit, signal the completion */
+ root_devices = atomic_inc_return(&device_root_tasks_finished);
+ if (root_devices == atomic_read(&device_root_tasks_started))
+ complete(&device_root_tasks_done);
return 0;
}
@@ -2958,10 +2997,17 @@ static int device_shutdown_root_task(void *data)
*/
void device_shutdown(void)
{
+ int root_devices = 0;
struct device *dev;
+ atomic_set(&device_root_tasks_finished, 0);
+ atomic_set(&device_root_tasks_started, 0);
+ init_completion(&device_root_tasks_done);
+
/* Shutdown the root devices. The children are going to be
* shutdown first in device_shutdown_tree().
+ * We shutdown root devices in parallel by starting a
+ * thread for each root device.
*/
spin_lock(&devices_kset->list_lock);
while (!list_empty(&devices_kset->list)) {
@@ -2992,13 +3038,33 @@ void device_shutdown(void)
*/
spin_unlock(&devices_kset->list_lock);
- device_shutdown_root_task(dev);
+ root_devices++;
+ if (device_shutdown_serial) {
+ device_shutdown_root_task(dev);
+ } else {
+ kthread_run(device_shutdown_root_task,
+ dev, "device_root_shutdown.%s",
+ dev_name(dev));
+ }
spin_lock(&devices_kset->list_lock);
}
}
spin_unlock(&devices_kset->list_lock);
+
+ /* Set the number of root tasks started, and wait for completion */
+ atomic_set(&device_root_tasks_started, root_devices);
+ if (root_devices != atomic_read(&device_root_tasks_finished))
+ wait_for_completion(&device_root_tasks_done);
+}
+
+static int __init _device_shutdown_serial(char *arg)
+{
+ device_shutdown_serial = true;
+ return 0;
}
+early_param("device_shutdown_serial", _device_shutdown_serial);
+
/*
* Device logging functions
*/
--
2.18.0
Do all the necessary refactoring to prepare device_shutdown() logic to
be multi-threaded.
This includes:
1. Change the direction of traversing the list: instead of going
backward, we now go forward.
2. Children are shutdown recursively for each root device from bottom-up.
3. Functions that can be multi-threaded have _task() in their name.
Signed-off-by: Pavel Tatashin <[email protected]>
---
drivers/base/core.c | 178 ++++++++++++++++++++++++++++++++++++--------
1 file changed, 149 insertions(+), 29 deletions(-)
diff --git a/drivers/base/core.c b/drivers/base/core.c
index 07380aa0683b..5d8623b38d4d 100644
--- a/drivers/base/core.c
+++ b/drivers/base/core.c
@@ -2134,6 +2134,59 @@ const char *device_get_devnode(struct device *dev,
return *tmp = s;
}
+/*
+ * device_children_count - device children count
+ * @parent: parent struct device.
+ *
+ * Returns number of children for this device or 0 if none.
+ */
+static int device_children_count(struct device *parent)
+{
+ struct klist_iter i;
+ int children = 0;
+
+ if (!parent->p)
+ return 0;
+
+ klist_iter_init(&parent->p->klist_children, &i);
+ while (next_device(&i))
+ children++;
+ klist_iter_exit(&i);
+
+ return children;
+}
+
+/*
+ * device_get_child_by_index - Return child using the provided index.
+ * @parent: parent struct device.
+ * @index: Index of the child, where 0 is the first child in the children list,
+ * and so on.
+ *
+ * Returns child or NULL if child with this index is not present.
+ */
+static struct device *
+device_get_child_by_index(struct device *parent, int index)
+{
+ struct klist_iter i;
+ struct device *dev = NULL, *d;
+ int child_index = 0;
+
+ if (!parent->p || index < 0)
+ return NULL;
+
+ klist_iter_init(&parent->p->klist_children, &i);
+ while ((d = next_device(&i))) {
+ if (child_index == index) {
+ dev = d;
+ break;
+ }
+ child_index++;
+ }
+ klist_iter_exit(&i);
+
+ return dev;
+}
+
/**
* device_for_each_child - device child iterator.
* @parent: parent struct device.
@@ -2831,50 +2884,117 @@ static void device_shutdown_one(struct device *dev)
put_device(dev);
}
+/*
+ * Passed as an argument to device_shutdown_child_task().
+ * child_next_index the next available child index.
+ * parent Parent device.
+ */
+struct device_shutdown_task_data {
+ atomic_t child_next_index;
+ struct device *parent;
+};
+
+static int device_shutdown_child_task(void *data);
+
+/*
+ * Shutdown the device tree rooted at dev. If dev has no children,
+ * simply shutdown only this device. If dev has children, recursively
+ * shutdown the children first, and only then the parent.
+ */
+static void device_shutdown_tree(struct device *dev)
+{
+ int children_count;
+
+ device_lock(dev);
+ children_count = device_children_count(dev);
+
+ if (children_count) {
+ struct device_shutdown_task_data tdata;
+ int i;
+
+ atomic_set(&tdata.child_next_index, 0);
+ tdata.parent = dev;
+
+ for (i = 0; i < children_count; i++) {
+ device_shutdown_child_task(&tdata);
+ }
+ }
+ device_shutdown_one(dev);
+ device_unlock(dev);
+}
+
+/*
+ * Only devices with a parent go through this function. The parent is
+ * locked and waits for all of its children to finish shutting down
+ * before calling the shutdown function for itself.
+ */
+static int device_shutdown_child_task(void *data)
+{
+ struct device_shutdown_task_data *tdata = data;
+ int cidx = atomic_inc_return(&tdata->child_next_index) - 1;
+ struct device *dev = device_get_child_by_index(tdata->parent, cidx);
+
+ /* ref. counter is going to be decremented in device_shutdown_one() */
+ get_device(dev);
+ device_shutdown_tree(dev);
+ return 0;
+}
+
+/*
+ * On shutdown, each root device (a device without a parent) goes
+ * through this function.
+ */
+static int device_shutdown_root_task(void *data)
+{
+ struct device *dev = (struct device *)data;
+
+ device_shutdown_tree(dev);
+
+ return 0;
+}
+
/**
* device_shutdown - call ->shutdown() on each device to shutdown.
*/
void device_shutdown(void)
{
- struct device *dev, *parent;
+ struct device *dev;
- spin_lock(&devices_kset->list_lock);
- /*
- * Walk the devices list backward, shutting down each in turn.
- * Beware that device unplug events may also start pulling
- * devices offline, even as the system is shutting down.
+ /* Shutdown the root devices. The children are going to be
+ * shutdown first in device_shutdown_tree().
*/
+ spin_lock(&devices_kset->list_lock);
while (!list_empty(&devices_kset->list)) {
- dev = list_entry(devices_kset->list.prev, struct device,
- kobj.entry);
+ dev = list_entry(devices_kset->list.next, struct device,
+ kobj.entry);
- /*
- * hold reference count of device's parent to
- * prevent it from being freed because parent's
- * lock is to be held
- */
- parent = get_device(dev->parent);
- get_device(dev);
/*
* Make sure the device is off the kset list, in the
* event that dev->*->shutdown() doesn't remove it.
*/
list_del_init(&dev->kobj.entry);
- spin_unlock(&devices_kset->list_lock);
- /* hold lock to avoid race with probe/release */
- if (parent)
- device_lock(parent);
- device_lock(dev);
-
- device_shutdown_one(dev);
- device_unlock(dev);
- if (parent)
- device_unlock(parent);
-
- put_device(parent);
-
- spin_lock(&devices_kset->list_lock);
+ /* Here we start tasks for root devices only */
+ if (!dev->parent) {
+ /* Prevents devices from being freed. The counter is
+ * going to be decremented in device_shutdown_one() once
+ * this root device is shutdown.
+ */
+ get_device(dev);
+
+ /* We unlock the list for performance reasons:
+ * dev->*->shutdown() may try to take this lock to
+ * remove us from the kset list. To avoid unlocking
+ * this list we could replace the spin lock in
+ * dev->kobj.kset->list_lock with a dummy one once
+ * the device is locked in device_shutdown_root_task()
+ * and in device_shutdown_child_task().
+ */
+ spin_unlock(&devices_kset->list_lock);
+
+ device_shutdown_root_task(dev);
+ spin_lock(&devices_kset->list_lock);
+ }
}
spin_unlock(&devices_kset->list_lock);
}
--
2.18.0
On Fri, Jun 29, 2018 at 9:25 PM, Pavel Tatashin
<[email protected]> wrote:
> Do all the necessary refactoring to prepare device_shutdown() logic to
> be multi-threaded.
>
> This includes:
> 1. Change the direction of traversing the list: instead of going
> backward, we now go forward.
> 2. Children are shutdown recursively for each root device from bottom-up.
> 3. Functions that can be multi-threaded have _task() in their name.
> +/*
> + * device_children_count - device children count
> + * @parent: parent struct device.
> + *
> + * Returns number of children for this device or 0 if none.
> + */
> +static int device_children_count(struct device *parent)
> +{
> + struct klist_iter i;
> + int children = 0;
> +
> + if (!parent->p)
> + return 0;
> +
> + klist_iter_init(&parent->p->klist_children, &i);
> + while (next_device(&i))
> + children++;
> + klist_iter_exit(&i);
> +
> + return children;
> +}
Looking at another patch on LKML (for glue dir children counting), I
would suggest considering caching the children count.
Would it make sense?
(So, basically, you would update that value when children are
registered / removed.)
--
With Best Regards,
Andy Shevchenko
On 06/29/2018 04:38 PM, Andy Shevchenko wrote:
> On Fri, Jun 29, 2018 at 9:25 PM, Pavel Tatashin
> <[email protected]> wrote:
>> Do all the necessary refactoring to prepare device_shutdown() logic to
>> be multi-threaded.
>>
>> This includes:
>> 1. Change the direction of traversing the list: instead of going
>> backward, we now go forward.
>> 2. Children are shutdown recursively for each root device from bottom-up.
>> 3. Functions that can be multi-threaded have _task() in their name.
>
>> +/*
>> + * device_children_count - device children count
>> + * @parent: parent struct device.
>> + *
>> + * Returns number of children for this device or 0 if none.
>> + */
>> +static int device_children_count(struct device *parent)
>> +{
>> + struct klist_iter i;
>> + int children = 0;
>> +
>> + if (!parent->p)
>> + return 0;
>> +
>> + klist_iter_init(&parent->p->klist_children, &i);
>> + while (next_device(&i))
>> + children++;
>> + klist_iter_exit(&i);
>> +
>> + return children;
>> +}
>
> Looking at another patch on LKML (for glue dir children counting), I
> would suggest considering caching the children count.
> Would it make sense?
>
> (So, basically, you would update that value when children are
> registered / removed.)
>
Hi Andy,
Thank you for looking at this patch.
For this particular series, this function is called only once per
device during shutdown, when nothing else is running on the machine,
so caching won't really provide any improvement.
If, in the future, this function is called from other sites during
normal run time, then yes, caching might help.
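For reference, a minimal sketch of what such caching could look like;
the child_count field and the helper names are hypothetical, not part
of this series:

/*
 * Sketch only: cache the child count in struct device_private and
 * keep it up to date as children are registered / removed, so that
 * device_children_count() becomes O(1). The child_count field and
 * the helpers below are assumptions, not existing kernel code.
 */

/* Called wherever a child is linked to parent->p->klist_children */
static void device_child_count_inc(struct device *parent)
{
	atomic_inc(&parent->p->child_count);
}

/* Called wherever a child is unlinked from its parent */
static void device_child_count_dec(struct device *parent)
{
	atomic_dec(&parent->p->child_count);
}

static int device_children_count(struct device *parent)
{
	return parent->p ? atomic_read(&parent->p->child_count) : 0;
}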
Thank you,
Pavel