Hi,
Here is the 7th version of the patch to expose the last succeeded resume
timestamp in sysfs as /sys/power/suspend_stats/last_success_resume_time.
The previous version is here:
https://lore.kernel.org/all/170245316678.651355.6640896026073025688.stgit@mhiramat.roam.corp.google.com/
This version fixes the code comments, the sysfs description, and the
patch description.
On some systems, such as ChromeOS, system suspend and resume are
controlled by a power management process, which notifies user-space
tasks of the suspend and resume events.
To improve suspend/resume performance and/or to find regressions, we
would like to know how long the resume process takes in the kernel and
in user space.
This patch introduces a last succeeded resume timestamp (taken just
before thawing user processes) in sysfs, which allows us to find out,
in CLOCK_MONOTONIC time, when the kernel resume process finished
successfully. User processes can then measure the elapsed time of their
own resume work at any later point.
This will help us detect abnormally long resumes and quickly decide
whether the root cause is in the kernel or in user space. On the kernel
side we can use many tools (e.g. printk or ftrace), but for user space
we need to define the starting point of its resume process. Note that
the kernel side has to use the local clock because the clock subsystem
is also suspended, and user space cannot use that timestamp because the
local clock is not exposed.
So this will be used like the following:
where_the_user_space_resume_finish() {
	clock_gettime(CLOCK_MONOTONIC, &etime_ts);
	fileread("/sys/.../last_success_resume_time", stime);
	convert_timespec(stime, &stime_ts);
	user_resume_time = timespec_delta(&etime_ts, &stime_ts);
	...
}
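For reference, a minimal, self-contained sketch of that user-space side
in C could look like the following. The file path and the "<sec>.<nsec>"
CLOCK_MONOTONIC format are the ones added by this patch; everything else
(names, error handling) is just illustrative:

#include <stdio.h>
#include <time.h>

int main(void)
{
	struct timespec now;
	long long sec, nsec;
	double user_resume_time;
	FILE *fp;

	/* The point where user-space resume is considered finished. */
	clock_gettime(CLOCK_MONOTONIC, &now);

	/* Read the timestamp recorded just before thawing processes. */
	fp = fopen("/sys/power/suspend_stats/last_success_resume_time", "r");
	if (!fp)
		return 1;
	if (fscanf(fp, "%lld.%lld", &sec, &nsec) != 2) {
		fclose(fp);
		return 1;
	}
	fclose(fp);

	/* Elapsed time from the end of the kernel resume to this point. */
	user_resume_time = (double)(now.tv_sec - sec) +
			   (double)(now.tv_nsec - nsec) / 1e9;
	printf("user-space resume took %.6f s\n", user_resume_time);
	return 0;
}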
Thank you,
---
Masami Hiramatsu (1):
PM: sleep: Expose last succeeded resume timestamp in sysfs
Documentation/ABI/testing/sysfs-power | 11 +++++++++++
include/linux/suspend.h | 2 ++
kernel/power/main.c | 15 +++++++++++++++
kernel/power/suspend.c | 9 +++++++++
4 files changed, 37 insertions(+)
--
Masami Hiramatsu (Google) <[email protected]>
From: Masami Hiramatsu <[email protected]>
Expose the last succeeded resume timestamp as the last_success_resume_time
attribute of suspend_stats in sysfs so that users can use this timestamp
as a reference point for resuming user space.
On some systems, such as ChromeOS, system suspend and resume are
controlled by a power management process, which notifies user-space
tasks of the suspend and resume events.
To improve suspend/resume performance and/or to find regressions, we
would like to know how long the resume process takes in the kernel and
in user space.
For this purpose, expose the accurate time when the kernel finished
resuming so that we can distinguish the duration of the kernel resume
from that of the user-space resume.
This suspend_stats attribute is easy to access and only exposes the
timestamp in CLOCK_MONOTONIC. Users can find the accurate time when the
kernel finished resuming its drivers/subsystems and started thawing, and
measure the elapsed time from the end of the kernel resume to a
user-space action (e.g. displaying the UI).
Signed-off-by: Masami Hiramatsu (Google) <[email protected]>
---
Changes in v7:
- Update patch description.
- Update sysfs documentation to say the exact timing.
- Update the comment.
Changes in v6:
- Fix to record resume time before thawing user processes.
Changes in v5:
- Just updated for v6.7-rc3.
Changes in v4.1:
- Fix document typo (again).
Changes in v4:
- Update description to add why.
- Fix document typo.
Changes in v3:
- Add (unsigned long long) casting for %llu.
- Add a line after last_success_resume_time_show().
Changes in v2:
- Use %llu instead of %lu for printing u64 value.
- Remove unneeded indent spaces from the last_success_resume_time
line in the debugfs suspend_stat file.
---
Documentation/ABI/testing/sysfs-power | 11 +++++++++++
include/linux/suspend.h | 2 ++
kernel/power/main.c | 15 +++++++++++++++
kernel/power/suspend.c | 9 +++++++++
4 files changed, 37 insertions(+)
diff --git a/Documentation/ABI/testing/sysfs-power b/Documentation/ABI/testing/sysfs-power
index a3942b1036e2..ee567e7e9d4a 100644
--- a/Documentation/ABI/testing/sysfs-power
+++ b/Documentation/ABI/testing/sysfs-power
@@ -442,6 +442,17 @@ Description:
'total_hw_sleep' and 'last_hw_sleep' may not be accurate.
This number is measured in microseconds.
+What: /sys/power/suspend_stats/last_success_resume_time
+Date: Dec 2023
+Contact: Masami Hiramatsu <[email protected]>
+Description:
+ The /sys/power/suspend_stats/last_success_resume_time file
+ contains the timestamp of when the kernel successfully
+ resumed drivers/subsystems from suspend/hibernate. This is
+ just before thawing the user processes.
+ This floating point number is measured in seconds, using the
+ monotonic clock (CLOCK_MONOTONIC).
+
What: /sys/power/sync_on_suspend
Date: October 2019
Contact: Jonas Meurer <[email protected]>
diff --git a/include/linux/suspend.h b/include/linux/suspend.h
index ef503088942d..ddd789044960 100644
--- a/include/linux/suspend.h
+++ b/include/linux/suspend.h
@@ -8,6 +8,7 @@
#include <linux/pm.h>
#include <linux/mm.h>
#include <linux/freezer.h>
+#include <linux/timekeeping.h>
#include <asm/errno.h>
#ifdef CONFIG_VT
@@ -71,6 +72,7 @@ struct suspend_stats {
u64 last_hw_sleep;
u64 total_hw_sleep;
u64 max_hw_sleep;
+ struct timespec64 last_success_resume_time;
enum suspend_stat_step failed_steps[REC_FAILED_NUM];
};
diff --git a/kernel/power/main.c b/kernel/power/main.c
index f6425ae3e8b0..2ab23fd3daac 100644
--- a/kernel/power/main.c
+++ b/kernel/power/main.c
@@ -421,6 +421,17 @@ static ssize_t last_failed_step_show(struct kobject *kobj,
}
static struct kobj_attribute last_failed_step = __ATTR_RO(last_failed_step);
+static ssize_t last_success_resume_time_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ return sprintf(buf, "%llu.%09llu\n",
+ (unsigned long long)suspend_stats.last_success_resume_time.tv_sec,
+ (unsigned long long)suspend_stats.last_success_resume_time.tv_nsec);
+}
+
+static struct kobj_attribute last_success_resume_time =
+ __ATTR_RO(last_success_resume_time);
+
static struct attribute *suspend_attrs[] = {
&success.attr,
&fail.attr,
@@ -438,6 +449,7 @@ static struct attribute *suspend_attrs[] = {
&last_hw_sleep.attr,
&total_hw_sleep.attr,
&max_hw_sleep.attr,
+ &last_success_resume_time.attr,
NULL,
};
@@ -514,6 +526,9 @@ static int suspend_stats_show(struct seq_file *s, void *unused)
suspend_step_name(
suspend_stats.failed_steps[index]));
}
+ seq_printf(s, "last_success_resume_time:\t%llu.%09llu\n",
+ (unsigned long long)suspend_stats.last_success_resume_time.tv_sec,
+ (unsigned long long)suspend_stats.last_success_resume_time.tv_nsec);
return 0;
}
diff --git a/kernel/power/suspend.c b/kernel/power/suspend.c
index fa3bf161d13f..2d0f46b4d0cf 100644
--- a/kernel/power/suspend.c
+++ b/kernel/power/suspend.c
@@ -595,6 +595,15 @@ static int enter_state(suspend_state_t state)
Finish:
events_check_enabled = false;
pm_pr_dbg("Finishing wakeup.\n");
+
+ /*
+ * Record last succeeded resume timestamp just before thawing processes.
+ * This is for helping users to measure user-space resume performance
+ * for improving their programs or finding regressions.
+ */
+ if (!error)
+ ktime_get_ts64(&suspend_stats.last_success_resume_time);
+
suspend_finish();
Unlock:
mutex_unlock(&system_transition_mutex);
Hi Rafael,
Could you review this new version?
Thank you,
--
Masami Hiramatsu (Google) <[email protected]>
Gently ping,
I would like to know whether this is enough or whether I should add more info/updates.
Thank you,
--
Masami Hiramatsu (Google) <[email protected]>
On Wed, Jan 17, 2024 at 1:07 AM Masami Hiramatsu <[email protected]> wrote:
>
> Gently ping,
>
> I would like to know whether this is enough or whether I should add more info/updates.
I still am not sure what this is going to be useful for.
Do you have a specific example?
Also please note that suspend stats are going to be reworked shortly
and I would prefer to make any changes to it after the rework.
Thanks!
On Fri, Jan 19, 2024 at 1:08 PM Rafael J. Wysocki <[email protected]> wrote:
> On Wed, Jan 17, 2024 at 1:07 AM Masami Hiramatsu <[email protected]> wrote:
> >
> > Gently ping,
> >
> > I would like to know whether this is enough or whether I should add more info/updates.
>
> I still am not sure what this is going to be useful for.
>
> Do you have a specific example?
Since there seems to be some communication gap here, I'll give it a try.
First, I'll paste the key phrase of its use case from the cover letter:
"we would like to know how long the resume processes are taken in kernel
and in user-space"
This is a "system measurement" question, for use in tests (e.g., in a
test lab for CI or for pre-release testing, where we suspend
Chromebooks, wake them back up, and measure how long the wakeup took)
or for user-reported metrics (e.g., similar statistics from real
users' systems, if they've agreed to automatically report usage
statistics, back to Google). We'd like to know how long it takes for a
system to wake up, so we can detect when there are problems that lead
to a slow system-resume experience. The user experience includes both
time spent in the kernel and time spent after user space has thawed
(and is spending time in potentially complex power and display manager
stacks) before a Chromebook's display lights back up.
If I understand the whole of Masami's work correctly, I believe we're
taking "timestamps parsed out of dmesg" (or potentially out of ftrace,
trace events, etc.) to measure the kernel side, plus "timestamp
provided here in CLOCK_MONOTONIC" and "timestamp determined in our
power/display managers" to measure user space.
Does that make sense? Or are we still missing something "specific" for
you? I could give code pointers [1], as it's all open source. But I'm
not sure browsing our metric-collection code would help understanding
any more than these explanations.
(TBH, this all still seems kinda odd to me, since parsing dmesg isn't
a great way to get machine-readable information. But this at least
serves to close some gaps in measurement.)
Brian
[1] e.g., https://source.chromium.org/chromiumos/chromiumos/codesearch/+/main:src/platform2/power_manager/powerd/metrics_collector.cc;l=294;drc=ce8075df179c4f8b2f4e4c4df6978d3df665c4d1
On Mon, 22 Jan 2024 18:08:22 -0800
Brian Norris <[email protected]> wrote:
> On Fri, Jan 19, 2024 at 1:08 PM Rafael J. Wysocki <[email protected]> wrote:
> > On Wed, Jan 17, 2024 at 1:07 AM Masami Hiramatsu <[email protected]> wrote:
> > >
> > > Gently ping,
> > >
> > > I would like to know whether this is enough or whether I should add more info/updates.
> >
> > I still am not sure what this is going to be useful for.
> >
> > Do you have a specific example?
>
> Since there seems to be some communication gap here, I'll give it a try.
>
> First, I'll paste the key phrase of its use case from the cover letter:
>
> "we would like to know how long the resume processes are taken in kernel
> and in user-space"
>
> This is a "system measurement" question, for use in tests (e.g., in a
> test lab for CI or for pre-release testing, where we suspend
> Chromebooks, wake them back up, and measure how long the wakeup took)
> or for user-reported metrics (e.g., similar statistics from real
> users' systems, if they've agreed to automatically report usage
> statistics, back to Google). We'd like to know how long it takes for a
> system to wake up, so we can detect when there are problems that lead
> to a slow system-resume experience. The user experience includes both
> time spent in the kernel and time spent after user space has thawed
> (and is spending time in potentially complex power and display manager
> stacks) before a Chromebook's display lights back up.
Thanks Brian for explaining; this correctly describes how we are
using this to measure the resume process duration.
> If I understand the whole of Masami's work correctly, I believe we're
> taking "timestamps parsed out of dmesg" (or potentially out of ftrace,
> trace events, etc.) to measure the kernel side, plus "timestamp
> provided here in CLOCK_MONOTONIC" and "timestamp determined in our
> power/display managers" to measure user space.
Yes, I decided to decouple the kernel and user space because the clock
subsystem is adjusted when resuming. So for the kernel, we will use
local clock (which is not exposed to user space), and use CLOCK_MONOTONIC
for the user space.
> Does that make sense? Or are we still missing something "specific" for
> you? I could give code pointers [1], as it's all open source. But I'm
> not sure browsing our metric-collection code would help understanding
> any more than these explanations.
I hope it helps you understand more about this. If you have further
questions, I will be happy to explain.
> (TBH, this all still seems kinda odd to me, since parsing dmesg isn't
> a great way to get machine-readable information. But this at least
> serves to close some gaps in measurement.)
Yeah, if I can add more to the stats, I would like to add the duration of
the kernel resume as "last_success_resume_duration". Would that be a
smarter solution? Or maybe we can also use ftrace for the kernel side.
But anyway, to measure the user-space part from user space, we need a
reference point for the start of resuming.
Thank you,
>
> Brian
>
> [1] e.g., https://source.chromium.org/chromiumos/chromiumos/codesearch/+/main:src/platform2/power_manager/powerd/metrics_collector.cc;l=294;drc=ce8075df179c4f8b2f4e4c4df6978d3df665c4d1
--
Masami Hiramatsu (Google) <[email protected]>
On Thu, Jan 25, 2024 at 1:43 AM Masami Hiramatsu <[email protected]> wrote:
>
> Thanks Brian for explaining; this correctly describes how we are
> using this to measure the resume process duration.
>
> > If I understand the whole of Masami's work correctly, I believe we're
> > taking "timestamps parsed out of dmesg" (or potentially out of ftrace,
> > trace events, etc.) to measure the kernel side, plus "timestamp
> > provided here in CLOCK_MONOTONIC" and "timestamp determined in our
> > power/display managers" to measure user space.
>
> Yes, I decided to decouple the kernel and user space because the clock
> subsystem is adjusted when resuming. So for the kernel, we will use
> local clock (which is not exposed to user space), and use CLOCK_MONOTONIC
> for the user space.
The problem with this split is that you cannot know how much time
elapses between the "successful kernel resume time" and the time when
user space gets to resume.
As of this patch, the kernel timestamp is taken when the kernel is
about to thaw user space and some user space tasks may start running
right away.
Some other tasks, however, will wait for what happens next in the
kernel (because it is not done with resuming yet) and some of them
will wait until explicitly asked to resume by the resume process IIUC.
Your results depend on which tasks participate in the "user
experience", so to speak. If they are the tasks that wait to be
kicked by the resume process, the kernel timestamp taken as per the
above is useless for them, because there is quite some stuff that
happens in the kernel before they will get kicked.
Moreover, some tasks will wait for certain device drivers to get ready
after the rest of the system resumes and that may still take some more
time after the kernel has returned to the process driving the system
suspend-resume.
I'm not sure if there is a single point which can be used as a "user
space resume start" time for every task, which is why I'm not
convinced about this patch.
BTW, there is a utility called sleepgraph that measures the kernel
part of the system suspend-resume. It does its best to measure it
very precisely and uses different techniques for that. Also, it is
included in the kernel source tree. Can you please have a look at it
and see how much there is in common between it and your tools? Maybe
there are some interfaces that can be used in common, or maybe it
could benefit from some interfaces that you are planning to add.
Hi Rafael,
Thanks for the comment!
On Thu, 25 Jan 2024 21:19:07 +0100
"Rafael J. Wysocki" <[email protected]> wrote:
> > Yes, I decided to decouple the kernel and user space because the clock
> > subsystem is adjusted when resuming. So for the kernel, we will use
> > local clock (which is not exposed to user space), and use CLOCK_MONOTONIC
> > for the user space.
>
> The problem with this split is that you cannot know how much time
> elapses between the "successful kernel resume time" and the time when
> user space gets to resume.
Hm, let me check why.
>
> As of this patch, the kernel timestamp is taken when the kernel is
> about to thaw user space and some user space tasks may start running
> right away.
Yes. But note that this just indicates the time when the "kernel" has
finished its resume process.
>
> Some other tasks, however, will wait for what happens next in the
> kernel (because it is not done with resuming yet) and some of them
> will wait until explicitly asked to resume by the resume process IIUC.
Yeah, those will just be restarted and may wait for a signal to e.g.
redraw, reload, etc. I think those are part of user-space resuming.
>
> Your results depend on which tasks participate in the "user
> experience", so to speak. If they are the tasks that wait to be
> kicked by the resume process, the kernel timestamp taken as per the
"kicked by the resume process" means thawing the tasks??
> above is useless for them, because there is quite some stuff that
> happens in the kernel before they will get kicked.
So your point might be that even after this timestamp, some kernel resume
work can still have an effect? Since the system (kernel + user) resume
state keeps changing in parallel, it is hard to split the kernel and
user-space resume processes?
>
> Moreover, some tasks will wait for certain device drivers to get ready
> after the rest of the system resumes and that may still take some more
> time after the kernel has returned to the process driving the system
> suspend-resume.
Oh, I thought driver resume would be done in the kernel before thawing.
Of course, some devices (Wi-Fi/BT, etc.) will need to reconnect again,
but I don't care about that at this moment.
>
> I'm not sure if there is a single point which can be used as a "user
> space resume start" time for every task, which is why I'm not
> convinced about this patch.
Ah, OK. Yeah, that is a good point. I also think this is not useful in
every case, but we need some reference timestamp for when we start
resuming, and I think this is the better point at which to take it. Of
course, if we observe some delay, we will still need more comprehensive
tracing, not only the application log, but this is the first step, and I
think it is a step we cannot avoid (because we don't expose the local
clock to user space).
>
> BTW, there is a utility called sleepgraph that measures the kernel
> part of the system suspend-resume. It does its best to measure it
> very precisely and uses different techniques for that. Also, it is
> included in the kernel source tree. Can you please have a look at it
> and see how much there is in common between it and your tools? Maybe
> there are some interfaces that can be used in common, or maybe it
> could benefit from some interfaces that you are planning to add.
Thanks for the info! Yeah, we also have some tracing tools (perfetto,
trace-cmd, etc.) for analysis. Note that we will kick off those analysis
tools if we find any delay based on this timestamp. I mean, this will be
used for "monitoring" the trend, not for precise "analysis".
Thank you,
--
Masami Hiramatsu (Google) <[email protected]>