2015-05-03 09:00:37

by Nicholas Mc Guire

Subject: [PATCH] sched/core: remove unnecessary down/up conversion

rt_period_us is automatically type converted from u64 to long and then cast
back to u64 - this down/up conversion is unnecessary and can be removed to
improve readability.

Signed-off-by: Nicholas Mc Guire <[email protected]>
---

sched_group_set_rt_period() is called from a single place,
cpu_rt_period_write_uint() at kernel/sched/core.c:8318:
static int cpu_rt_period_write_uint(struct cgroup_subsys_state *css,
				    struct cftype *cftype, u64 rt_period_us)
{
	return sched_group_set_rt_period(css_tg(css), rt_period_us);
}

Here rt_period_us is implicitly converted to long and then cast back to
u64 inside sched_group_set_rt_period, which should be equivalent to simply
passing it as u64 and dropping the cast (a standalone sketch of this
follows the function below):

static int sched_group_set_rt_period(struct task_group *tg, long rt_period_us)
{
	u64 rt_runtime, rt_period;

	rt_period = (u64)rt_period_us * NSEC_PER_USEC;
	rt_runtime = tg->rt_bandwidth.rt_runtime;

	return tg_set_rt_bandwidth(tg, rt_period, rt_runtime);
}
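
To illustrate the point outside the kernel, here is a minimal userspace
sketch of the same conversion chain (the u64 typedef, the NSEC_PER_USEC
value and the test value are assumptions for the example, not kernel code):

#include <stdio.h>
#include <stdint.h>

typedef uint64_t u64;		/* stand-in for the kernel type */
#define NSEC_PER_USEC 1000ULL

/* Old form: the u64 argument is implicitly narrowed to long at the call
 * site, then cast back to u64 before the multiplication. */
static u64 period_old(long rt_period_us)
{
	return (u64)rt_period_us * NSEC_PER_USEC;
}

/* New form: the value stays u64 end to end; no narrowing, no cast. */
static u64 period_new(u64 rt_period_us)
{
	return rt_period_us * NSEC_PER_USEC;
}

int main(void)
{
	u64 us = 5000000000ULL;	/* hypothetical 5000 second period */

	/* With a 64-bit long both results agree; with a 32-bit long the
	 * old form truncates the value before converting back up. */
	printf("old: %llu\n", (unsigned long long)period_old(us));
	printf("new: %llu\n", (unsigned long long)period_new(us));
	return 0;
}

On x86_64 (64-bit long) both helpers return the same value, so there the
change is purely a readability cleanup; any difference only shows up where
long is 32 bits.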

Patch was compile-tested with x86_64_defconfig

Patch is against 4.1-rc1 (localversion-next is -next-20150501)

kernel/sched/core.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fe22f75..cf7f327 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7739,11 +7739,11 @@ static long sched_group_rt_runtime(struct task_group *tg)
 	return rt_runtime_us;
 }
 
-static int sched_group_set_rt_period(struct task_group *tg, long rt_period_us)
+static int sched_group_set_rt_period(struct task_group *tg, u64 rt_period_us)
 {
 	u64 rt_runtime, rt_period;
 
-	rt_period = (u64)rt_period_us * NSEC_PER_USEC;
+	rt_period = rt_period_us * NSEC_PER_USEC;
 	rt_runtime = tg->rt_bandwidth.rt_runtime;
 
 	return tg_set_rt_bandwidth(tg, rt_period, rt_runtime);
--
1.7.10.4


Subject: [tip:sched/core] sched/core: Remove unnecessary down/up conversion

Commit-ID: ce2f5fe46303d1e1a2ba453753a7e8200d32182c
Gitweb: http://git.kernel.org/tip/ce2f5fe46303d1e1a2ba453753a7e8200d32182c
Author: Nicholas Mc Guire <[email protected]>
AuthorDate: Sun, 3 May 2015 10:51:56 +0200
Committer: Ingo Molnar <[email protected]>
CommitDate: Fri, 8 May 2015 12:10:07 +0200

sched/core: Remove unnecessary down/up conversion

'rt_period_us' is automatically type converted from u64 to long and then cast
back to u64 - this down/up conversion is unnecessary and can be removed to
improve readability.

This will also help us not truncate 'rt_period_us' to 32 bits on 32-bit kernels,
should we ever have such large values (unlikely, not least due to procfs).
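
For scale: with a 32-bit long the cutoff is 2^31 - 1 microseconds, roughly
2147 seconds (about 36 minutes) of period, beyond which the old signature
would have truncated the value; the u64 parameter preserves the full range
of the interface.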

Signed-off-by: Nicholas Mc Guire <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
kernel/sched/core.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 527fc28..46a5d6f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7738,11 +7738,11 @@ static long sched_group_rt_runtime(struct task_group *tg)
 	return rt_runtime_us;
 }
 
-static int sched_group_set_rt_period(struct task_group *tg, long rt_period_us)
+static int sched_group_set_rt_period(struct task_group *tg, u64 rt_period_us)
 {
 	u64 rt_runtime, rt_period;
 
-	rt_period = (u64)rt_period_us * NSEC_PER_USEC;
+	rt_period = rt_period_us * NSEC_PER_USEC;
 	rt_runtime = tg->rt_bandwidth.rt_runtime;
 
 	return tg_set_rt_bandwidth(tg, rt_period, rt_runtime);