2020-10-30 14:51:29

by Hui Su

Subject: [PATCH] sched/fair: remove the spin_lock operations

Since commit ab93a4bc955b ("sched/fair: Remove
distribute_running from CFS bandwidth"), there is
nothing to protect between raw_spin_lock_irqsave/restore()
in do_sched_cfs_slack_timer().

So remove it.

Signed-off-by: Hui Su <[email protected]>
---
kernel/sched/fair.c | 3 ---
1 file changed, 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 290f9e38378c..5ecbf5e63198 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5105,9 +5105,6 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
return;

distribute_cfs_runtime(cfs_b);
-
- raw_spin_lock_irqsave(&cfs_b->lock, flags);
- raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
}

/*
--
2.29.0



2020-10-30 15:45:26

by Phil Auld

Subject: Re: [PATCH] sched/fair: remove the spin_lock operations

On Fri, Oct 30, 2020 at 10:46:21PM +0800 Hui Su wrote:
> Since commit ab93a4bc955b ("sched/fair: Remove
> distribute_running from CFS bandwidth"), there is
> nothing to protect between raw_spin_lock_irqsave/restore()
> in do_sched_cfs_slack_timer().
>
> So remove it.
>
> Signed-off-by: Hui Su <[email protected]>
> ---
> kernel/sched/fair.c | 3 ---
> 1 file changed, 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 290f9e38378c..5ecbf5e63198 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5105,9 +5105,6 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
> return;
>
> distribute_cfs_runtime(cfs_b);
> -
> - raw_spin_lock_irqsave(&cfs_b->lock, flags);
> - raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
> }
>
> /*
> --
> 2.29.0
>
>

Nice :)

Reviewed-by: Phil Auld <[email protected]>
--

2020-10-30 18:51:14

by Benjamin Segall

Subject: Re: [PATCH] sched/fair: remove the spin_lock operations

Hui Su <[email protected]> writes:

> Since commit ab93a4bc955b ("sched/fair: Remove
> distribute_running from CFS bandwidth"), there is
> nothing to protect between raw_spin_lock_irqsave/restore()
> in do_sched_cfs_slack_timer().
>
> So remove it.

Reviewed-by: Ben Segall <[email protected]>

(I might nitpick the subject to be clear that it should be trivial
because the lock area is empty, or call them dead or something, but it's
not all that important)

>
> Signed-off-by: Hui Su <[email protected]>
> ---
> kernel/sched/fair.c | 3 ---
> 1 file changed, 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 290f9e38378c..5ecbf5e63198 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5105,9 +5105,6 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
> return;
>
> distribute_cfs_runtime(cfs_b);
> -
> - raw_spin_lock_irqsave(&cfs_b->lock, flags);
> - raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
> }
>
> /*

2020-10-30 22:19:01

by David Laight

Subject: RE: [PATCH] sched/fair: remove the spin_lock operations

From: Benjamin Segall
> Sent: 30 October 2020 18:48
>
> Hui Su <[email protected]> writes:
>
> > Since commit ab93a4bc955b ("sched/fair: Remove
> > distribute_running from CFS bandwidth"), there is
> > nothing to protect between raw_spin_lock_irqsave/restore()
> > in do_sched_cfs_slack_timer().
> >
> > So remove it.
>
> Reviewed-by: Ben Segall <[email protected]>
>
> (I might nitpick the subject to be clear that it should be trivial
> because the lock area is empty, or call them dead or something, but it's
> not all that important)

I don't know about this case, but a lock+unlock can be used
to ensure that nothing else holds the lock when acquiring
the lock requires another lock be held.

So if the normal sequence is:
lock(table)
# lookup item
lock(item)
unlock(table)
....
unlock(item)

Then it can make sense to do:
lock(table)
lock(item)
unlock(item)
....
unlock(table)

although that ought to deserve a comment.

David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)

2020-11-02 13:57:20

by Phil Auld

Subject: Re: [PATCH] sched/fair: remove the spin_lock operations

On Fri, Oct 30, 2020 at 10:16:29PM +0000 David Laight wrote:
> From: Benjamin Segall
> > Sent: 30 October 2020 18:48
> >
> > Hui Su <[email protected]> writes:
> >
> > > Since commit ab93a4bc955b ("sched/fair: Remove
> > > distribute_running from CFS bandwidth"), there is
> > > nothing to protect between raw_spin_lock_irqsave/restore()
> > > in do_sched_cfs_slack_timer().
> > >
> > > So remove it.
> >
> > Reviewed-by: Ben Segall <[email protected]>
> >
> > (I might nitpick the subject to be clear that it should be trivial
> > because the lock area is empty, or call them dead or something, but it's
> > not all that important)
>
> I don't know about this case, but a lock+unlock can be used
> to ensure that nothing else holds the lock when acquiring
> the lock requires another lock be held.
>
> So if the normal sequence is:
> lock(table)
> # lookup item
> lock(item)
> unlock(table)
> ....
> unlock(item)
>
> Then it can make sense to do:
> lock(table)
> lock(item)
> unlock(item)
> ....
> unlock(table)
>
> although that ought to deserve a comment.
>

Nah, this one used to be like this:

raw_spin_lock_irqsave(&cfs_b->lock, flags);
lsub_positive(&cfs_b->runtime, runtime);
cfs_b->distribute_running = 0;
raw_spin_unlock_irqrestore(&cfs_b->lock, flags);

It's just a leftover. I agree that if it were there for some other
purpose it would really need a comment. In this case, it's an
artifact of patch-based development, I think.


Cheers,
Phil



--

2020-11-02 14:13:50

by Hui Su

Subject: Re: [PATCH] sched/fair: remove the spin_lock operations

On Mon, Nov 02, 2020 at 08:53:41AM -0500, Phil Auld wrote:
> On Fri, Oct 30, 2020 at 10:16:29PM +0000 David Laight wrote:
> > From: Benjamin Segall
> > > Sent: 30 October 2020 18:48
> > >
> > > Hui Su <[email protected]> writes:
> > >
> > > > Since commit ab93a4bc955b ("sched/fair: Remove
> > > > distribute_running from CFS bandwidth"), there is
> > > > nothing to protect between raw_spin_lock_irqsave/restore()
> > > > in do_sched_cfs_slack_timer().
> > > >
> > > > So remove it.
> > >
> > > Reviewed-by: Ben Segall <[email protected]>
> > >
> > > (I might nitpick the subject to be clear that it should be trivial
> > > because the lock area is empty, or call them dead or something, but it's
> > > not all that important)
> >
> > I don't know about this case, but a lock+unlock can be used
> > to ensure that nothing else holds the lock when acquiring
> > the lock requires another lock be held.
> >
> > So if the normal sequence is:
> > lock(table)
> > # lookup item
> > lock(item)
> > unlock(table)
> > ....
> > unlock(item)
> >
> > Then it can make sense to do:
> > lock(table)
> > lock(item)
> > unlock(item)
> > ....
> > unlock(table)
> >
> > although that ought to deserve a comment.
> >
>
> Nah, this one used to be like this :
>
> raw_spin_lock_irqsave(&cfs_b->lock, flags);
> lsub_positive(&cfs_b->runtime, runtime);
> cfs_b->distribute_running = 0;
> raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
>
> It's just a leftover. I agree that if it were there for some other
> purpose it would really need a comment. In this case, it's an
> artifact of patch-based development, I think.
>
>
> Cheers,
> Phil
>

Yeah, thanks for your explanation, Phil.

It is just a leftover.