2022-04-12 07:26:07

by Schspa Shi

Subject: [PATCH v2] btrfs: zstd: use spin_lock in timer callback

This is an optimization on top of fix fee13fe96529 ("btrfs:
correct zstd workspace manager lock to use spin_lock_bh()").

The critical section protected by wsm.lock is only entered from process
context and from softirq context.

Because a softirq cannot be preempted by another softirq on the same CPU,
there is no need for the timer callback to call spin_lock_bh(&wsm.lock) to
disable softirqs; spin_lock(&wsm.lock) is enough in this situation.

Changelog:
v1 -> v2:
- Change the commit message to make it more readable.

[1] https://lore.kernel.org/all/[email protected]/

Signed-off-by: Schspa Shi <[email protected]>
---
fs/btrfs/zstd.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index fc42dd0badd7..faa74306f0b7 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -105,10 +105,10 @@ static void zstd_reclaim_timer_fn(struct timer_list *timer)
unsigned long reclaim_threshold = jiffies - ZSTD_BTRFS_RECLAIM_JIFFIES;
struct list_head *pos, *next;

- spin_lock_bh(&wsm.lock);
+ spin_lock(&wsm.lock);

if (list_empty(&wsm.lru_list)) {
- spin_unlock_bh(&wsm.lock);
+ spin_unlock(&wsm.lock);
return;
}

@@ -137,7 +137,7 @@ static void zstd_reclaim_timer_fn(struct timer_list *timer)
if (!list_empty(&wsm.lru_list))
mod_timer(&wsm.timer, jiffies + ZSTD_BTRFS_RECLAIM_JIFFIES);

- spin_unlock_bh(&wsm.lock);
+ spin_unlock(&wsm.lock);
}

/*
--
2.24.3 (Apple Git-128)


2022-04-13 17:35:39

by David Sterba

Subject: Re: [PATCH v2] btrfs: zstd: use spin_lock in timer callback

On Wed, Apr 13, 2022 at 05:58:41PM +0300, Nikolay Borisov wrote:
>
>
> On 11.04.22 г. 18:55 ч., Schspa Shi wrote:
> > This is an optimization for fix fee13fe96529 ("btrfs:
> > correct zstd workspace manager lock to use spin_lock_bh()")
> >
> > The critical region for wsm.lock is only accessed by the process context and
> > the softirq context.
> >
> > Because in the soft interrupt, the critical section will not be preempted by the
> > soft interrupt again, there is no need to call spin_lock_bh(&wsm.lock) to turn
> > off the soft interrupt, spin_lock(&wsm.lock) is enough for this situation.
> >
> > Changelog:
> > v1 -> v2:
> > - Change the commit message to make it more readable.
> >
> > [1] https://lore.kernel.org/all/[email protected]/
> >
> > Signed-off-by: Schspa Shi <[email protected]>
>
> Has there been any measurable impact by this change? While it's correct it does mean that
> someone looking at the code would see that in one call site we use plain spinlock and in
> another a _bh version and this is somewhat inconsistent.

I think it would be hard to measure the impact; maybe under some kind of
load the _bh version would be unnecessarily blocking some other threads.

Regarding the locking primitives used, I'll add a comment about that to
the function; it is indeed inconsistent and not obvious from the
context.
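
Something along these lines, as a sketch (the exact wording of the comment
may end up different):

/*
 * zstd_reclaim_timer_fn() runs from the timer softirq, where another
 * softirq cannot preempt us on this CPU, so the plain spin_lock() on
 * wsm.lock is sufficient there.  All process context users of wsm.lock
 * must keep using spin_lock_bh() so the timer callback cannot interrupt
 * them while they hold the lock.
 */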

2022-04-14 00:25:04

by Nikolay Borisov

Subject: Re: [PATCH v2] btrfs: zstd: use spin_lock in timer callback



On 11.04.22 г. 18:55 ч., Schspa Shi wrote:
> This is an optimization for fix fee13fe96529 ("btrfs:
> correct zstd workspace manager lock to use spin_lock_bh()")
>
> The critical region for wsm.lock is only accessed by the process context and
> the softirq context.
>
> Because in the soft interrupt, the critical section will not be preempted by the
> soft interrupt again, there is no need to call spin_lock_bh(&wsm.lock) to turn
> off the soft interrupt, spin_lock(&wsm.lock) is enough for this situation.
>
> Changelog:
> v1 -> v2:
> - Change the commit message to make it more readable.
>
> [1] https://lore.kernel.org/all/[email protected]/
>
> Signed-off-by: Schspa Shi <[email protected]>

Has there been any measurable impact by this change? While it's correct it does mean that
someone looking at the code would see that in one call site we use plain spinlock and in
another a _bh version and this is somewhat inconsistent.

What's more, I believe this is a noop, since when softirqs are executing
preemptible() would be false due to preempt_count() being non-zero, and in
the bh-disabling code in the spinlock we have:

/* First entry of a task into a BH disabled section? */
if (!current->softirq_disable_cnt) {
        if (preemptible()) {
                local_lock(&softirq_ctrl.lock);
                /* Required to meet the RCU bottomhalf requirements. */
                rcu_read_lock();
        } else {
                DEBUG_LOCKS_WARN_ON(this_cpu_read(softirq_ctrl.cnt));
        }
}


In this case we'd hit the else branch.
> ---
> fs/btrfs/zstd.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
> index fc42dd0badd7..faa74306f0b7 100644
> --- a/fs/btrfs/zstd.c
> +++ b/fs/btrfs/zstd.c
> @@ -105,10 +105,10 @@ static void zstd_reclaim_timer_fn(struct timer_list *timer)
> unsigned long reclaim_threshold = jiffies - ZSTD_BTRFS_RECLAIM_JIFFIES;
> struct list_head *pos, *next;
>
> - spin_lock_bh(&wsm.lock);
> + spin_lock(&wsm.lock);
>
> if (list_empty(&wsm.lru_list)) {
> - spin_unlock_bh(&wsm.lock);
> + spin_unlock(&wsm.lock);
> return;
> }
>
> @@ -137,7 +137,7 @@ static void zstd_reclaim_timer_fn(struct timer_list *timer)
> if (!list_empty(&wsm.lru_list))
> mod_timer(&wsm.timer, jiffies + ZSTD_BTRFS_RECLAIM_JIFFIES);
>
> - spin_unlock_bh(&wsm.lock);
> + spin_unlock(&wsm.lock);
> }
>
> /*

2022-04-14 07:33:27

by Schspa Shi

Subject: Re: [PATCH v2] btrfs: zstd: use spin_lock in timer callback

Nikolay Borisov <[email protected]> writes:

> On 11.04.22 г. 18:55 ч., Schspa Shi wrote:
>> This is an optimization for fix fee13fe96529 ("btrfs:
>> correct zstd workspace manager lock to use spin_lock_bh()")
>> The critical region for wsm.lock is only accessed by the process context and
>> the softirq context.
>> Because in the soft interrupt, the critical section will not be preempted by
>> the
>> soft interrupt again, there is no need to call spin_lock_bh(&wsm.lock) to turn
>> off the soft interrupt, spin_lock(&wsm.lock) is enough for this situation.
>> Changelog:
>> v1 -> v2:
>> - Change the commit message to make it more readable.
>> [1] https://lore.kernel.org/all/[email protected]/
>> Signed-off-by: Schspa Shi <[email protected]>
>
> Has there been any measurable impact by this change? While it's correct it does mean that
> someone looking at the code would see that in one call site we use plain spinlock and in
> another a _bh version and this is somewhat inconsistent.
>
Yes, it may seem a little confusing, but it allows us to save a small
amount of CPU time, and "static inline void red_adaptative_timer(struct
timer_list *t)" in net/sched/sch_red.c has a similar usage.
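
For reference, a minimal sketch of that pattern (illustrative only, not the
verbatim sch_red.c code; struct example_data and example_timer_fn are
made-up names):

#include <linux/jiffies.h>
#include <linux/spinlock.h>
#include <linux/timer.h>

/* Made-up example structure, only to show the locking pattern. */
struct example_data {
        spinlock_t lock;        /* taken with spin_lock_bh() in process context */
        struct timer_list timer;
};

static void example_timer_fn(struct timer_list *t)
{
        struct example_data *d = from_timer(d, t, timer);

        /*
         * The timer callback already runs in softirq context, so the
         * plain spin_lock() is enough; a softirq cannot be preempted
         * by another softirq on the same CPU.
         */
        spin_lock(&d->lock);
        /* ... periodic work ... */
        mod_timer(&d->timer, jiffies + HZ / 2);
        spin_unlock(&d->lock);
}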

> What's more I believe this is a noop since when softirqs are executing preemptible() would
> be false due to preempt_count() being non-0 and in the bh-disabling code
> in the spinlock we have:
>
> /* First entry of a task into a BH disabled section? */
> if (!current->softirq_disable_cnt) {
>         if (preemptible()) {
>                 local_lock(&softirq_ctrl.lock);
>                 /* Required to meet the RCU bottomhalf requirements. */
>                 rcu_read_lock();
>         } else {
>                 DEBUG_LOCKS_WARN_ON(this_cpu_read(softirq_ctrl.cnt));
>         }
> }
>
>
> In this case we'd hit the else branch.

We won't hit the else branch, because current->softirq_disable_cnt
won't be zero in the original case.

__do_softirq(void)
    softirq_handle_begin(void)
        __local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
        /* current->softirq_disable_cnt will be > 0 at this time */
    ......
    zstd_reclaim_timer_fn(struct timer_list *timer)
        spin_lock_bh(&wsm.lock);
            __local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
            if (!current->softirq_disable_cnt) {
                    /* this if branch won't hit */
            }

    softirq_handle_end();

In this case, the "__local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);" won't
do anything useful; it only increases the softirq disable depth, which is
decreased again in "__local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);".

So it's safe to replace spin_lock_bh with spin_lock in a timer callback
function.


For the ksoftirqd, it's all the same.
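
To make the depth-counter argument concrete, here is a small userspace toy
model (plain C, not kernel code; the names only mimic the logic above):

#include <assert.h>
#include <stdio.h>

/* Toy model of a per-task BH-disable depth counter. */
static int softirq_disable_cnt;

static void model_bh_disable(void)
{
        if (!softirq_disable_cnt) {
                /* Outermost disable: the expensive work happens only here. */
                printf("outermost BH-disable\n");
        }
        softirq_disable_cnt++;
}

static void model_bh_enable(void)
{
        softirq_disable_cnt--;
        if (!softirq_disable_cnt)
                printf("outermost BH-enable: pending softirqs would run\n");
}

int main(void)
{
        /* __do_softirq() has already disabled BH before the timer fn runs. */
        model_bh_disable();

        /* spin_lock_bh() in the timer callback only bumps the counter ... */
        model_bh_disable();
        assert(softirq_disable_cnt == 2);
        /* ... and spin_unlock_bh() only drops it back. */
        model_bh_enable();

        /* softirq_handle_end() */
        model_bh_enable();
        return 0;
}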

2022-04-14 16:13:07

by Nikolay Borisov

Subject: Re: [PATCH v2] btrfs: zstd: use spin_lock in timer callback



On 13.04.22 г. 19:03 ч., Schspa Shi wrote:
> Nikolay Borisov <[email protected]> writes:
>
>> On 11.04.22 г. 18:55 ч., Schspa Shi wrote:
>>> This is an optimization for fix fee13fe96529 ("btrfs:
>>> correct zstd workspace manager lock to use spin_lock_bh()")
>>> The critical region for wsm.lock is only accessed by the process context and
>>> the softirq context.
>>> Because in the soft interrupt, the critical section will not be preempted by
>>> the
>>> soft interrupt again, there is no need to call spin_lock_bh(&wsm.lock) to turn
>>> off the soft interrupt, spin_lock(&wsm.lock) is enough for this situation.
>>> Changelog:
>>> v1 -> v2:
>>> - Change the commit message to make it more readable.
>>> [1] https://lore.kernel.org/all/[email protected]/
>>> Signed-off-by: Schspa Shi <[email protected]>
>>
>> Has there been any measurable impact by this change? While it's correct it does mean that
>> someone looking at the code would see that in one call site we use plain spinlock and in
>> another a _bh version and this is somewhat inconsistent.
>>
> Yes, it may seem a little confused. but it's allowed to save some
> little peace of CPU times.
> and "static inline void red_adaptative_timer(struct timer_list *t) in
> net/sched/sch_red.c"
> have similar usage.
>
>> What's more I believe this is a noop since when softirqs are executing preemptible() would
>> be false due to preempt_count() being non-0 and in the bh-disabling code
>> in the spinlock we have:
>>
>> /* First entry of a task into a BH disabled section? */
>> if (!current->softirq_disable_cnt) {
>>         if (preemptible()) {
>>                 local_lock(&softirq_ctrl.lock);
>>                 /* Required to meet the RCU bottomhalf requirements. */
>>                 rcu_read_lock();
>>         } else {
>>                 DEBUG_LOCKS_WARN_ON(this_cpu_read(softirq_ctrl.cnt));
>>         }
>> }
>>
>>
>> In this case we'd hit the else branch.
>
> We won't hit the else branch. because current->softirq_disable_cnt
> won't be zero in the origin case.
>
> __do_softirq(void)
> softirq_handle_begin(void)
> __local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
> current->softirq_disable_cnt will be > 0 at this time.

That's only relevant on CONFIG_PREEMPT_RT though; on usual kernels
softirq_handle_begin is empty. Furthermore, in the non-PREEMPT_RT case, if
preemptible() always returns false, this means that even in the
__do_softirq path we'll never increment softirq_disable_cnt. So if
anything this change is only beneficial (and theoretically at that) in
PREEMPT_RT scenarios.

> ......
> zstd_reclaim_timer_fn(struct timer_list *timer)
> spin_lock_bh(&wsm.lock);
> __local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
> if (!current->softirq_disable_cnt) {
> // this if branch won't hit
> }
>
> softirq_handle_end();
>
> In this case, the "__local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);"
> won't do anything useful it only
> increase softirq disable depth and decrease it in
> "__local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);".
>
> So it's safe to replace spin_lock_bh with spin_lock in a timer
> callback function.
>
>
> For the ksoftirqd, it's all the same.
>

2022-04-15 17:56:51

by David Sterba

Subject: Re: [PATCH v2] btrfs: zstd: use spin_lock in timer callback

On Mon, Apr 11, 2022 at 11:55:41PM +0800, Schspa Shi wrote:
> This is an optimization for fix fee13fe96529 ("btrfs:
> correct zstd workspace manager lock to use spin_lock_bh()")
>
> The critical region for wsm.lock is only accessed by the process context and
> the softirq context.
>
> Because in the soft interrupt, the critical section will not be preempted by the
> soft interrupt again, there is no need to call spin_lock_bh(&wsm.lock) to turn
> off the soft interrupt, spin_lock(&wsm.lock) is enough for this situation.
>
> Changelog:
> v1 -> v2:
> - Change the commit message to make it more readable.
>
> [1] https://lore.kernel.org/all/[email protected]/
>
> Signed-off-by: Schspa Shi <[email protected]>

Added to misc-next, thanks.

2022-04-16 02:31:43

by Schspa Shi

Subject: Re: [PATCH v2] btrfs: zstd: use spin_lock in timer callback

Nikolay Borisov <[email protected]> writes:

> On 13.04.22 г. 19:03 ч., Schspa Shi wrote:
>> Nikolay Borisov <[email protected]> writes:
>>
>>> On 11.04.22 г. 18:55 ч., Schspa Shi wrote:
>>>> This is an optimization for fix fee13fe96529 ("btrfs:
>>>> correct zstd workspace manager lock to use spin_lock_bh()")
>>>> The critical region for wsm.lock is only accessed by the process context and
>>>> the softirq context.
>>>> Because in the soft interrupt, the critical section will not be preempted by
>>>> the
>>>> soft interrupt again, there is no need to call spin_lock_bh(&wsm.lock) to turn
>>>> off the soft interrupt, spin_lock(&wsm.lock) is enough for this situation.
>>>> Changelog:
>>>> v1 -> v2:
>>>> - Change the commit message to make it more readable.
>>>> [1] https://lore.kernel.org/all/[email protected]/
>>>> Signed-off-by: Schspa Shi <[email protected]>
>>>
>>> Has there been any measurable impact by this change? While it's correct it does mean that
>>> someone looking at the code would see that in one call site we use plain spinlock and in
>>> another a _bh version and this is somewhat inconsistent.
>>>
>> Yes, it may seem a little confused. but it's allowed to save some
>> little peace of CPU times.
>> and "static inline void red_adaptative_timer(struct timer_list *t) in
>> net/sched/sch_red.c"
>> have similar usage.
>>
>>> What's more I believe this is a noop since when softirqs are executing preemptible() would
>>> be false due to preempt_count() being non-0 and in the bh-disabling code
>>> in the spinlock we have:
>>>
>>> /* First entry of a task into a BH disabled section? */
>>> if (!current->softirq_disable_cnt) {
>>>         if (preemptible()) {
>>>                 local_lock(&softirq_ctrl.lock);
>>>                 /* Required to meet the RCU bottomhalf requirements. */
>>>                 rcu_read_lock();
>>>         } else {
>>>                 DEBUG_LOCKS_WARN_ON(this_cpu_read(softirq_ctrl.cnt));
>>>         }
>>> }
>>>
>>>
>>> In this case we'd hit the else branch.
>> We won't hit the else branch. because current->softirq_disable_cnt
>> won't be zero in the origin case.
>> __do_softirq(void)
>> softirq_handle_begin(void)
>> __local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
>> current->softirq_disable_cnt will be > 0 at this time.
>
> That's only relevant on CONFIG_PREEMPT_RT though, on usual kernels
> softirq_handle_being is empty. Furthermore, in case of the non-preempt
> rt if preemptible() always returns false this means that even in the
> __do_softirq path we'll never increment softirq_disable_cnt. So if
> anything this change is only beneficial (theoretically at that in preempt_rt
> scenarios).
>
In either case, __local_bh_disable_ip will add to the preempt count or do
something similar. For CONFIG_PREEMPT_RT, as we have discussed, it will be
OK and somewhat beneficial.

In the case of CONFIG_TRACE_IRQFLAGS:

#ifdef CONFIG_TRACE_IRQFLAGS
void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
{
        unsigned long flags;

        WARN_ON_ONCE(in_hardirq());

        raw_local_irq_save(flags);
        /*
         * The preempt tracer hooks into preempt_count_add and will break
         * lockdep because it calls back into lockdep after SOFTIRQ_OFFSET
         * is set and before current->softirq_enabled is cleared.
         * We must manually increment preempt_count here and manually
         * call the trace_preempt_off later.
         */
        __preempt_count_add(cnt);
        /*
         * Were softirqs turned off above:
         */
        if (softirq_count() == (cnt & SOFTIRQ_MASK))
                lockdep_softirqs_off(ip);
        raw_local_irq_restore(flags);

        if (preempt_count() == cnt) {
#ifdef CONFIG_DEBUG_PREEMPT
                current->preempt_disable_ip = get_lock_parent_ip();
#endif
                trace_preempt_off(CALLER_ADDR0, get_lock_parent_ip());
        }
}
EXPORT_SYMBOL(__local_bh_disable_ip);
#endif /* CONFIG_TRACE_IRQFLAGS */

There is also __preempt_count_add(cnt) plus local IRQ disabling, which
slows things down correspondingly.

In the other case (usual kernels):

#if defined(CONFIG_PREEMPT_RT) || defined(CONFIG_TRACE_IRQFLAGS)
extern void __local_bh_disable_ip(unsigned long ip, unsigned int cnt);
#else
static __always_inline void __local_bh_disable_ip(unsigned long ip,
                                                  unsigned int cnt)
{
        preempt_count_add(cnt);
        barrier();
}
#endif

There is preempt_count_add(cnt), and it's useless in the timer's callback.

In summary:
There is a benefit in all cases to replacing spin_lock_bh with spin_lock in
a timer's callback.
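
Roughly, on a plain build (!PREEMPT_RT, !CONFIG_TRACE_IRQFLAGS), the
difference amounts to the following (simplified; lockdep and contention
hooks omitted):

/*
 * spin_lock_bh(&wsm.lock)
 *   -> __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET)
 *        -> preempt_count_add(cnt); barrier();
 *   -> acquire the raw spinlock
 *
 * spin_lock(&wsm.lock)
 *   -> acquire the raw spinlock
 *
 * Inside a timer callback softirqs are already accounted for in
 * preempt_count, so the extra add/sub pair buys nothing.
 */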

>> ......
>> zstd_reclaim_timer_fn(struct timer_list *timer)
>> spin_lock_bh(&wsm.lock);
>> __local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
>> if (!current->softirq_disable_cnt) {
>> // this if branch won't hit
>> }
>> softirq_handle_end();
>> In this case, the "__local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);"
>> won't do anything useful it only
>> increase softirq disable depth and decrease it in
>> "__local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);".
>> So it's safe to replace spin_lock_bh with spin_lock in a timer
>> callback function.
>>
>> For the ksoftirqd, it's all the same.
>>