2017-07-11 16:13:20

by Peter Zijlstra

Subject: Re: [PATCH v7 06/16] lockdep: Detect and handle hist_lock ring buffer overwrite


ARGH!!! Please, if there are known holes in patches, put a comment in.

I now had to independently discover this problem during review of the
last patch.

On Wed, May 24, 2017 at 05:59:39PM +0900, Byungchul Park wrote:
> The ring buffer can be overwritten by hardirq/softirq/work contexts.
> Those cases must be considered on rollback or commit. For example,
>
> |<------ hist_lock ring buffer size ----->|
> ppppppppppppiiiiiiiiiiiiiiiiiiiiiiiiiiiiiii
> wrapped > iiiiiiiiiiiiiiiiiiiiiii....................
>
> where 'p' represents an acquisition in process context,
> 'i' represents an acquisition in irq context.
>
> On irq exit, crossrelease tries to roll back idx to the original position,
> but it should not, because the entry has already been invalidated by the
> overwriting 'i'. Avoid rollback or commit for overwritten entries.
>
> Signed-off-by: Byungchul Park <[email protected]>
> ---
> include/linux/lockdep.h | 20 +++++++++++
> include/linux/sched.h | 4 +++
> kernel/locking/lockdep.c | 92 +++++++++++++++++++++++++++++++++++++++++-------
> 3 files changed, 104 insertions(+), 12 deletions(-)
>
> diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
> index d531097..a03f79d 100644
> --- a/include/linux/lockdep.h
> +++ b/include/linux/lockdep.h
> @@ -284,6 +284,26 @@ struct held_lock {
> */
> struct hist_lock {
> /*
> + * Id for each entry in the ring buffer. This is used to
> + * decide whether the ring buffer was overwritten or not.
> + *
> + * For example,
> + *
> + * |<----------- hist_lock ring buffer size ------->|
> + * pppppppppppppppppppppiiiiiiiiiiiiiiiiiiiiiiiiiiiii
> + * wrapped > iiiiiiiiiiiiiiiiiiiiiiiiiii.......................
> + *
> + * where 'p' represents an acquisition in process
> + * context, 'i' represents an acquisition in irq
> + * context.
> + *
> + * In this example, the ring buffer was overwritten by
> + * acquisitions in irq context, which should be detected on
> + * rollback or commit.
> + */
> + unsigned int hist_id;
> +
> + /*
> * Separate stack_trace data. This will be used at the commit step.
> */
> struct stack_trace trace;
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 5f6d6f4..9e1437c 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1756,6 +1756,10 @@ struct task_struct {
> unsigned int xhlock_idx_soft; /* For restoring at softirq exit */
> unsigned int xhlock_idx_hard; /* For restoring at hardirq exit */
> unsigned int xhlock_idx_work; /* For restoring at work exit */
> + unsigned int hist_id;
> + unsigned int hist_id_soft; /* For overwrite check at softirq exit */
> + unsigned int hist_id_hard; /* For overwrite check at hardirq exit */
> + unsigned int hist_id_work; /* For overwrite check at work exit */
> #endif
> #ifdef CONFIG_UBSAN
> unsigned int in_ubsan;


Right, like I wrote in the comment; I don't think you need quite this
much.

The problem only happens if you rewind more than MAX_XHLOCKS_NR;
although I realize it can be an accumulative rewind, which makes it
slightly more tricky.

We can either make the rewind more expensive and make xhlock_valid()
false for each rewound entry; or we can keep the max_idx and account
from there. If we rewind >= MAX_XHLOCKS_NR from the max_idx we need to
invalidate the entire state, which we can do by invalidating
xhlock_valid() or by re-introduction of the hist_gen_id. When we
invalidate the entire state, we can also clear the max_idx.

Given that rewinding _that_ far should be fairly rare (do we have
numbers?), simply iterating the entire thing and setting
xhlock->hlock.instance = NULL should work, I think.
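
A minimal sketch of that iteration (untested; assumes the xhlock()
accessor and MAX_XHLOCKS_NR from the crossrelease patches):

	/*
	 * Sketch: make xhlock_valid() false for every entry, wiping
	 * the whole lock history in one pass.
	 */
	static void wipe_xhlocks(void)
	{
		int i;

		for (i = 0; i < MAX_XHLOCKS_NR; i++)
			xhlock(i).hlock.instance = NULL;
	}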


2017-07-12 02:01:40

by Byungchul Park

Subject: Re: [PATCH v7 06/16] lockdep: Detect and handle hist_lock ring buffer overwrite

On Tue, Jul 11, 2017 at 06:12:32PM +0200, Peter Zijlstra wrote:
>
> ARGH!!! Please, if there are known holes in patches, put a comment in.

The fourth paragraph of the changelog was meant as that comment, but it
was not enough. I will add more comments in such cases.

> I now had to independently discover this problem during review of the
> last patch.
>

...

>
> Right, like I wrote in the comment; I don't think you need quite this
> much.
>
> The problem only happens if you rewind more than MAX_XHLOCKS_NR;
> although I realize it can be an accumulative rewind, which makes it
> slightly more tricky.
>
> We can either make the rewind more expensive and make xhlock_valid()
> false for each rewound entry; or we can keep the max_idx and account

Does max_idx mean the 'original position - 1'?

> from there. If we rewind >= MAX_XHLOCKS_NR from the max_idx we need to
> invalidate the entire state, which we can do by invalidating

Could you explain what the entire state is?

> xhlock_valid() or by re-introduction of the hist_gen_id. When we

What does the re-introduction of the hist_gen_id mean?

> invalidate the entire state, we can also clear the max_idx.

2017-07-12 07:56:29

by Peter Zijlstra

Subject: Re: [PATCH v7 06/16] lockdep: Detect and handle hist_lock ring buffer overwrite

On Wed, Jul 12, 2017 at 11:00:53AM +0900, Byungchul Park wrote:
> On Tue, Jul 11, 2017 at 06:12:32PM +0200, Peter Zijlstra wrote:

> > Right, like I wrote in the comment; I don't think you need quite this
> > much.
> >
> > The problem only happens if you rewind more than MAX_XHLOCKS_NR;
> > although I realize it can be an accumulative rewind, which makes it
> > slightly more tricky.
> >
> > We can either make the rewind more expensive and make xhlock_valid()
> > false for each rewound entry; or we can keep the max_idx and account
>
> Does max_idx mean the 'original position - 1'?

idx = ++current->hist_idx;
if ((int)(idx - current->hist_idx_max) > 0)
	current->hist_idx_max = idx;


I've forgotten if the idx points to the most recent entry or beyond it.

Given the circular nature, and tail being one ahead of head, the max
effectively tracks the tail (I suppose we can also do an explicit tail
tracking, but that might end up more difficult).

This allows rewinds of less than array_size() while still maintaining a
correct tail.

Only once we (cumulative or not) rewind past the tail -- IOW, lose the
_entire_ history -- do we need to do something drastic.
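
In code, that past-the-tail test can lean on unsigned index wraparound;
a minimal sketch (untested, using the hist_idx/hist_idx_max names from
above):

	/*
	 * Sketch: after restoring hist_idx on context exit, check whether
	 * the (possibly cumulative) rewind dropped the entire window.
	 * Unsigned subtraction keeps this correct across index wraparound.
	 */
	static bool rewound_past_tail(unsigned int restored_idx)
	{
		return current->hist_idx_max - restored_idx >= MAX_XHLOCKS_NR;
	}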

> > from there. If we rewind >= MAX_XHLOCKS_NR from the max_idx we need to
> > invalidate the entire state, which we can do by invalidating
>
> Could you explain what the entire state is?

All hist_lock[]. Did the above help?

> > xhlock_valid() or by re-introduction of the hist_gen_id. When we
>
> What does the re-introduction of the hist_gen_id mean?

What you used to call work_id or something like that. A generation count
for the hist_lock[].

2017-07-13 02:08:32

by Byungchul Park

Subject: Re: [PATCH v7 06/16] lockdep: Detect and handle hist_lock ring buffer overwrite

On Wed, Jul 12, 2017 at 09:56:17AM +0200, Peter Zijlstra wrote:
> On Wed, Jul 12, 2017 at 11:00:53AM +0900, Byungchul Park wrote:
> > On Tue, Jul 11, 2017 at 06:12:32PM +0200, Peter Zijlstra wrote:
>
> > > Right, like I wrote in the comment; I don't think you need quite this
> > > much.
> > >
> > > The problem only happens if you rewind more than MAX_XHLOCKS_NR;
> > > although I realize it can be an accumulative rewind, which makes it
> > > slightly more tricky.
> > >
> > > We can either make the rewind more expensive and make xhlock_valid()
> > > false for each rewound entry; or we can keep the max_idx and account
> >
> > Does max_idx mean the 'original position - 1'?
>
> idx = ++current->hist_idx;
> if ((int)(idx - current->hist_idx_max) > 0)
> 	current->hist_idx_max = idx;
>
>
> I've forgotten if the idx points to the most recent entry or beyond it.
>
> Given the circular nature, and tail being one ahead of head, the max
> effectively tracks the tail (I suppose we can also do an explicit tail
> tracking, but that might end up more difficult).
>
> This allows rewinds of less than array_size() while still maintaining a
> correct tail.
>
> Only once we (cumulative or not) rewind past the tail -- IOW, lose the
> _entire_ history -- do we need to do something drastic.

I am sorry, but I don't understand why we have to do the drastic work.

Does my approach have problems, rewinding to the 'original idx' on exit and
deciding whether an overwrite happened? This way, I think, there is no need
for the drastic work. Or does mine add more overhead in the usual case?

>
> > > from there. If we rewind >= MAX_XHLOCKS_NR from the max_idx we need to
> > > invalidate the entire state, which we can do by invalidating
> >
> > Could you explain what the entire state is?
>
> All hist_lock[]. Did the above help?
>
> > > xhlock_valid() or by re-introduction of the hist_gen_id. When we
> >
> > What does the re-introduction of the hist_gen_id mean?
>
> What you used to call work_id or something like that. A generation count
> for the hist_lock[].

2017-07-13 08:14:51

by Peter Zijlstra

Subject: Re: [PATCH v7 06/16] lockdep: Detect and handle hist_lock ring buffer overwrite

On Thu, Jul 13, 2017 at 11:07:45AM +0900, Byungchul Park wrote:
> Does my approach have problems, rewinding to the 'original idx' on exit and
> deciding whether an overwrite happened? This way, I think, there is no need
> for the drastic work. Or does mine add more overhead in the usual case?

So I think that invalidating just the one entry doesn't work; the moment
you fill that up, the iteration in commit_xhlocks() will again use the
next one, etc., even though you wanted it not to.

So we need to wipe the _entire_ history.

So I _think_ the below should work, but it's not been near a compiler.


--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -822,6 +822,7 @@ struct task_struct {
unsigned int xhlock_idx_soft; /* For restoring at softirq exit */
unsigned int xhlock_idx_hard; /* For restoring at hardirq exit */
unsigned int xhlock_idx_hist; /* For restoring at history boundaries */
+ unsigned int xhlock_idx_max;
#endif
#ifdef CONFIG_UBSAN
unsigned int in_ubsan;
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4746,6 +4746,14 @@ EXPORT_SYMBOL_GPL(lockdep_rcu_suspicious
static atomic_t cross_gen_id; /* Can be wrapped */

/*
+ * make xhlock_valid() false.
+ */
+static inline void invalidate_xhlock(struct hist_lock *xhlock)
+{
+ xhlock->hlock.instance = NULL;
+}
+
+/*
* Lock history stacks; we have 3 nested lock history stacks:
*
* Hard IRQ
@@ -4764,28 +4772,58 @@ static atomic_t cross_gen_id; /* Can be
* MAX_XHLOCKS_NR ? Possibly re-introduce hist_gen_id ?
*/

-void crossrelease_hardirq_start(void)
+static inline void __crossrelease_start(unsigned int *stamp)
{
if (current->xhlocks)
- current->xhlock_idx_hard = current->xhlock_idx;
+ *stamp = current->xhlock_idx;
+}
+
+static void __crossrelease_end(unsigned int *stamp)
+{
+ int i;
+
+ if (!current->xhlocks)
+ return;
+
+ current->xhlock_idx = *stamp;
+
+ /*
+ * If we rewind past the tail; all of history is lost.
+ */
+ if ((current->xhlock_idx_max - *stamp) < MAX_XHLOCKS_NR)
+ return;
+
+ /*
+ * Invalidate the entire history..
+ */
+ for (i = 0; i < MAX_XHLOCKS_NR; i++)
+ invalidate_xhlock(&xhlock(i));
+
+ current->xhlock_idx = 0;
+ current->xhlock_idx_hard = 0;
+ current->xhlock_idx_soft = 0;
+ current->xhlock_idx_hist = 0;
+ current->xhlock_idx_max = 0;
+}
+
+void crossrelease_hardirq_start(void)
+{
+ __crossrelease_start(&current->xhlock_idx_hard);
}

void crossrelease_hardirq_end(void)
{
- if (current->xhlocks)
- current->xhlock_idx = current->xhlock_idx_hard;
+ __crossrelease_end(&current->xhlock_idx_hard);
}

void crossrelease_softirq_start(void)
{
- if (current->xhlocks)
- current->xhlock_idx_soft = current->xhlock_idx;
+ __crossrelease_start(&current->xhlock_idx_soft);
}

void crossrelease_softirq_end(void)
{
- if (current->xhlocks)
- current->xhlock_idx = current->xhlock_idx_soft;
+ __crossrelease_end(&current->xhlock_idx_soft);
}

/*
@@ -4806,14 +4844,12 @@ void crossrelease_softirq_end(void)
*/
void crossrelease_hist_start(void)
{
- if (current->xhlocks)
- current->xhlock_idx_hist = current->xhlock_idx;
+ __crossrelease_start(&current->xhlock_idx_hist);
}

void crossrelease_hist_end(void)
{
- if (current->xhlocks)
- current->xhlock_idx = current->xhlock_idx_hist;
+ __crossrelease_end(&current->xhlock_idx_hist);
}

static int cross_lock(struct lockdep_map *lock)
@@ -4880,6 +4916,9 @@ static void add_xhlock(struct held_lock
unsigned int idx = ++current->xhlock_idx;
struct hist_lock *xhlock = &xhlock(idx);

+ if ((int)(current->xhlock_idx_max - idx) < 0)
+ current->xhlock_idx_max = idx;
+
#ifdef CONFIG_DEBUG_LOCKDEP
/*
* This can be done locklessly because they are all task-local

2017-07-13 08:58:35

by Byungchul Park

Subject: Re: [PATCH v7 06/16] lockdep: Detect and handle hist_lock ring buffer overwrite

On Thu, Jul 13, 2017 at 10:14:42AM +0200, Peter Zijlstra wrote:
> On Thu, Jul 13, 2017 at 11:07:45AM +0900, Byungchul Park wrote:
> > Does my approach have problems, rewinding to the 'original idx' on exit and
> > deciding whether an overwrite happened? This way, I think, there is no need
> > for the drastic work. Or does mine add more overhead in the usual case?
>
> So I think that invalidating just the one entry doesn't work; the moment

I think invalidating just the one is enough. After rewinding, the entry
is invalidated and the ring buffer starts to be filled forward from that
point with valid ones. On commit, it will proceed backward through valid
ones until it meets the invalidated entry and stops.

IOW, in the overwritten case:

         rewind to here
         |
ppppppppppiiiiiiiiiiiiiiii
iiiiiiiiiiiiiii

invalidate it on exit_irq
and start to fill from here again
         |
pppppppppxiiiiiiiiiiiiiiii
iiiiiiiiiiiiiii

when commit occurs here
                    |
pppppppppxpppppppppppiiiii

do commit within this range
          |<--------|
pppppppppxpppppppppppiiiii

So I think this works and is much simpler. Anything I missed?
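
For concreteness, a minimal sketch of this per-entry check on hardirq
exit (untested; the hist_id fields follow the v7 hunks quoted at the top
of the thread, the remaining names are assumptions):

	void crossrelease_hardirq_end(void)
	{
		if (current->xhlocks) {
			unsigned int idx = current->xhlock_idx_hard;
			struct hist_lock *xhlock = &xhlock(idx);

			/* Rewind to the position saved at irq entry. */
			current->xhlock_idx = idx;

			/*
			 * If the entry's hist_id no longer matches the id
			 * saved at irq entry, it was overwritten while in
			 * irq context: invalidate just this one entry so a
			 * later commit stops here.
			 */
			if (xhlock->hist_id != current->hist_id_hard)
				invalidate_xhlock(xhlock);
		}
	}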

> you fill that up, the iteration in commit_xhlocks() will again use the
> next one, etc., even though you wanted it not to.
>
> So we need to wipe the _entire_ history.
>
> So I _think_ the below should work, but it's not been near a compiler.
>
>
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -822,6 +822,7 @@ struct task_struct {
> unsigned int xhlock_idx_soft; /* For restoring at softirq exit */
> unsigned int xhlock_idx_hard; /* For restoring at hardirq exit */
> unsigned int xhlock_idx_hist; /* For restoring at history boundaries */
> + unsigned int xhlock_idx_max;
> #endif
> #ifdef CONFIG_UBSAN
> unsigned int in_ubsan;
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -4746,6 +4746,14 @@ EXPORT_SYMBOL_GPL(lockdep_rcu_suspicious
> static atomic_t cross_gen_id; /* Can be wrapped */
>
> /*
> + * make xhlock_valid() false.
> + */
> +static inline void invalidate_xhlock(struct hist_lock *xhlock)
> +{
> + xhlock->hlock.instance = NULL;
> +}
> +
> +/*
> * Lock history stacks; we have 3 nested lock history stacks:
> *
> * Hard IRQ
> @@ -4764,28 +4772,58 @@ static atomic_t cross_gen_id; /* Can be
> * MAX_XHLOCKS_NR ? Possibly re-introduce hist_gen_id ?
> */
>
> -void crossrelease_hardirq_start(void)
> +static inline void __crossrelease_start(unsigned int *stamp)
> {
> if (current->xhlocks)
> - current->xhlock_idx_hard = current->xhlock_idx;
> + *stamp = current->xhlock_idx;
> +}
> +
> +static void __crossrelease_end(unsigned int *stamp)
> +{
> + int i;
> +
> + if (!current->xhlocks)
> + return;
> +
> + current->xhlock_idx = *stamp;
> +
> + /*
> + * If we rewind past the tail; all of history is lost.
> + */
> + if ((current->xhlock_idx_max - *stamp) < MAX_XHLOCKS_NR)
> + return;
> +
> + /*
> + * Invalidate the entire history..
> + */
> + for (i = 0; i < MAX_XHLOCKS_NR; i++)
> + invalidate_xhlock(&xhlock(i));
> +
> + current->xhlock_idx = 0;
> + current->xhlock_idx_hard = 0;
> + current->xhlock_idx_soft = 0;
> + current->xhlock_idx_hist = 0;
> + current->xhlock_idx_max = 0;
> +}
> +
> +void crossrelease_hardirq_start(void)
> +{
> + __crossrelease_start(&current->xhlock_idx_hard);
> }
>
> void crossrelease_hardirq_end(void)
> {
> - if (current->xhlocks)
> - current->xhlock_idx = current->xhlock_idx_hard;
> + __crossrelease_end(&current->xhlock_idx_hard);
> }
>
> void crossrelease_softirq_start(void)
> {
> - if (current->xhlocks)
> - current->xhlock_idx_soft = current->xhlock_idx;
> + __crossrelease_start(&current->xhlock_idx_soft);
> }
>
> void crossrelease_softirq_end(void)
> {
> - if (current->xhlocks)
> - current->xhlock_idx = current->xhlock_idx_soft;
> + __crossrelease_end(&current->xhlock_idx_soft);
> }
>
> /*
> @@ -4806,14 +4844,12 @@ void crossrelease_softirq_end(void)
> */
> void crossrelease_hist_start(void)
> {
> - if (current->xhlocks)
> - current->xhlock_idx_hist = current->xhlock_idx;
> + __crossrelease_start(&current->xhlock_idx_hist);
> }
>
> void crossrelease_hist_end(void)
> {
> - if (current->xhlocks)
> - current->xhlock_idx = current->xhlock_idx_hist;
> + __crossrelease_end(&current->xhlock_idx_hist);
> }
>
> static int cross_lock(struct lockdep_map *lock)
> @@ -4880,6 +4916,9 @@ static void add_xhlock(struct held_lock
> unsigned int idx = ++current->xhlock_idx;
> struct hist_lock *xhlock = &xhlock(idx);
>
> + if ((int)(current->xhlock_idx_max - idx) < 0)
> + current->xhlock_idx_max = idx;
> +
> #ifdef CONFIG_DEBUG_LOCKDEP
> /*
> * This can be done locklessly because they are all task-local

2017-07-13 09:51:14

by Peter Zijlstra

Subject: Re: [PATCH v7 06/16] lockdep: Detect and handle hist_lock ring buffer overwrite

On Thu, Jul 13, 2017 at 05:57:46PM +0900, Byungchul Park wrote:
> On Thu, Jul 13, 2017 at 10:14:42AM +0200, Peter Zijlstra wrote:
> > On Thu, Jul 13, 2017 at 11:07:45AM +0900, Byungchul Park wrote:
> > > Does my approach have problems, rewinding to the 'original idx' on exit and
> > > deciding whether an overwrite happened? This way, I think, there is no need
> > > for the drastic work. Or does mine add more overhead in the usual case?
> >
> > So I think that invalidating just the one entry doesn't work; the moment
>
> I think invalidating just the one is enough. After rewinding, the entry
> is invalidated and the ring buffer starts to be filled forward from that
> point with valid ones. On commit, it will proceed backward through valid
> ones until it meets the invalidated entry and stops.
>
> IOW, in the overwritten case:
>
>          rewind to here
>          |
> ppppppppppiiiiiiiiiiiiiiii
> iiiiiiiiiiiiiii
>
> invalidate it on exit_irq
> and start to fill from here again
>          |
> pppppppppxiiiiiiiiiiiiiiii
> iiiiiiiiiiiiiii
>
> when commit occurs here
>                     |
> pppppppppxpppppppppppiiiii
>
> do commit within this range
>           |<--------|
> pppppppppxpppppppppppiiiii
>
> So I think this works and is much simpler. Anything I missed?


wait_for_completion(&C);
atomic_inc_return();

mutex_lock(A1);
mutex_unlock(A1);


<IRQ>
spin_lock(B1);
spin_unlock(B1);

...

spin_lock(B64);
spin_unlock(B64);
</IRQ>


mutex_lock(A2);
mutex_unlock(A2);

complete(&C);


That gives:

xhist[ 0] = A1
xhist[ 1] = B1
...
xhist[63] = B63

then we wrap and have:

xhist[0] = B64

then we rewind to 1 and invalidate to arrive at:

xhist[ 0] = B64
xhist[ 1] = NULL <-- idx
xhist[ 2] = B2
...
xhist[63] = B63


Then we do A2 and get

xhist[ 0] = B64
xhist[ 1] = A2 <-- idx
xhist[ 2] = B2
...
xhist[63] = B63

and the commit_xhlocks() will happily create links between C and A2,
B2..B64.

The C<->A2 link is desired, the C<->B* are not.
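
For reference, the commit-side walk that would create those links looks
roughly like this (a heavily simplified sketch; the gen_id and context
filters are elided):

	/*
	 * Sketch of commit_xhlocks(): walk the ring backwards from the
	 * current index and link the cross-lock to every entry until an
	 * invalid entry terminates the walk. Stale but still-valid
	 * entries (the leftover B*) get linked too.
	 */
	static void commit_xhlocks(struct cross_lock *xlock)
	{
		unsigned int cur = current->xhlock_idx;
		unsigned int i;

		for (i = 0; i < MAX_XHLOCKS_NR; i++) {
			struct hist_lock *xhlock = &xhlock(cur - i);

			if (!xhlock_valid(xhlock))
				break;	/* invalidated entry: stop */

			commit_xhlock(xlock, xhlock);
		}
	}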

2017-07-13 10:10:43

by Byungchul Park

Subject: Re: [PATCH v7 06/16] lockdep: Detect and handle hist_lock ring buffer overwrite

On Thu, Jul 13, 2017 at 11:50:52AM +0200, Peter Zijlstra wrote:
> wait_for_completion(&C);
> atomic_inc_return();
>
> mutex_lock(A1);
> mutex_unlock(A1);
>
>
> <IRQ>
> spin_lock(B1);
> spin_unlock(B1);
>
> ...
>
> spin_lock(B64);
> spin_unlock(B64);
> </IRQ>
>
>
> mutex_lock(A2);
> mutex_unlock(A2);
>
> complete(&C);
>
>
> That gives:
>
> xhist[ 0] = A1

We have to roll back to here later on irq_exit.

The following ones are for irq context.

> xhist[ 1] = B1
> ...
> xhist[63] = B63
>
> then we wrap and have:
>
> xhist[0] = B64
>
> then we rewind to 1 and invalidate to arrive at:
>

Now, whether xhist[0] has been overwritten or not is important. If so,
xhist[0] should be NULL, _not_ xhist[1], which is an entry for irq context
and so of no interest at all.

> xhist[ 0] = B64
> xhist[ 1] = NULL <-- idx

Therefore, it should be,

xhist[ 0] = NULL <- invalidate, cannot use it any more
--- <- on returning back from irq context, start from here
xhist[ 1] = B1 <-- obsolete history of irq

> xhist[ 2] = B2
> ...
> xhist[63] = B63
>
>
> Then we do A2 and get
>
> xhist[ 0] = B64
> xhist[ 1] = A2 <-- idx
> xhist[ 2] = B2
> ...
> xhist[63] = B63

So invalidating only one is enough.

2017-07-13 10:29:15

by Peter Zijlstra

Subject: Re: [PATCH v7 06/16] lockdep: Detect and handle hist_lock ring buffer overwrite

On Thu, Jul 13, 2017 at 07:09:53PM +0900, Byungchul Park wrote:
> On Thu, Jul 13, 2017 at 11:50:52AM +0200, Peter Zijlstra wrote:
> > wait_for_completion(&C);
> > atomic_inc_return();
> >
> > mutex_lock(A1);
> > mutex_unlock(A1);
> >
> >
> > <IRQ>
> > spin_lock(B1);
> > spin_unlock(B1);
> >
> > ...
> >
> > spin_lock(B64);
> > spin_unlock(B64);
> > </IRQ>
> >
> >
> > mutex_lock(A2);
> > mutex_unlock(A2);
> >
> > complete(&C);
> >
> >
> > That gives:
> >
> > xhist[ 0] = A1
>
> We have to rollback here later on irq_exit.
>
> The followings are ones for irq context.
>
> > xhist[ 1] = B1
> > ...
> > xhist[63] = B63
> >
> > then we wrap and have:
> >
> > xhist[0] = B64
> >
> > then we rewind to 1 and invalidate to arrive at:
> >
>
> Now, whether xhist[0] has been overwritten or not is important. If so,
> xhist[0] should be NULL, _not_ xhist[1], which is an entry for irq context
> and so of no interest at all.
>
> > xhist[ 0] = B64
> > xhist[ 1] = NULL <-- idx
>
> Therefore, it should be,
>
> xhist[ 0] = NULL <- invalidate, cannot use it any more
> --- <- on returning back from irq context, start from here
> xhist[ 1] = B1 <-- obsolete history of irq

Ah, so you rely on the same_context_xhlock()? That doesn't work for
hist (formerly work).

2017-07-13 11:12:28

by Peter Zijlstra

Subject: Re: [PATCH v7 06/16] lockdep: Detect and handle hist_lock ring buffer overwrite

On Thu, Jul 13, 2017 at 12:29:05PM +0200, Peter Zijlstra wrote:
> On Thu, Jul 13, 2017 at 07:09:53PM +0900, Byungchul Park wrote:
> > On Thu, Jul 13, 2017 at 11:50:52AM +0200, Peter Zijlstra wrote:
> > > wait_for_completion(&C);
> > > atomic_inc_return();
> > >
> > > mutex_lock(A1);
> > > mutex_unlock(A1);
> > >
> > >
> > > <IRQ>
> > > spin_lock(B1);
> > > spin_unlock(B1);
> > >
> > > ...
> > >
> > > spin_lock(B64);
> > > spin_unlock(B64);
> > > </IRQ>
> > >
> > >

Also consider the alternative:

<IRQ>
spin_lock(D);
spin_unlock(D);

complete(&C);
</IRQ>

in which case the context test will also not work.

2017-07-13 11:19:17

by Byungchul Park

Subject: Re: [PATCH v7 06/16] lockdep: Detect and handle hist_lock ring buffer overwrite

On Thu, Jul 13, 2017 at 7:29 PM, Peter Zijlstra <[email protected]> wrote:
> On Thu, Jul 13, 2017 at 07:09:53PM +0900, Byungchul Park wrote:
>> On Thu, Jul 13, 2017 at 11:50:52AM +0200, Peter Zijlstra wrote:
>> > wait_for_completion(&C);
>> > atomic_inc_return();
>> >
>> > mutex_lock(A1);
>> > mutex_unlock(A1);
>> >
>> >
>> > <IRQ>
>> > spin_lock(B1);
>> > spin_unlock(B1);
>> >
>> > ...
>> >
>> > spin_lock(B64);
>> > spin_unlock(B64);
>> > </IRQ>
>> >
>> >
>> > mutex_lock(A2);
>> > mutex_unlock(A2);
>> >
>> > complete(&C);
>> >
>> >
>> > That gives:
>> >
>> > xhist[ 0] = A1
>>
>> We have to roll back to here later on irq_exit.
>>
>> The following ones are for irq context.
>>
>> > xhist[ 1] = B1
>> > ...
>> > xhist[63] = B63
>> >
>> > then we wrap and have:
>> >
>> > xhist[0] = B64
>> >
>> > then we rewind to 1 and invalidate to arrive at:
>> >
>>
>> Now, whether xhist[0] has been overwritten or not is important. If so,
>> xhist[0] should be NULL, _not_ xhist[1], which is an entry for irq context
>> and so of no interest at all.
>>
>> > xhist[ 0] = B64
>> > xhist[ 1] = NULL <-- idx
>>
>> Therefore, it should be,
>>
>> xhist[ 0] = NULL <- invalidate, cannot use it any more
>> --- <- on returning back from irq context, start from here
>> xhist[ 1] = B1 <-- obsolete history of irq
>
> Ah, so you rely on the same_context_xhlock()? That doesn't work for
> hist (formerly work).

As I mentioned in the cover letter, I applied the rollback mechanism to
workqueue work as well, so it works even for hist.

--
Thanks,
Byungchul

2017-07-13 11:23:36

by Byungchul Park

Subject: Re: [PATCH v7 06/16] lockdep: Detect and handle hist_lock ring buffer overwrite

On Thu, Jul 13, 2017 at 8:12 PM, Peter Zijlstra <[email protected]> wrote:
> On Thu, Jul 13, 2017 at 12:29:05PM +0200, Peter Zijlstra wrote:
>> On Thu, Jul 13, 2017 at 07:09:53PM +0900, Byungchul Park wrote:
>> > On Thu, Jul 13, 2017 at 11:50:52AM +0200, Peter Zijlstra wrote:
>> > > wait_for_completion(&C);
>> > > atomic_inc_return();
>> > >
>> > > mutex_lock(A1);
>> > > mutex_unlock(A1);
>> > >
>> > >
>> > > <IRQ>
>> > > spin_lock(B1);
>> > > spin_unlock(B1);
>> > >
>> > > ...
>> > >
>> > > spin_lock(B64);
>> > > spin_unlock(B64);
>> > > </IRQ>
>> > >
>> > >
>
> Also consider the alternative:
>
> <IRQ>
> spin_lock(D);
> spin_unlock(D);
>
> complete(&C);
> </IRQ>
>
> in which case the context test will also not work.

Context tests are done on xhlock with the release context, _not_
acquisition context. For example, spin_lock(D) and complete(&C) are
in the same context, so the test would pass in this example.

So it works.


--
Thanks,
Byungchul

2017-07-14 01:42:16

by Byungchul Park

Subject: Re: [PATCH v7 06/16] lockdep: Detect and handle hist_lock ring buffer overwrite

On Thu, Jul 13, 2017 at 08:23:33PM +0900, Byungchul Park wrote:
> On Thu, Jul 13, 2017 at 8:12 PM, Peter Zijlstra <[email protected]> wrote:
> > On Thu, Jul 13, 2017 at 12:29:05PM +0200, Peter Zijlstra wrote:
> >> On Thu, Jul 13, 2017 at 07:09:53PM +0900, Byungchul Park wrote:
> >> > On Thu, Jul 13, 2017 at 11:50:52AM +0200, Peter Zijlstra wrote:
> >> > > wait_for_completion(&C);
> >> > > atomic_inc_return();
> >> > >
> >> > > mutex_lock(A1);
> >> > > mutex_unlock(A1);
> >> > >
> >> > >
> >> > > <IRQ>
> >> > > spin_lock(B1);
> >> > > spin_unlock(B1);
> >> > >
> >> > > ...
> >> > >
> >> > > spin_lock(B64);
> >> > > spin_unlock(B64);
> >> > > </IRQ>
> >> > >
> >> > >
> >
> > Also consider the alternative:
> >
> > <IRQ>
> > spin_lock(D);
> > spin_unlock(D);
> >
> > complete(&C);
> > </IRQ>
> >
> > in which case the context test will also not work.
>
> Context tests are done on xhlock with the release context, _not_
> acquisition context. For example, spin_lock(D) and complete(&C) are
> in the same context, so the test would pass in this example.

Is something wrong?

2017-07-14 06:43:03

by Byungchul Park

Subject: Re: [PATCH v7 06/16] lockdep: Detect and handle hist_lock ring buffer overwrite

On Thu, Jul 13, 2017 at 08:23:33PM +0900, Byungchul Park wrote:
> On Thu, Jul 13, 2017 at 8:12 PM, Peter Zijlstra <[email protected]> wrote:
> > On Thu, Jul 13, 2017 at 12:29:05PM +0200, Peter Zijlstra wrote:
> >> On Thu, Jul 13, 2017 at 07:09:53PM +0900, Byungchul Park wrote:
> >> > On Thu, Jul 13, 2017 at 11:50:52AM +0200, Peter Zijlstra wrote:
> >> > > wait_for_completion(&C);
> >> > > atomic_inc_return();
> >> > >
> >> > > mutex_lock(A1);
> >> > > mutex_unlock(A1);
> >> > >
> >> > >
> >> > > <IRQ>
> >> > > spin_lock(B1);
> >> > > spin_unlock(B1);
> >> > >
> >> > > ...
> >> > >
> >> > > spin_lock(B64);
> >> > > spin_unlock(B64);
> >> > > </IRQ>
> >> > >
> >> > >
> >
> > Also consider the alternative:
> >
> > <IRQ>
> > spin_lock(D);
> > spin_unlock(D);
> >
> > complete(&C);
> > </IRQ>
> >
> > in which case the context test will also not work.
>
> Context tests are done on xhlock with the release context, _not_
> acquisition context. For example, spin_lock(D) and complete(&C) are
> in the same context, so the test would pass in this example.

I think you got confused. Or did I?

2017-07-18 01:26:47

by Byungchul Park

Subject: Re: [PATCH v7 06/16] lockdep: Detect and handle hist_lock ring buffer overwrite

On Thu, Jul 13, 2017 at 10:14:42AM +0200, Peter Zijlstra wrote:
> +static void __crossrelease_end(unsigned int *stamp)
> +{

[snip]

> +
> + /*
> + * If we rewind past the tail; all of history is lost.
> + */
> + if ((current->xhlock_idx_max - *stamp) < MAX_XHLOCKS_NR)
> + return;
> +
> + /*
> + * Invalidate the entire history..
> + */
> + for (i = 0; i < MAX_XHLOCKS_NR; i++)
> + invalidate_xhlock(&xhlock(i));
> +
> + current->xhlock_idx = 0;
> + current->xhlock_idx_hard = 0;
> + current->xhlock_idx_soft = 0;
> + current->xhlock_idx_hist = 0;
> + current->xhlock_idx_max = 0;

I don't yet understand why you introduced this code. Do we need it?

The rest of your suggestion looks very good, though.

> +}

2017-07-21 13:54:28

by Peter Zijlstra

Subject: Re: [PATCH v7 06/16] lockdep: Detect and handle hist_lock ring buffer overwrite

On Fri, Jul 14, 2017 at 03:42:10PM +0900, Byungchul Park wrote:
> On Thu, Jul 13, 2017 at 08:23:33PM +0900, Byungchul Park wrote:
> > On Thu, Jul 13, 2017 at 8:12 PM, Peter Zijlstra <[email protected]> wrote:
> > > On Thu, Jul 13, 2017 at 12:29:05PM +0200, Peter Zijlstra wrote:
> > >> On Thu, Jul 13, 2017 at 07:09:53PM +0900, Byungchul Park wrote:
> > >> > On Thu, Jul 13, 2017 at 11:50:52AM +0200, Peter Zijlstra wrote:
> > >> > > wait_for_completion(&C);
> > >> > > atomic_inc_return();
> > >> > >
> > >> > > mutex_lock(A1);
> > >> > > mutex_unlock(A1);
> > >> > >
> > >> > >
> > >> > > <IRQ>
> > >> > > spin_lock(B1);
> > >> > > spin_unlock(B1);
> > >> > >
> > >> > > ...
> > >> > >
> > >> > > spin_lock(B64);
> > >> > > spin_unlock(B64);
> > >> > > </IRQ>
> > >> > >
> > >> > >
> > >
> > > Also consider the alternative:
> > >
> > > <IRQ>
> > > spin_lock(D);
> > > spin_unlock(D);
> > >
> > > complete(&C);
> > > </IRQ>
> > >
> > > in which case the context test will also not work.
> >
> > Context tests are done on xhlock with the release context, _not_
> > acquisition context. For example, spin_lock(D) and complete(&C) are
> > in the same context, so the test would pass in this example.

The point was, this example will also link C to B*.

(/me copy paste from older email)

That gives:

xhist[ 0] = A1
xhist[ 1] = B1
...
xhist[63] = B63

then we wrap and have:

xhist[0] = B64

then we rewind to 1 and invalidate to arrive at:

xhist[ 0] = B64
xhist[ 1] = NULL <-- idx
xhist[ 2] = B2
...
xhist[63] = B63


Then we do D and get

xhist[ 0] = B64
xhist[ 1] = D <-- idx
xhist[ 2] = B2
...
xhist[63] = B63


And now there is nothing that will invalidate B*, after all, the
gen_id's are all after C's stamp, and the same_context_xhlock() test
will also pass because they're all from IRQ context (albeit not the
same, but it cannot tell).


Does this explain? Or am I still missing something?

2017-07-25 06:30:45

by Byungchul Park

Subject: Re: [PATCH v7 06/16] lockdep: Detect and handle hist_lock ring buffer overwrite

On Fri, Jul 21, 2017 at 03:54:20PM +0200, Peter Zijlstra wrote:
> On Fri, Jul 14, 2017 at 03:42:10PM +0900, Byungchul Park wrote:
> > On Thu, Jul 13, 2017 at 08:23:33PM +0900, Byungchul Park wrote:
> > > On Thu, Jul 13, 2017 at 8:12 PM, Peter Zijlstra <[email protected]> wrote:
> > > > On Thu, Jul 13, 2017 at 12:29:05PM +0200, Peter Zijlstra wrote:
> > > >> On Thu, Jul 13, 2017 at 07:09:53PM +0900, Byungchul Park wrote:
> > > >> > On Thu, Jul 13, 2017 at 11:50:52AM +0200, Peter Zijlstra wrote:
> > > >> > > wait_for_completion(&C);
> > > >> > > atomic_inc_return();
> > > >> > >
> > > >> > > mutex_lock(A1);
> > > >> > > mutex_unlock(A1);
> > > >> > >
> > > >> > >
> > > >> > > <IRQ>
> > > >> > > spin_lock(B1);
> > > >> > > spin_unlock(B1);
> > > >> > >
> > > >> > > ...
> > > >> > >
> > > >> > > spin_lock(B64);
> > > >> > > spin_unlock(B64);
> > > >> > > </IRQ>
> > > >> > >
> > > >> > >
> > > >
> > > > Also consider the alternative:
> > > >
> > > > <IRQ>
> > > > spin_lock(D);
> > > > spin_unlock(D);
> > > >
> > > > complete(&C);
> > > > </IRQ>
> > > >
> > > > in which case the context test will also not work.
> > >
> > > Context tests are done on xhlock with the release context, _not_
> > > acquisition context. For example, spin_lock(D) and complete(&C) are
> > > in the same context, so the test would pass in this example.
>
> The point was, this example will also link C to B*.

_No_, as I already said.

> (/me copy paste from older email)
>
> That gives:
>
> xhist[ 0] = A1
> xhist[ 1] = B1
> ...
> xhist[63] = B63
>
> then we wrap and have:
>
> xhist[0] = B64
>
> then we rewind to 1 and invalidate to arrive at:

We invalidate xhist[_0_], as I already said.

> xhist[ 0] = B64
> xhist[ 1] = NULL <-- idx
> xhist[ 2] = B2
> ...
> xhist[63] = B63
>
>
> Then we do D and get
>
> xhist[ 0] = B64
> xhist[ 1] = D <-- idx
> xhist[ 2] = B2
> ...
> xhist[63] = B63

We should get

xhist[ 0] = NULL
xhist[ 1] = D <-- idx
xhist[ 2] = B2
...
xhist[63] = B63

By the way, did you not get my reply? I gave exactly the same answer.
Perhaps you have not received or read my replies.

> And now there is nothing that will invalidate B*, after all, the
> gen_id's are all after C's stamp, and the same_context_xhlock() test
> will also pass because they're all from IRQ context (albeit not the
> same, but it cannot tell).

It will stop at xhist[0] because it has been invalidated.

> Does this explain? Or am I still missing something?

Could you read the following reply? Not enough?

https://lkml.org/lkml/2017/7/13/214

I am sorry if my English makes it hard for you to understand. But I already
answered everything you asked.

2017-07-25 08:46:18

by Peter Zijlstra

Subject: Re: [PATCH v7 06/16] lockdep: Detect and handle hist_lock ring buffer overwrite

On Tue, Jul 25, 2017 at 03:29:45PM +0900, Byungchul Park wrote:

> _No_, as I already said.
>
> > (/me copy paste from older email)
> >
> > That gives:
> >
> > xhist[ 0] = A1
> > xhist[ 1] = B1
> > ...
> > xhist[63] = B63
> >
> > then we wrap and have:
> >
> > xhist[0] = B64
> >
> > then we rewind to 1 and invalidate to arrive at:
>
> We invalidate xhist[_0_], as I already said.
>
> > xhist[ 0] = B64
> > xhist[ 1] = NULL <-- idx
> > xhist[ 2] = B2
> > ...
> > xhist[63] = B63
> >
> >
> > Then we do D and get
> >
> > xhist[ 0] = B64
> > xhist[ 1] = D <-- idx
> > xhist[ 2] = B2
> > ...
> > xhist[63] = B63
>
> We should get
>
> xhist[ 0] = NULL
> xhist[ 1] = D <-- idx
> xhist[ 2] = B2
> ...
> xhist[63] = B63
>
> By the way, did you not get my reply? I gave exactly the same answer.
> Perhaps you have not received or read my replies.
>
> > And now there is nothing that will invalidate B*, after all, the
> > gen_id's are all after C's stamp, and the same_context_xhlock() test
> > will also pass because they're all from IRQ context (albeit not the
> > same, but it cannot tell).
>
> It will stop at xhist[0] because it has been invalidated.
>
> > Does this explain? Or am I still missing something?
>
> Could you read the following reply? Not enough?
>
> https://lkml.org/lkml/2017/7/13/214
>
> I am sorry if my English makes it hard for you to understand. But I already
> answered everything you asked.

Ah, I think I see. It works because you commit backwards and terminate
on the invalidate.

Yes, I had seen your emails, but the penny hadn't dropped, the light bulb
didn't switch on, etc. Sometimes I'm a little dense and need a little
more help.

Thanks, I'll go look at your latest posting now.