2015-05-07 13:00:06

by Maxime Ripard

Subject: Possible RAID6 regression with ASYNC_TX_DMA enabled in 4.1

Hi,

I'm currently trying to add support for PQ operations to the Marvell
XOR engine driver in dmaengine, so that async_tx can offload these
operations to it.

I'm testing these patches on a RAID6 array of 4 disks.

However, since commit 59fc630b8b5f ("RAID5: batch adjacent full
stripe write"), every write to that array fails with the following
stack trace:

http://code.bulix.org/eh8iew-88342?raw

It seems to be generated by this warning:

http://lxr.free-electrons.com/source/crypto/async_tx/async_tx.c#L173

And indeed, if we dump the status of depend_tx here, it's already been
acked.
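
For reference, the check at that line is the dependency-chain sanity
check in async_tx_submit(); it looks roughly like this (quoting the
4.1 code from memory, so treat it as a sketch rather than an exact
excerpt):

	if (depend_tx) {
		/* sanity check the dependency chain:
		 * 1/ if ack is already set then we cannot be sure
		 * we are referring to the correct operation
		 * 2/ dependencies are 1:1 i.e. two transactions can
		 * not depend on the same parent
		 */
		BUG_ON(async_tx_test_ack(depend_tx) || txd_next(depend_tx) ||
		       txd_parent(tx));
		/* ... dependency handling follows ... */
	}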

This doesn't happen if ASYNC_TX_DMA is disabled, i.e. when the
software implementation is used instead of our XOR engine. It also
doesn't happen on any commit prior to the one mentioned above, with
the exact same changes applied. These changes are meant to be
contributed, so I can definitely push them somewhere if needed.

I don't really know where to look, though. The change causing this is
probably the one in ops_run_reconstruct6, but I'm not sure that
partially reverting it on its own would work with regard to the rest
of the patch.

Maxime

--
Maxime Ripard, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com



2015-05-07 14:49:28

by Markus Stockhausen

Subject: Re: Possible RAID6 regression with ASYNC_TX_DMA enabled in 4.1

Hi Maxime,

> From: [email protected] [[email protected]] on behalf of Maxime Ripard [[email protected]]
> Sent: Thursday, 7 May 2015 14:57
> To: Neil Brown; Shaohua Li
> Cc: [email protected]; [email protected]; Lior Amsalem; Thomas Petazzoni; Gregory Clement; Boris Brezillon
> Subject: Possible RAID6 regression with ASYNC_TX_DMA enabled in 4.1
>
> Hi,
>
> I'm currently trying to add support for PQ operations to the Marvell
> XOR engine driver in dmaengine, so that async_tx can offload these
> operations to it.
>
> I'm testing these patches on a RAID6 array of 4 disks.
>
> However, since commit 59fc630b8b5f ("RAID5: batch adjacent full
> stripe write"), every write to that array fails with the following
> stack trace:
>
> http://code.bulix.org/eh8iew-88342?raw

I don't know whether it's related, but I added support for RAID6
read-modify-write to the software XOR path with some patches. The
following commit touches some lines in async_pq.c:

https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=584acdd49cd2472ca0f5a06adbe979db82d0b4af

I introduced a new flag, ASYNC_TX_PQ_XOR_DST, that notifies the async
layer that we want to do an XOR syndrome operation instead of a full
calculation. It enforces the software path, because I guessed that
hardware does not support that case. Without hardware to test
against, I might have missed some checks in the async layer.

In the upper layer, ops_run_reconstruct6 sets the flag if we have
determined that rmw is faster than rcw.
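
Roughly, the hunk in ops_run_reconstruct6 chooses the submission
flags like this (a paraphrased sketch of the patch, not an exact
quote):

	if (sh->reconstruct_state == reconstruct_state_prexor_drain_run) {
		/* rmw: xor old data and old parity into the new parity */
		synflags = SYNDROME_SRC_WRITTEN;
		txflags = ASYNC_TX_ACK | ASYNC_TX_PQ_XOR_DST;
	} else {
		/* rcw: compute the parity from all data blocks */
		synflags = SYNDROME_SRC_ALL;
		txflags = ASYNC_TX_ACK;
	}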

Can you check whether rmw_level=0 fixes the issue? See:

https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d06f191f8ecaef4d524e765fdb455f96392fbd42
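
If I remember right, the level can also be changed at runtime by
writing 0 to the rmw_level attribute in the array's md sysfs
directory (/sys/block/mdX/md/rmw_level).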

> It seems to be generated by this warning:
>
> http://lxr.free-electrons.com/source/crypto/async_tx/async_tx.c#L173
>
> And indeed, if we dump the status of depend_tx here, it's already been
> acked.
>
> This doesn't happen if ASYNC_TX_DMA is disabled, i.e. when the
> software implementation is used instead of our XOR engine. It also
> doesn't happen on any commit prior to the one mentioned above, with
> the exact same changes applied. These changes are meant to be
> contributed, so I can definitely push them somewhere if needed.
>
> I don't really know where to look, though. The change causing this
> is probably the one in ops_run_reconstruct6, but I'm not sure that
> partially reverting it on its own would work with regard to the rest
> of the patch.
>
> Maxime

Markus



2015-05-11 16:12:55

by Shaohua Li

Subject: Re: Possible RAID6 regression with ASYNC_TX_DMA enabled in 4.1

On Thu, May 07, 2015 at 02:57:02PM +0200, Maxime Ripard wrote:
> Hi,
>
> I'm currently trying to add support for PQ operations to the Marvell
> XOR engine driver in dmaengine, so that async_tx can offload these
> operations to it.
>
> I'm testing these patches on a RAID6 array of 4 disks.
>
> However, since commit 59fc630b8b5f ("RAID5: batch adjacent full
> stripe write"), every write to that array fails with the following
> stack trace:
>
> http://code.bulix.org/eh8iew-88342?raw
>
> It seems to be generated by this warning:
>
> http://lxr.free-electrons.com/source/crypto/async_tx/async_tx.c#L173
>
> And indeed, if we dump the status of depend_tx here, it's already been
> acked.
>
> This doesn't happen if ASYNC_TX_DMA is disabled, i.e. when the
> software implementation is used instead of our XOR engine. It also
> doesn't happen on any commit prior to the one mentioned above, with
> the exact same changes applied. These changes are meant to be
> contributed, so I can definitely push them somewhere if needed.
>
> I don't really know where to look, though. The change causing this
> is probably the one in ops_run_reconstruct6, but I'm not sure that
> partially reverting it on its own would work with regard to the rest
> of the patch.

I don't have a machine with a dmaengine, so it's likely the error is
on that side. Could you please make stripe_can_batch() always return
false and check whether the error disappears? That should narrow down
whether this is related to the batching.
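
Something like this illustrative debug hack in drivers/md/raid5.c
would do (not for merging, just to rule batching in or out):

static bool stripe_can_batch(struct stripe_head *sh)
{
	/* debug: never batch stripe heads */
	return false;
}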

Thanks,
Shaohua

2015-05-11 09:15:07

by Maxime Ripard

Subject: Re: Possible RAID6 regression with ASYNC_TX_DMA enabled in 4.1

Hi Markus,

On Thu, May 07, 2015 at 02:39:07PM +0000, Markus Stockhausen wrote:
> Hi Maxime,
>
> > From: [email protected] [[email protected]] on behalf of Maxime Ripard [[email protected]]
> > Sent: Thursday, 7 May 2015 14:57
> > To: Neil Brown; Shaohua Li
> > Cc: [email protected]; [email protected]; Lior Amsalem; Thomas Petazzoni; Gregory Clement; Boris Brezillon
> > Subject: Possible RAID6 regression with ASYNC_TX_DMA enabled in 4.1
> >
> > Hi,
> >
> > I'm currently trying to add support for PQ operations to the
> > Marvell XOR engine driver in dmaengine, so that async_tx can
> > offload these operations to it.
> >
> > I'm testing these patches on a RAID6 array of 4 disks.
> >
> > However, since commit 59fc630b8b5f ("RAID5: batch adjacent full
> > stripe write"), every write to that array fails with the following
> > stack trace:
> >
> > http://code.bulix.org/eh8iew-88342?raw
>
> I don't know whether it's related, but I added support for RAID6
> read-modify-write to the software XOR path with some patches. The
> following commit touches some lines in async_pq.c:
>
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=584acdd49cd2472ca0f5a06adbe979db82d0b4af
>
> I introduced a new flag, ASYNC_TX_PQ_XOR_DST, that notifies the
> async layer that we want to do an XOR syndrome operation instead of
> a full calculation. It enforces the software path, because I guessed
> that hardware does not support that case. Without hardware to test
> against, I might have missed some checks in the async layer.
>
> In the upper layer, ops_run_reconstruct6 sets the flag if we have
> determined that rmw is faster than rcw.
>
> Can you check whether rmw_level=0 fixes the issue? See:
>
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d06f191f8ecaef4d524e765fdb455f96392fbd42

I just gave this a try, and it doesn't fix anything.

One thing I forgot to mention is that our hardware doesn't support
the PQ multiplications and product sums, so one of our patches adds a
new ASYNC_TX flag that lets us identify such transfers and bail out
of them.

The patch is here:
https://github.com/MISL-EBU-System-SW/mainline-public/commit/9964fe4a79da10162f83bd527b3fe44da60d7e0f
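
The idea is roughly the following (a simplified sketch with a made-up
flag name, not the actual code from the patch above):

	/* if the submission needs GF multiplications our engine cannot
	 * do, pretend no channel was found, so that async_gen_syndrome()
	 * falls back to the synchronous software implementation */
	if (submit->flags & ASYNC_TX_PQ_NO_MULT)	/* hypothetical flag */
		device = NULL;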

There might be some interaction between your patches and this one,
even though the async_tx code itself looks untouched by them.

Thanks!
Maxime

--
Maxime Ripard, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com



2015-05-12 20:55:15

by Shaohua Li

Subject: Re: Possible RAID6 regression with ASYNC_TX_DMA enabled in 4.1

On Tue, May 12, 2015 at 02:55:46PM +0200, Maxime Ripard wrote:
> Hi Shaohua,
>
> On Sun, May 10, 2015 at 11:26:38PM -0700, Shaohua Li wrote:
> > On Thu, May 07, 2015 at 02:57:02PM +0200, Maxime Ripard wrote:
> > > Hi,
> > >
> > > I'm currently trying to add support for PQ operations to the
> > > Marvell XOR engine driver in dmaengine, so that async_tx can
> > > offload these operations to it.
> > >
> > > I'm testing these patches on a RAID6 array of 4 disks.
> > >
> > > However, since commit 59fc630b8b5f ("RAID5: batch adjacent full
> > > stripe write"), every write to that array fails with the
> > > following stack trace:
> > >
> > > http://code.bulix.org/eh8iew-88342?raw
> > >
> > > It seems to be generated by this warning:
> > >
> > > http://lxr.free-electrons.com/source/crypto/async_tx/async_tx.c#L173
> > >
> > > And indeed, if we dump the status of depend_tx here, it's already been
> > > acked.
> > >
> > > This doesn't happen if ASYNC_TX_DMA is disabled, i.e. when the
> > > software implementation is used instead of our XOR engine. It
> > > also doesn't happen on any commit prior to the one mentioned
> > > above, with the exact same changes applied. These changes are
> > > meant to be contributed, so I can definitely push them somewhere
> > > if needed.
> > >
> > > I don't really know where to look, though. The change causing
> > > this is probably the one in ops_run_reconstruct6, but I'm not
> > > sure that partially reverting it on its own would work with
> > > regard to the rest of the patch.
> >
> > I don't have a machine with a dmaengine, so it's likely the error
> > is on that side. Could you please make stripe_can_batch() always
> > return false and check whether the error disappears? That should
> > narrow down whether this is related to the batching.
>
> The error indeed disappears if stripe_can_batch() always returns false.

Does this fix it?


diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 77dfd72..5e820fc 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -1825,7 +1825,7 @@ ops_run_reconstruct6(struct stripe_head *sh, struct raid5_percpu *percpu,
 	} else
 		init_async_submit(&submit, 0, tx, NULL, NULL,
 				  to_addr_conv(sh, percpu, j));
-	async_gen_syndrome(blocks, 0, count+2, STRIPE_SIZE, &submit);
+	tx = async_gen_syndrome(blocks, 0, count+2, STRIPE_SIZE, &submit);
 	if (!last_stripe) {
 		j++;
 		sh = list_first_entry(&sh->batch_list, struct stripe_head,
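
If I read the batching code right, each stripe in the batch chains
its syndrome operation to the previous one through tx; dropping the
return value makes the next iteration depend on a stale descriptor
that has already been acked, which would match the depend_tx state
you dumped.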

2015-05-12 13:00:11

by Maxime Ripard

Subject: Re: Possible RAID6 regression with ASYNC_TX_DMA enabled in 4.1

Hi Shaohua,

On Sun, May 10, 2015 at 11:26:38PM -0700, Shaohua Li wrote:
> On Thu, May 07, 2015 at 02:57:02PM +0200, Maxime Ripard wrote:
> > Hi,
> >
> > I'm currently trying to add support for PQ operations to the
> > Marvell XOR engine driver in dmaengine, so that async_tx can
> > offload these operations to it.
> >
> > I'm testing these patches on a RAID6 array of 4 disks.
> >
> > However, since commit 59fc630b8b5f ("RAID5: batch adjacent full
> > stripe write"), every write to that array fails with the following
> > stack trace:
> >
> > http://code.bulix.org/eh8iew-88342?raw
> >
> > It seems to be generated by this warning:
> >
> > http://lxr.free-electrons.com/source/crypto/async_tx/async_tx.c#L173
> >
> > And indeed, if we dump the status of depend_tx here, it's already been
> > acked.
> >
> > This doesn't happen if ASYNC_TX_DMA is disabled, i.e. when the
> > software implementation is used instead of our XOR engine. It also
> > doesn't happen on any commit prior to the one mentioned above,
> > with the exact same changes applied. These changes are meant to be
> > contributed, so I can definitely push them somewhere if needed.
> >
> > I don't really know where to look, though. The change causing
> > this is probably the one in ops_run_reconstruct6, but I'm not sure
> > that partially reverting it on its own would work with regard to
> > the rest of the patch.
>
> I don't have a machine with a dmaengine, so it's likely the error is
> on that side. Could you please make stripe_can_batch() always return
> false and check whether the error disappears? That should narrow
> down whether this is related to the batching.

The error indeed disappears if stripe_can_batch() always returns false.

Maxime

--
Maxime Ripard, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com



2015-05-13 07:50:07

by Maxime Ripard

Subject: Re: Possible RAID6 regression with ASYNC_TX_DMA enabled in 4.1

Hi,

On Tue, May 12, 2015 at 03:59:07AM -0700, Shaohua Li wrote:
> On Tue, May 12, 2015 at 02:55:46PM +0200, Maxime Ripard wrote:
> > Hi Shaohua,
> >
> > On Sun, May 10, 2015 at 11:26:38PM -0700, Shaohua Li wrote:
> > > On Thu, May 07, 2015 at 02:57:02PM +0200, Maxime Ripard wrote:
> > > > Hi,
> > > >
> > > > I'm currently trying to add support for PQ operations to the
> > > > Marvell XOR engine driver in dmaengine, so that async_tx can
> > > > offload these operations to it.
> > > >
> > > > I'm testing these patches on a RAID6 array of 4 disks.
> > > >
> > > > However, since commit 59fc630b8b5f ("RAID5: batch adjacent
> > > > full stripe write"), every write to that array fails with the
> > > > following stack trace:
> > > >
> > > > http://code.bulix.org/eh8iew-88342?raw
> > > >
> > > > It seems to be generated by this warning:
> > > >
> > > > http://lxr.free-electrons.com/source/crypto/async_tx/async_tx.c#L173
> > > >
> > > > And indeed, if we dump the status of depend_tx here, it's already been
> > > > acked.
> > > >
> > > > This doesn't happen if ASYNC_TX_DMA is disabled, i.e. when
> > > > the software implementation is used instead of our XOR engine.
> > > > It also doesn't happen on any commit prior to the one
> > > > mentioned above, with the exact same changes applied. These
> > > > changes are meant to be contributed, so I can definitely push
> > > > them somewhere if needed.
> > > >
> > > > I don't really know where to look, though. The change causing
> > > > this is probably the one in ops_run_reconstruct6, but I'm not
> > > > sure that partially reverting it on its own would work with
> > > > regard to the rest of the patch.
> > >
> > > I don't have a machine with a dmaengine, so it's likely the
> > > error is on that side. Could you please make stripe_can_batch()
> > > always return false and check whether the error disappears? That
> > > should narrow down whether this is related to the batching.
> >
> > The error indeed disappears if stripe_can_batch() always returns false.
>
> Does this fix it?
>
>
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index 77dfd72..5e820fc 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -1825,7 +1825,7 @@ ops_run_reconstruct6(struct stripe_head *sh, struct raid5_percpu *percpu,
>  	} else
>  		init_async_submit(&submit, 0, tx, NULL, NULL,
>  				  to_addr_conv(sh, percpu, j));
> -	async_gen_syndrome(blocks, 0, count+2, STRIPE_SIZE, &submit);
> +	tx = async_gen_syndrome(blocks, 0, count+2, STRIPE_SIZE, &submit);
>  	if (!last_stripe) {
>  		j++;
>  		sh = list_first_entry(&sh->batch_list, struct stripe_head,

It does, thanks!

Feel free to add my Tested-by if you submit this patch.

Maxime

--
Maxime Ripard, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com

