2013-09-26 13:45:09

by Mischa Jonker

Subject: [PATCH 1/2] ARC: Handle zero-overhead-loop in unaligned access handler

If a load or store is the last instruction in a zero-overhead-loop, and
it's misaligned, the loop would execute only once: the fixup handler
emulates the access and advances the PC past the instruction in
software, so the hardware branch back to the start of the loop never
happens.

This fixes that problem by emulating the loop-back as well.

Signed-off-by: Mischa Jonker <[email protected]>
---
 arch/arc/kernel/unaligned.c |    6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/arc/kernel/unaligned.c b/arch/arc/kernel/unaligned.c
index c0f832f..00ad070 100644
--- a/arch/arc/kernel/unaligned.c
+++ b/arch/arc/kernel/unaligned.c
@@ -233,6 +233,12 @@ int misaligned_fixup(unsigned long address, struct pt_regs *regs,
 		regs->status32 &= ~STATUS_DE_MASK;
 	} else {
 		regs->ret += state.instr_len;
+
+		/* handle zero-overhead-loop */
+		if ((regs->ret == regs->lp_end) && (regs->lp_count)) {
+			regs->ret = regs->lp_start;
+			regs->lp_count--;
+		}
 	}
 
 	return 0;
--
1.7.9.5
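
For background on the hunk above: the handler emulates the faulting
load/store and advances regs->ret in software, so the hardware's
end-of-loop branch never gets a chance to fire; the added lines
reproduce it by redirecting the updated PC from lp_end back to lp_start
and decrementing lp_count. A minimal sketch of those semantics, with
zol_next_pc() as a hypothetical helper named only for illustration (it
is not part of the patch or of the kernel):

static unsigned long zol_next_pc(unsigned long next_pc,
				 unsigned long lp_start,
				 unsigned long lp_end,
				 unsigned long *lp_count)
{
	/* next_pc has just been advanced past the emulated instruction */
	if (next_pc == lp_end && *lp_count) {
		(*lp_count)--;		/* one more iteration consumed */
		return lp_start;	/* branch back to the top of the loop */
	}
	return next_pc;			/* no active loop here, fall through */
}

With such a helper the added code would amount to
regs->ret = zol_next_pc(regs->ret, regs->lp_start, regs->lp_end,
&regs->lp_count);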


2013-09-26 14:33:10

by Vineet Gupta

Subject: Re: [PATCH 1/2] ARC: Handle zero-overhead-loop in unaligned access handler

On 09/26/2013 07:34 PM, Mischa Jonker wrote:
> If a load or store is the last instruction in a zero-overhead-loop, and
> it's misaligned, the loop would execute only once: the fixup handler
> emulates the access and advances the PC past the instruction in
> software, so the hardware branch back to the start of the loop never
> happens.
>
> This fixes that problem by emulating the loop-back as well.
>
> Signed-off-by: Mischa Jonker <[email protected]>

Applied to for-curr for 3.12-rcX

Thx,
-Vineet

> ---
> arch/arc/kernel/unaligned.c |    6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/arch/arc/kernel/unaligned.c b/arch/arc/kernel/unaligned.c
> index c0f832f..00ad070 100644
> --- a/arch/arc/kernel/unaligned.c
> +++ b/arch/arc/kernel/unaligned.c
> @@ -233,6 +233,12 @@ int misaligned_fixup(unsigned long address, struct pt_regs *regs,
>  		regs->status32 &= ~STATUS_DE_MASK;
>  	} else {
>  		regs->ret += state.instr_len;
> +
> +		/* handle zero-overhead-loop */
> +		if ((regs->ret == regs->lp_end) && (regs->lp_count)) {
> +			regs->ret = regs->lp_start;
> +			regs->lp_count--;
> +		}
>  	}
> 
>  	return 0;