2023-04-12 04:20:35

by Kunkun Jiang

Subject: [PATCH] irqchipi/gic-v4: Ensure accessing the correct RD when and writing INVLPIR

commit f3a059219bc7 ("irqchip/gic-v4.1: Ensure mutual exclusion between
vPE affinity change and RD access") tried to address the race
between the RD accesses and the vPE affinity change, but somehow
forgot to take GICR_INVLPIR into account. Let's take the vpe_lock
before evaluating vpe->col_idx to fix it.

Fixes: f3a059219bc7 ("irqchip/gic-v4.1: Ensure mutual exclusion between vPE affinity change and RD access")
Signed-off-by: Kunkun Jiang <[email protected]>
Signed-off-by: Xiang Chen <[email protected]>
Signed-off-by: Nianyao Tang <[email protected]>
---
drivers/irqchip/irq-gic-v3-its.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 586271b8aa39..041f06922587 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -3943,13 +3943,17 @@ static void its_vpe_send_inv(struct irq_data *d)

if (gic_rdists->has_direct_lpi) {
void __iomem *rdbase;
+ unsigned long flags;
+ int cpu;

/* Target the redistributor this VPE is currently known on */
- raw_spin_lock(&gic_data_rdist_cpu(vpe->col_idx)->rd_lock);
- rdbase = per_cpu_ptr(gic_rdists->rdist, vpe->col_idx)->rd_base;
+ cpu = vpe_to_cpuid_lock(vpe, &flags);
+ raw_spin_lock(&gic_data_rdist_cpu(cpu)->rd_lock);
+ rdbase = per_cpu_ptr(gic_rdists->rdist, cpu)->rd_base;
gic_write_lpir(d->parent_data->hwirq, rdbase + GICR_INVLPIR);
wait_for_syncr(rdbase);
- raw_spin_unlock(&gic_data_rdist_cpu(vpe->col_idx)->rd_lock);
+ raw_spin_unlock(&gic_data_rdist_cpu(cpu)->rd_lock);
+ vpe_to_cpuid_unlock(vpe, flags);
} else {
its_vpe_send_cmd(vpe, its_send_inv);
}
--
2.27.0


2023-04-12 10:01:39

by Marc Zyngier

Subject: Re: [PATCH] irqchipi/gic-v4: Ensure accessing the correct RD when and writing INVLPIR

On Wed, 12 Apr 2023 05:15:10 +0100,
Kunkun Jiang <[email protected]> wrote:
>
> commit f3a059219bc7 ("irqchip/gic-v4.1: Ensure mutual exclusion between
> vPE affinity change and RD access") tried to address the race
> between the RD accesses and the vPE affinity change, but somehow
> forgot to take GICR_INVLPIR into account. Let's take the vpe_lock
> before evaluating vpe->col_idx to fix it.
>
> Fixes: f3a059219bc7 ("irqchip/gic-v4.1: Ensure mutual exclusion between vPE affinity change and RD access")
> Signed-off-by: Kunkun Jiang <[email protected]>
> Signed-off-by: Xiang Chen <[email protected]>
> Signed-off-by: Nianyao Tang <[email protected]>

Yup, nice catch. A few remarks though:

- the subject looks odd: there is a spurious 'and' there, and it
doesn't say this is all about VPE doorbell invalidation (the code
that deals with direct LPI is otherwise fine)

- the SoB chain is also odd. You should be last in the chain, and all
the others should have Co-developed-by tags in addition to their SoB,
unless you wanted another tag (expected ordering sketched below)

- I'm curious about how you triggered the issue. Could you please
elaborate on that?
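
For the SoB point, the expected ordering would be something like this
(addresses elided, purely as an illustration):

  Co-developed-by: Xiang Chen <...>
  Signed-off-by: Xiang Chen <...>
  Co-developed-by: Nianyao Tang <...>
  Signed-off-by: Nianyao Tang <...>
  Signed-off-by: Kunkun Jiang <...>

i.e. each Co-developed-by immediately followed by that person's SoB,
with the submitter's SoB last.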

Finally, I think we can fix it in a better way, see below:

> ---
> drivers/irqchip/irq-gic-v3-its.c | 10 +++++++---
> 1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
> index 586271b8aa39..041f06922587 100644
> --- a/drivers/irqchip/irq-gic-v3-its.c
> +++ b/drivers/irqchip/irq-gic-v3-its.c
> @@ -3943,13 +3943,17 @@ static void its_vpe_send_inv(struct irq_data *d)
>
> if (gic_rdists->has_direct_lpi) {
> void __iomem *rdbase;
> + unsigned long flags;
> + int cpu;
>
> /* Target the redistributor this VPE is currently known on */
> - raw_spin_lock(&gic_data_rdist_cpu(vpe->col_idx)->rd_lock);
> - rdbase = per_cpu_ptr(gic_rdists->rdist, vpe->col_idx)->rd_base;
> + cpu = vpe_to_cpuid_lock(vpe, &flags);
> + raw_spin_lock(&gic_data_rdist_cpu(cpu)->rd_lock);
> + rdbase = per_cpu_ptr(gic_rdists->rdist, cpu)->rd_base;
> gic_write_lpir(d->parent_data->hwirq, rdbase + GICR_INVLPIR);
> wait_for_syncr(rdbase);
> - raw_spin_unlock(&gic_data_rdist_cpu(vpe->col_idx)->rd_lock);
> + raw_spin_unlock(&gic_data_rdist_cpu(cpu)->rd_lock);
> + vpe_to_cpuid_unlock(vpe, flags);
> } else {
> its_vpe_send_cmd(vpe, its_send_inv);
> }

The main reason this bug crept in is that we have some pretty silly
code duplication going on.

Wouldn't it be nice if irq_to_cpuid() could work out whether it is
dealing with an LPI or a VLPI like it does today, but also directly
with a VPE? We could then use the same code as direct_lpi_inv(). I
came up with the hack below, which is totally untested as I don't
have access to GICv4.1 HW.

Could you give it a spin?

Thanks,

M.

diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 586271b8aa39..cfb8be3e17d6 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -271,13 +271,24 @@ static void vpe_to_cpuid_unlock(struct its_vpe *vpe, unsigned long flags)
raw_spin_unlock_irqrestore(&vpe->vpe_lock, flags);
}

+static struct irq_chip its_vpe_irq_chip;
+
static int irq_to_cpuid_lock(struct irq_data *d, unsigned long *flags)
{
- struct its_vlpi_map *map = get_vlpi_map(d);
+ struct its_vpe *vpe = NULL;
int cpu;

- if (map) {
- cpu = vpe_to_cpuid_lock(map->vpe, flags);
+ if (d->chip == &its_vpe_irq_chip)
+ vpe = irq_data_get_irq_chip_data(d);
+
+ if (!vpe) {
+ struct its_vlpi_map *map = get_vlpi_map(d);
+ if (map)
+ vpe = map->vpe;
+ }
+
+ if (vpe) {
+ cpu = vpe_to_cpuid_lock(vpe, flags);
} else {
/* Physical LPIs are already locked via the irq_desc lock */
struct its_device *its_dev = irq_data_get_irq_chip_data(d);
@@ -291,9 +302,18 @@ static int irq_to_cpuid_lock(struct irq_data *d, unsigned long *flags)

static void irq_to_cpuid_unlock(struct irq_data *d, unsigned long flags)
{
- struct its_vlpi_map *map = get_vlpi_map(d);
+ struct its_vpe *vpe = NULL;
+
+ if (d->chip == &its_vpe_irq_chip)
+ vpe = irq_data_get_irq_chip_data(d);
+
+ if (vpe) {
+ struct its_vlpi_map *map = get_vlpi_map(d);
+ if (map)
+ vpe = map->vpe;
+ }

- if (map)
+ if (vpe)
vpe_to_cpuid_unlock(map->vpe, flags);
}

@@ -1431,14 +1451,29 @@ static void wait_for_syncr(void __iomem *rdbase)
cpu_relax();
}

-static void direct_lpi_inv(struct irq_data *d)
+static void __direct_lpi_inv(struct irq_data *d, u64 val)
{
- struct its_vlpi_map *map = get_vlpi_map(d);
void __iomem *rdbase;
unsigned long flags;
- u64 val;
int cpu;

+ /* Target the redistributor this LPI is currently routed to */
+ cpu = irq_to_cpuid_lock(d, &flags);
+ raw_spin_lock(&gic_data_rdist_cpu(cpu)->rd_lock);
+
+ rdbase = per_cpu_ptr(gic_rdists->rdist, cpu)->rd_base;
+ gic_write_lpir(val, rdbase + GICR_INVLPIR);
+ wait_for_syncr(rdbase);
+
+ raw_spin_unlock(&gic_data_rdist_cpu(cpu)->rd_lock);
+ irq_to_cpuid_unlock(d, flags);
+}
+
+static void direct_lpi_inv(struct irq_data *d)
+{
+ struct its_vlpi_map *map = get_vlpi_map(d);
+ u64 val;
+
if (map) {
struct its_device *its_dev = irq_data_get_irq_chip_data(d);

@@ -1451,15 +1486,7 @@ static void direct_lpi_inv(struct irq_data *d)
val = d->hwirq;
}

- /* Target the redistributor this LPI is currently routed to */
- cpu = irq_to_cpuid_lock(d, &flags);
- raw_spin_lock(&gic_data_rdist_cpu(cpu)->rd_lock);
- rdbase = per_cpu_ptr(gic_rdists->rdist, cpu)->rd_base;
- gic_write_lpir(val, rdbase + GICR_INVLPIR);
-
- wait_for_syncr(rdbase);
- raw_spin_unlock(&gic_data_rdist_cpu(cpu)->rd_lock);
- irq_to_cpuid_unlock(d, flags);
+ __direct_lpi_inv(d, val);
}

static void lpi_update_config(struct irq_data *d, u8 clr, u8 set)
@@ -3941,18 +3968,10 @@ static void its_vpe_send_inv(struct irq_data *d)
{
struct its_vpe *vpe = irq_data_get_irq_chip_data(d);

- if (gic_rdists->has_direct_lpi) {
- void __iomem *rdbase;
-
- /* Target the redistributor this VPE is currently known on */
- raw_spin_lock(&gic_data_rdist_cpu(vpe->col_idx)->rd_lock);
- rdbase = per_cpu_ptr(gic_rdists->rdist, vpe->col_idx)->rd_base;
- gic_write_lpir(d->parent_data->hwirq, rdbase + GICR_INVLPIR);
- wait_for_syncr(rdbase);
- raw_spin_unlock(&gic_data_rdist_cpu(vpe->col_idx)->rd_lock);
- } else {
+ if (gic_rdists->has_direct_lpi)
+ __direct_lpi_inv(d, d->parent_data->hwirq);
+ else
its_vpe_send_cmd(vpe, its_send_inv);
- }
}

static void its_vpe_mask_irq(struct irq_data *d)


--
Without deviation from the norm, progress is not possible.

2023-04-13 07:40:48

by Marc Zyngier

Subject: Re: [PATCH] irqchipi/gic-v4: Ensure accessing the correct RD when and writing INVLPIR

On Thu, 13 Apr 2023 04:57:17 +0100,
Kunkun Jiang <[email protected]> wrote:
>
> Hi Marc,
>
> On 2023/4/12 17:42, Marc Zyngier wrote:
> > On Wed, 12 Apr 2023 05:15:10 +0100,
> > Kunkun Jiang <[email protected]> wrote:
> >> commit f3a059219bc7 ("irqchip/gic-v4.1: Ensure mutual exclusion between
> >> vPE affinity change and RD access") tried to address the race
> >> between the RD accesses and the vPE affinity change, but somehow
> >> forgot to take GICR_INVLPIR into account. Let's take the vpe_lock
> >> before evaluating vpe->col_idx to fix it.
> >>
> >> Fixes: f3a059219bc7 ("irqchip/gic-v4.1: Ensure mutual exclusion between vPE affinity change and RD access")
> >> Signed-off-by: Kunkun Jiang <[email protected]>
> >> Signed-off-by: Xiang Chen <[email protected]>
> >> Signed-off-by: Nianyao Tang <[email protected]>
> > Yup, nice catch. A few remarks though:
> >
> > - the subject looks odd: there is a spurious 'and' there, and it
> > doesn't say this is all about VPE doorbell invalidation (the code
> > that deals with direct LPI is otherwise fine)
> >
> > - the SoB chain is also odd. You should be last in the chain, and all
> > the others should have Co-developed-by tags in addition to their SoB,
> > unless you wanted another tag
> Thanks for your guidance.
> >
> > - I'm curious about how you triggered the issue. Could you please
> > elaborate on that?
>
> I will detail it here:
> 1. a multi-core VM with a pass-through device, enable GICv4
> 2. insmod the device driver in guest
> then, the following two call traces are displayed occasionally:
>
> > [ 1055.683978] BUG: spinlock already unlocked on CPU#1,

[...]

> After analysis, I found that when insmodding the device driver in the guest,
> its_map_vm will be executed on the host and map all vPEs to the
> first possible CPU (pCPU0). When dealing with WFI, its_vpe_send_inv
> will access the RD, obtaining and releasing the 'rd_lock' through 'vpe->col_idx'.

[...]

> So something like this happens:
> After obtaining the 'rd_lock' through 'vpe->col_idx', its_map_vm
> modifies 'vpe->col_idx'.
> This is also the cause of the preceding call trace.

Right. I figured this out, but the key information is that you get it
at boot time due to moving the VPEs away from CPU0.
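
Just to spell the failure mode out for the record: everything boils down
to indexing the rd_lock by a value that can change between the lock and
the unlock. A rough userspace analogy (not the actual kernel code; the
names only loosely mirror the ITS ones):

#include <pthread.h>

static pthread_mutex_t rd_lock[2] = {
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
};
static volatile int col_idx;		/* stands in for vpe->col_idx */

/* mimics the pre-patch its_vpe_send_inv(): the index is evaluated twice */
static void *vpe_send_inv(void *unused)
{
	pthread_mutex_lock(&rd_lock[col_idx]);
	/* ... write GICR_INVLPIR, wait_for_syncr() ... */
	pthread_mutex_unlock(&rd_lock[col_idx]);	/* may be the other lock */
	return NULL;
}

/* mimics its_map_vm() moving the vPE to another CPU */
static void *map_vm(void *unused)
{
	col_idx = 1;
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, vpe_send_inv, NULL);
	pthread_create(&b, NULL, map_vm, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

Taking vpe_to_cpuid_lock() first, as your patch does, pins the index
for the whole critical section.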

> There's a little question I haven't figured out here. Why map all vPEs
> to the first possible CPU in its_map_vm? In addition, vpe_lock is not
> used here. Will concurrency problems occur in the future?

Where would you like to map them? Any physical CPU would do the
trick. We could take the current CPU, but the fact is that we don't
always know where the VPEs are running.

As for holding vpe_lock, I don't think we need to. This only happens
on the first VLPI being forwarded, so there should be no real GIC
context for anything.

But I must admit that I haven't looked at this code in a long time,
and have no way to test it anymore.

>
> >
> > Finally, I think we can fix it in a better way, see below:
> >
> >> ---
> >> drivers/irqchip/irq-gic-v3-its.c | 10 +++++++---
> >> 1 file changed, 7 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
> >> index 586271b8aa39..041f06922587 100644
> >> --- a/drivers/irqchip/irq-gic-v3-its.c
> >> +++ b/drivers/irqchip/irq-gic-v3-its.c
> >> @@ -3943,13 +3943,17 @@ static void its_vpe_send_inv(struct irq_data *d)
> >> if (gic_rdists->has_direct_lpi) {
> >> void __iomem *rdbase;
> >> + unsigned long flags;
> >> + int cpu;
> >> /* Target the redistributor this VPE is currently
> >> known on */
> >> - raw_spin_lock(&gic_data_rdist_cpu(vpe->col_idx)->rd_lock);
> >> - rdbase = per_cpu_ptr(gic_rdists->rdist, vpe->col_idx)->rd_base;
> >> + cpu = vpe_to_cpuid_lock(vpe, &flags);
> >> + raw_spin_lock(&gic_data_rdist_cpu(cpu)->rd_lock);
> >> + rdbase = per_cpu_ptr(gic_rdists->rdist, cpu)->rd_base;
> >> gic_write_lpir(d->parent_data->hwirq, rdbase + GICR_INVLPIR);
> >> wait_for_syncr(rdbase);
> >> - raw_spin_unlock(&gic_data_rdist_cpu(vpe->col_idx)->rd_lock);
> >> + raw_spin_unlock(&gic_data_rdist_cpu(cpu)->rd_lock);
> >> + vpe_to_cpuid_unlock(vpe, flags);
> >> } else {
> >> its_vpe_send_cmd(vpe, its_send_inv);
> >> }
> > The main reason this bug crept in is that we have some pretty silly
> > code duplication going on.
> >
> > Wouldn't it be nice if irq_to_cpuid() could work out whether it is
> > dealing with an LPI or a VLPI like it does today, but also directly
> > with a VPE? We could then use the same code as direct_lpi_inv(). I
> > came up with the hack below, which is totally untested as I don't
> > have access to GICv4.1 HW.
> >
> > Could you give it a spin?
>
> Nice, I will test it as soon as possible.

Cool, please let me know when you get a chance.

Thanks,

M.

--
Without deviation from the norm, progress is not possible.

2023-05-16 10:34:16

by Marc Zyngier

Subject: Re: [PATCH] irqchipi/gic-v4: Ensure accessing the correct RD when and writing INVLPIR

On Thu, 13 Apr 2023 04:57:17 +0100,
Kunkun Jiang <[email protected]> wrote:
>
> > Wouldn't it be nice if irq_to_cpuid() could work out whether it is
> > dealing with an LPI or a VLPI like it does today, but also directly
> > with a VPE? We could then use the same code as direct_lpi_inv(). I
> > came up with the hack below, which is totally untested as I don't
> > have access to GICv4.1 HW.
> >
> > Could you give it a spin?
>
> Nice, I will test it as soon as possible.

Did you ever manage to test this?

Thanks,

M.

--
Without deviation from the norm, progress is not possible.

2023-05-16 12:27:59

by Kunkun Jiang

Subject: Re: [PATCH] irqchipi/gic-v4: Ensure accessing the correct RD when and writing INVLPIR

Hi Marc,

On 2023/5/16 18:15, Marc Zyngier wrote:
> On Thu, 13 Apr 2023 04:57:17 +0100,
> Kunkun Jiang <[email protected]> wrote:
>>> Wouldn't it be nice if irq_to_cpuid() could work out whether it is
>>> dealing with an LPI or a VLPI like it does today, but also directly
>>> with a VPE? We could then use the same code as direct_lpi_inv(). I
>>> came up with the hack below, which is totally untested as I don't
>>> have access to GICv4.1 HW.
>>>
>>> Could you give it a spin?
>> Nice, I will test it as soon as possible.
> Did you ever manage to test this?
Sorry, I've only been coordinating the GICv4.1 environment in the last
few days. I have tested it with GICv4 many times, and it doesn't recur.
However, another call trace occurs with GICv4.1 when the device driver
is loaded in the guest. I haven't found out why. Maybe you can help analyze it.
> [ 7853.520985] Call trace:
> [ 7853.523418]  dump_backtrace.part.0+0xc4/0xd0
> [ 7853.527679]  show_stack+0x20/0x30
> [ 7853.530980]  dump_stack_lvl+0x68/0x84
> [ 7853.534630]  dump_stack+0x18/0x34
> [ 7853.537932]  __schedule_bug+0x5c/0x74
> [ 7853.541583]  __schedule+0xe24/0x1180
> [ 7853.545146]  schedule+0x64/0x104
> [ 7853.548362]  schedule_timeout+0x184/0x1c4
> [ 7853.552358]  wait_for_completion+0x7c/0x160
> [ 7853.556528]  __synchronize_srcu.part.0+0x80/0xf4
> [ 7853.561133]  synchronize_srcu_expedited+0x38/0x4c
> [ 7853.565823]  kvm_set_irq_routing+0x268/0x2d0
> [ 7853.570081]  kvm_vm_ioctl+0x624/0x13f0
> [ 7853.573817]  __arm64_sys_ioctl+0xb0/0xf4
> [ 7853.577728]  invoke_syscall+0x50/0x120
> [ 7853.581464]  el0_svc_common.constprop.0+0x168/0x190
> [ 7853.586330]  do_el0_svc+0x40/0xc4
> [ 7853.589632]  el0_svc+0x2c/0xb4
> [ 7853.592673]  el0t_64_sync_handler+0xb8/0xbc
> [ 7853.596843]  el0t_64_sync+0x1a0/0x1a4
> [ 7874.512000] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
> [ 7874.518087] rcu:     9-...0: (507 ticks this GP)
> idle=9054/1/0x4000000000000000 softirq=3223/3226 fqs=2255
> [ 7874.527467] rcu:     14-...0: (1182 ticks this GP)
> idle=97fc/1/0x4000000000000000 softirq=3480/3481 fqs=2255
> [ 7874.537020] rcu:     16-...0: (1183 ticks this GP)
> idle=af4c/1/0x4000000000000000 softirq=3355/3357 fqs=2255
> [ 7874.546573] rcu:     20-...0: (1189 ticks this GP)
> idle=8f9c/1/0x4000000000000000 softirq=3749/3751 fqs=2255
> [ 7874.556125] rcu:     38-...0: (1192 ticks this GP)
> idle=0204/1/0x4000000000000000 softirq=3515/3517 fqs=2255
> [ 7874.565677] rcu:     50-...0: (1194 ticks this GP)
> idle=4224/1/0x4000000000000000 softirq=3281/3283 fqs=2255
> [ 7874.575229] rcu:     56-...0: (573 ticks this GP)
> idle=309c/1/0x4000000000000000 softirq=3400/3400 fqs=2255
> [ 7874.584694] rcu:     58-...0: (1195 ticks this GP)
> idle=2094/1/0x4000000000000000 softirq=3565/3567 fqs=2255
> [ 7874.594246] rcu:     60-...0: (757 ticks this GP)
> idle=2ac4/1/0x4000000000000000 softirq=3343/3343 fqs=2255
> [ 7874.603711] rcu:     62-...0: (1183 ticks this GP)
> idle=1da4/1/0x4000000000000000 softirq=3403/3406 fqs=2255
> [ 7874.613263] rcu:     65-...0: (214 ticks this GP)
> idle=e40c/1/0x4000000000000000 softirq=3731/3731 fqs=2255
> [ 7874.622728] rcu:     66-...0: (1174 ticks this GP)
> idle=b3ec/1/0x4000000000000000 softirq=3783/3784 fqs=2255
> [ 7874.632280] rcu:     68-...0: (1194 ticks this GP)
> idle=60dc/1/0x4000000000000000 softirq=4373/4375 fqs=2255
> [ 7874.641832] rcu:     72-...0: (1183 ticks this GP)
> idle=a61c/1/0x4000000000000000 softirq=4001/4003 fqs=2255
> [ 7874.651383] rcu:     74-...0: (1182 ticks this GP)
> idle=ae2c/1/0x4000000000000000 softirq=4200/4202 fqs=2255
> [ 7874.660935] rcu:     79-...0: (1195 ticks this GP)
> idle=196c/1/0x4000000000000000 softirq=3598/3600 fqs=2255
> [ 7874.670487] rcu:     84-...0: (1183 ticks this GP)
> idle=5bac/1/0x4000000000000000 softirq=3302/3304 fqs=2255
> [ 7874.680039] rcu:     86-...0: (1183 ticks this GP)
> idle=17bc/1/0x4000000000000000 softirq=6017/6019 fqs=2255
> [ 7874.689591] rcu:     92-...0: (1183 ticks this GP)
> idle=8484/1/0x4000000000000000 softirq=3653/3655 fqs=2255
> [ 7874.699142] rcu:     96-...0: (1183 ticks this GP)
> idle=4dcc/1/0x4000000000000000 softirq=3348/3350 fqs=2255
> [ 7874.708694] rcu:     102-...0: (1180 ticks this GP)
> idle=a4bc/1/0x4000000000000000 softirq=3414/3416 fqs=2255
> [ 7874.718333] rcu:     108-...0: (1183 ticks this GP)
> idle=5c6c/1/0x4000000000000000 softirq=3490/3492 fqs=2255
> [ 7874.727972] rcu:     118-...0: (1195 ticks this GP)
> idle=08c4/1/0x4000000000000000 softirq=3637/3639 fqs=2255
> [ 7874.737610] rcu:     122-...0: (1181 ticks this GP)
> idle=68dc/1/0x4000000000000000 softirq=3863/3865 fqs=2255
> [ 7874.747251]     (detected by 90, t=5308 jiffies, g=16833, q=38
> ncpus=128)
> [ 7874.753765] Task dump for CPU 9:
> [ 7874.756980] task:qemu-system-aar state:R  running task stack:0    
> pid:8183  ppid:7426   flags:0x0000020a
> [ 7874.766881] Call trace:
> [ 7874.769314]  __switch_to+0xf0/0x170
> [ 7874.772794]  __per_cpu_offset+0x0/0x2000
> [ 7874.776705] Task dump for CPU 14:
> [ 7874.780006] task:qemu-system-aar state:R  running task stack:0    
> pid:8205  ppid:7426   flags:0x0000020a
> [ 7874.789906] Call trace:
> [ 7874.792338]  __switch_to+0xf0/0x170
> [ 7874.795814]  __per_cpu_offset+0x0/0x2000
> [ 7874.799724] Task dump for CPU 16:
> [ 7874.803024] task:qemu-system-aar state:R  running task stack:0    
> pid:8196  ppid:7426   flags:0x0000020a
> [ 7874.812924] Call trace:
> [ 7874.815357]  __switch_to+0xf0/0x170
> [ 7874.818833]  __per_cpu_offset+0x0/0x2000
> [ 7874.822742] Task dump for CPU 20:
> [ 7874.826042] task:qemu-system-aar state:R  running task stack:0    
> pid:8201  ppid:7426   flags:0x0000020a
> [ 7874.835942] Call trace:
> [ 7874.838374]  __switch_to+0xf0/0x170
> [ 7874.841850]  __per_cpu_offset+0x0/0x2000
> [ 7874.845760] Task dump for CPU 38:
> [ 7874.849061] task:qemu-system-aar state:R  running task stack:0    
> pid:8193  ppid:7426   flags:0x0000020a
> [ 7874.858960] Call trace:
> [ 7874.861393]  __switch_to+0xf0/0x170
> [ 7874.864868]  __per_cpu_offset+0x0/0x2000
> [ 7874.868778] Task dump for CPU 50:
> [ 7874.872078] task:qemu-system-aar state:R  running task stack:0    
> pid:8199  ppid:7426   flags:0x0000020a
> [ 7874.881978] Call trace:
> [ 7874.884411]  __switch_to+0xf0/0x170
> [ 7874.887887]  __per_cpu_offset+0x0/0x2000
> [ 7874.891796] Task dump for CPU 56:
> [ 7874.895097] task:qemu-system-aar state:R  running task stack:0    
> pid:8202  ppid:7426   flags:0x0000020a
> [ 7874.904996] Call trace:
> [ 7874.907429]  __switch_to+0xf0/0x170
> [ 7874.910904]  __per_cpu_offset+0x0/0x2000
> [ 7874.914814] Task dump for CPU 58:
> [ 7874.918114] task:qemu-system-aar state:R  running task stack:0    
> pid:8200  ppid:7426   flags:0x0000020a
> [ 7874.928014] Call trace:
> [ 7874.930446]  __switch_to+0xf0/0x170
> [ 7874.933922]  __per_cpu_offset+0x0/0x2000
> [ 7874.937831] Task dump for CPU 60:
> [ 7874.941132] task:qemu-system-aar state:R  running task stack:0    
> pid:8192  ppid:7426   flags:0x0000020a
> [ 7874.951032] Call trace:
> [ 7874.953464]  __switch_to+0xf0/0x170
> [ 7874.956940]  __per_cpu_offset+0x0/0x2000
> [ 7874.960849] Task dump for CPU 62:
> [ 7874.964150] task:qemu-system-aar state:R  running task stack:0    
> pid:8195  ppid:7426   flags:0x0000020a
> [ 7874.974049] Call trace:
> [ 7874.976482]  __switch_to+0xf0/0x170
> [ 7874.979958]  __per_cpu_offset+0x0/0x2000
> [ 7874.983867] Task dump for CPU 65:
> [ 7874.987167] task:qemu-system-aar state:R  running task stack:0    
> pid:8186  ppid:7426   flags:0x0000020a
> [ 7874.997067] Call trace:
> [ 7874.999499]  __switch_to+0xf0/0x170
> [ 7875.002975]  __per_cpu_offset+0x0/0x2000
> [ 7875.006885] Task dump for CPU 66:
> [ 7875.010185] task:qemu-system-aar state:R  running task stack:0    
> pid:8190  ppid:7426   flags:0x0000020a
> [ 7875.020085] Call trace:
> [ 7875.022517]  __switch_to+0xf0/0x170
> [ 7875.025993]  __per_cpu_offset+0x0/0x2000
> [ 7875.029902] Task dump for CPU 68:
> [ 7875.033203] task:qemu-system-aar state:R  running task stack:0    
> pid:8185  ppid:7426   flags:0x0000020a
> [ 7875.043103] Call trace:
> [ 7875.045535]  __switch_to+0xf0/0x170
> [ 7875.049011]  __per_cpu_offset+0x0/0x2000
> [ 7875.052920] Task dump for CPU 72:
> [ 7875.056221] task:qemu-system-aar state:R  running task stack:0    
> pid:8188  ppid:7426   flags:0x0000020a
> [ 7875.066120] Call trace:
> [ 7875.068553]  __switch_to+0xf0/0x170
> [ 7875.072029]  __per_cpu_offset+0x0/0x2000
> [ 7875.075938] Task dump for CPU 74:
> [ 7875.079238] task:qemu-system-aar state:R  running task stack:0    
> pid:8207  ppid:7426   flags:0x0000020a
> [ 7875.089138] Call trace:
> [ 7875.091570]  __switch_to+0xf0/0x170
> [ 7875.095046]  __per_cpu_offset+0x0/0x2000
> [ 7875.098955] Task dump for CPU 79:
> [ 7875.102256] task:qemu-system-aar state:R  running task stack:0    
> pid:8179  ppid:7426   flags:0x0000020a
> [ 7875.112155] Call trace:
> [ 7875.114588]  __switch_to+0xf0/0x170
> [ 7875.118064]  __per_cpu_offset+0x0/0x2000
> [ 7875.121973] Task dump for CPU 84:
> [ 7875.125273] task:qemu-system-aar state:R  running task stack:0    
> pid:8203  ppid:7426   flags:0x0000020a
> [ 7875.135173] Call trace:
> [ 7875.137605]  __switch_to+0xf0/0x170
> [ 7875.141081]  __per_cpu_offset+0x0/0x2000
> [ 7875.144990] Task dump for CPU 86:
> [ 7875.148291] task:qemu-system-aar state:R  running task stack:0    
> pid:8198  ppid:7426   flags:0x0000020a
> [ 7875.158191] Call trace:
> [ 7875.160623]  __switch_to+0xf0/0x170
> [ 7875.164099]  __per_cpu_offset+0x0/0x2000
> [ 7875.168008] Task dump for CPU 92:
> [ 7875.171309] task:qemu-system-aar state:R  running task stack:0    
> pid:8187  ppid:7426   flags:0x00000202
> [ 7875.181208] Call trace:
> [ 7875.183641]  __switch_to+0xf0/0x170
> [ 7875.187117]  __per_cpu_offset+0x0/0x2000
> [ 7875.191026] Task dump for CPU 96:
> [ 7875.194326] task:qemu-system-aar state:R  running task stack:0    
> pid:8177  ppid:7426   flags:0x0000020a
> [ 7875.204226] Call trace:
> [ 7875.206659]  __switch_to+0xf0/0x170
> [ 7875.210134]  __per_cpu_offset+0x0/0x2000
> [ 7875.214044] Task dump for CPU 102:
> [ 7875.217431] task:qemu-system-aar state:R  running task stack:0    
> pid:8176  ppid:7426   flags:0x00000202
> [ 7875.227331] Call trace:
> [ 7875.229763]  __switch_to+0xf0/0x170
> [ 7875.233239]  __per_cpu_offset+0x0/0x2000
> [ 7875.237148] Task dump for CPU 108:
> [ 7875.240536] task:qemu-system-aar state:R  running task stack:0    
> pid:8184  ppid:7426   flags:0x00000202
> [ 7875.250435] Call trace:
> [ 7875.252868]  __switch_to+0xf0/0x170
> [ 7875.256343]  __per_cpu_offset+0x0/0x2000
> [ 7875.260253] Task dump for CPU 118:
> [ 7875.263640] task:qemu-system-aar state:R  running task stack:0    
> pid:8206  ppid:7426   flags:0x0000020a
> [ 7875.273540] Call trace:
> [ 7875.275972]  __switch_to+0xf0/0x170
> [ 7875.279448]  __per_cpu_offset+0x0/0x2000
> [ 7875.283357] Task dump for CPU 122:
> [ 7875.286745] task:qemu-system-aar state:R  running task stack:0    
> pid:8189  ppid:7426   flags:0x0000020a
> [ 7875.296644] Call trace:
> [ 7875.299077]  __switch_to+0xf0/0x170
> [ 7875.302553]  __per_cpu_offset+0x0/0x2000
Thanks,
Kunkun Jiang
>
> Thanks,
>
> M.
>

2023-06-17 07:22:19

by Marc Zyngier

Subject: Re: [PATCH] irqchipi/gic-v4: Ensure accessing the correct RD when and writing INVLPIR

On Tue, 16 May 2023 13:01:59 +0100,
Kunkun Jiang <[email protected]> wrote:
>
> Hi Marc,
>
> On 2023/5/16 18:15, Marc Zyngier wrote:
> > On Thu, 13 Apr 2023 04:57:17 +0100,
> > Kunkun Jiang <[email protected]> wrote:
> >>> Wouldn't it be nice if irq_to_cpuid() could work out whether it is
> >>> dealing with an LPI or a VLPI like it does today, but also directly
> >>> with a VPE? We could then use the same code as direct_lpi_inv(). I
> >>> came up with the hack below, which is totally untested as I don't
> >>> have access to GICv4.1 HW.
> >>>
> >>> Could you give it a spin?
> >> Nice, I will test it as soon as possible.
> > Did you ever manage to test this?
> Sorry, I've only been coordinating the GICv4.1 environment in the last
> few days. I have tested it with GICv4 many times, and it doesn't recur.
> However, another call trace occurs with GICv4.1 when the device driver
> is loaded in the guest. I haven't found out why. Maybe you can help analyze it.

Which driver is loaded in the guest?

As I said, I don't have the HW at all, so I cannot test things, and
this stack-trace is not enough to analyse anything (you're not even
saying what you are doing).

M.

--
Without deviation from the norm, progress is not possible.

2023-06-17 08:27:36

by Marc Zyngier

Subject: Re: [PATCH] irqchipi/gic-v4: Ensure accessing the correct RD when and writing INVLPIR

On Tue, 16 May 2023 13:01:59 +0100,
Kunkun Jiang <[email protected]> wrote:
>
> Hi Marc,
>
> On 2023/5/16 18:15, Marc Zyngier wrote:
> > On Thu, 13 Apr 2023 04:57:17 +0100,
> > Kunkun Jiang <[email protected]> wrote:
> >>> Wouldn't it be nice if irq_to_cpuid() could work out whether it is
> >>> dealing with an LPI or a VLPI like it does today, but also directly
> >>> with a VPE? We could then use the same code as direct_lpi_inv(). I
> >>> came up with the hack below, which is totally untested as I don't
> >>> have access to GICv4.1 HW.
> >>>
> >>> Could you give it a spin?
> >> Nice, I will test it as soon as possible.
> > Did you ever manage to test this?
> Sorry, I've only been coordinating the GICv4.1 environment in the last
> few days. I have tested it with GICv4 many times, and it doesn't recur.
> However, another call trace occurs with GICv4.1 when the device driver
> is loaded in the guest. I haven't found out why. Maybe you can help analyze it.

I also went back to my patch, and it is a bit bogus (it doesn't even
compile to start with). I've now posted a new version[1] that should at
least fix the bug you initially reported.

Can you please test it and reply to it?

Thanks,

M.

[1] https://lore.kernel.org/r/[email protected]

--
Without deviation from the norm, progress is not possible.

2023-06-21 13:13:14

by Kunkun Jiang

Subject: Re: [PATCH] irqchipi/gic-v4: Ensure accessing the correct RD when and writing INVLPIR

Hi Marc,

On 2023/6/17 15:35, Marc Zyngier wrote:
> On Tue, 16 May 2023 13:01:59 +0100,
> Kunkun Jiang <[email protected]> wrote:
>> Hi Marc,
>>
>> On 2023/5/16 18:15, Marc Zyngier wrote:
>>> On Thu, 13 Apr 2023 04:57:17 +0100,
>>> Kunkun Jiang <[email protected]> wrote:
>>>>> Wouldn't it be nice if irq_to_cpuid() could work out whether it is
>>>>> dealing with an LPI or a VLPI like it does today, but also directly
>>>>> with a VPE? We could then use the same code as direct_lpi_inv(). I
>>>>> came up with the hack below, which is totally untested as I don't
>>>>> have access to GICv4.1 HW.
>>>>>
>>>>> Could you give it a spin?
>>>> Nice, I will test it as soon as possible.
>>> Did you ever manage to test this?
>> Sorry, I've only been coordinating the GICv4.1 environment in the last
>> few days. I have tested it with GICv4 many times, and it doesn't recur.
>> However, another call trace occurs with GICv4.1 when the device driver
>> is loaded in the guest. I haven't found out why. Maybe you can help analyze it.
> I also went back to my patch, and it is a bit bogus (it doesn't even
> compile to start with). I've now posted a new version[1] that should at
> least fix the bug you initially reported.
>
> Can you please test it and reply to it?
I haven't coordinated the environment well these days. I'll test it out
after Dragon Boat Festival.

Thanks,
Kunkun Jiang
> Thanks,
>
> M.
>
> [1] https://lore.kernel.org/r/[email protected]
>

2023-06-30 11:04:56

by Kunkun Jiang

Subject: Re: [PATCH] irqchipi/gic-v4: Ensure accessing the correct RD when and writing INVLPIR

Hi Marc,

On 2023/6/21 20:30, Kunkun Jiang wrote:
> Hi Marc,
>
> On 2023/6/17 15:35, Marc Zyngier wrote:
>> On Tue, 16 May 2023 13:01:59 +0100,
>> Kunkun Jiang <[email protected]> wrote:
>>> Hi Marc,
>>>
>>> On 2023/5/16 18:15, Marc Zyngier wrote:
>>>> On Thu, 13 Apr 2023 04:57:17 +0100,
>>>> Kunkun Jiang <[email protected]> wrote:
>>>>>> Wouldn't it be nice if irq_to_cpuid() could work out whether it is
>>>>>> dealing with an LPI or a VLPI like it does today, but also directly
>>>>>> with a VPE? We could then use the same code as direct_lpi_inv(). I
>>>>>> came up with the hack below, which is totally untested as I
>>>>>> don't
>>>>>> have access to GICv4.1 HW.
>>>>>>
>>>>>> Could you give it a spin?
>>>>> Nice, I will test it as soon as possible.
>>>> Did you ever manage to test this?
>>> Sorry, I've only been coordinating the GICv4.1 environment in the last
>>> few days. I have tested it with GICv4 many times, and it doesn't recur.
>>> However, another call trace occurs with GICv4.1 when the device driver
>>> is loaded in the guest. I haven't found out why. Maybe you can help
>>> analyze it.
>> I also went back to my patch, and it is a bit bogus (it doesn't even
>> compile to start with). I've now posted a new version[1] that should at
>> least fix the bug you initially reported.
>>
>> Can you please test it and reply to it?
> I haven't coordinated the environment well these days. I'll test it out
> after Dragon Boat Festival.
I coordinated the hardware environment and verified the new patch 20+ times.
It works fine. Thanks for the fix.

Thanks,
Kunkun Jiang
>
> Thanks,
> Kunkun Jiang
>> Thanks,
>>
>>     M.
>>
>> [1] https://lore.kernel.org/r/[email protected]
>>