If ap_list is longer than 256 entries, merge_final() in list_sort() will
call the comparison function with the same element for both arguments,
as below:
do {
/*
* If the merge is highly unbalanced (e.g. the input is
* already sorted), this loop may run many iterations.
* Continue callbacks to the client even though no
* element comparison is needed, so the client's cmp()
* routine can invoke cond_resched() periodically.
*/
if (unlikely(!++count))
cmp(priv, b, b);
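
For context, the 256 threshold comes from merge_final() keeping count
in a u8, so the !++count test above fires once every 256 loop
iterations. A minimal userspace sketch (illustration only, not kernel
code) of that wrap-around:

	#include <stdint.h>
	#include <stdio.h>

	/* Stand-in for merge_final()'s counter: count is 8 bits wide, so
	 * !++count is true once every 256 steps; at each wrap list_sort()
	 * calls cmp() with the same element for both arguments. */
	int main(void)
	{
		uint8_t count = 0;
		unsigned long self_cmp_calls = 0;

		for (unsigned long i = 0; i < 1000; i++) {
			if (!++count)
				self_cmp_calls++;	/* would be cmp(priv, b, b) */
		}
		printf("self-compares in 1000 steps: %lu\n", self_cmp_calls);
		return 0;
	}

This prints 3: one self-compare per 256 merge steps, which is why only
an ap_list longer than 256 entries triggers the problem.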
This deadlocks in vgic_irq_cmp(): when a == b, irqa and irqb resolve to
the same vgic_irq, so raw_spin_lock_nested() ends up spinning on the
irq_lock the function has just taken. The call trace is:
[ 2667.130283] Call trace:
[ 2667.130284] queued_spin_lock_slowpath+0x64/0x2a8
[ 2667.130284] vgic_irq_cmp+0xfc/0x130
[ 2667.130284] list_sort.part.0+0x1c0/0x268
[ 2667.130285] list_sort+0x18/0x28
[ 2667.130285] vgic_flush_lr_state+0x158/0x518
[ 2667.130285] kvm_vgic_flush_hwstate+0x70/0x108
[ 2667.130286] kvm_arch_vcpu_ioctl_run+0x114/0xa50
[ 2667.130286] kvm_vcpu_ioctl+0x490/0x8c8
[ 2667.130286] do_vfs_ioctl+0xc4/0x8c0
[ 2667.130287] ksys_ioctl+0x8c/0xa0
[ 2667.130287] __arm64_sys_ioctl+0x28/0x38
[ 2667.130287] el0_svc_common+0x78/0x130
[ 2667.130288] el0_svc_handler+0x38/0x78
[ 2667.130288] el0_svc+0x8/0xc
So return 0 immediately when a == b.
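
To make the failure mode concrete, below is a self-contained userspace
sketch of the same locking pattern (a hypothetical struct elem stands
in for struct vgic_irq, and a pthread mutex for irq_lock):

	#include <pthread.h>
	#include <stdio.h>

	struct elem {
		pthread_mutex_t lock;	/* plays the role of irq_lock */
		int prio;
	};

	/* Comparison callback in the style of vgic_irq_cmp(): lock both
	 * elements, compare, unlock. Without the a == b guard, a call
	 * such as cmp(e, e) relocks a mutex this thread already holds
	 * and hangs -- the kernel deadlock in miniature. */
	static int cmp(struct elem *a, struct elem *b)
	{
		int ret;

		if (a == b)	/* the fix: same element, trivially equal */
			return 0;

		pthread_mutex_lock(&a->lock);
		pthread_mutex_lock(&b->lock);
		ret = a->prio - b->prio;
		pthread_mutex_unlock(&b->lock);
		pthread_mutex_unlock(&a->lock);
		return ret;
	}

	int main(void)
	{
		struct elem e = { PTHREAD_MUTEX_INITIALIZER, 5 };

		/* list_sort() may legally do this on lists > 256 long: */
		printf("cmp(e, e) = %d\n", cmp(&e, &e));
		return 0;
	}

With the guard removed, the cmp(&e, &e) call never returns: relocking a
default (non-recursive) pthread mutex hangs, just as the raw spinlock
does in the trace above.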
Signed-off-by: Zenghui Yu <[email protected]>
Signed-off-by: Heyi Guo <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: James Morse <[email protected]>
Cc: Julien Thierry <[email protected]>
Cc: Suzuki K Poulose <[email protected]>
---
virt/kvm/arm/vgic/vgic.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 13d4b38..64ed0dc 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -254,6 +254,13 @@ static int vgic_irq_cmp(void *priv, struct list_head *a, struct list_head *b)
bool penda, pendb;
int ret;
+ /*
+ * list_sort may call this function with the same element when the list
+ * is farely long.
+ */
+ if (unlikely(a == b))
+ return 0;
+
raw_spin_lock(&irqa->irq_lock);
raw_spin_lock_nested(&irqb->irq_lock, SINGLE_DEPTH_NESTING);
--
1.8.3.1
On 2019/8/27 0:39, Heyi Guo wrote:
[...]
> + /*
> + * list_sort may call this function with the same element when the list
> + * is farely long.
Sorry, s/farely/fairly/ :)
HG