Date: Sat, 19 Nov 2016 20:32:41 +0100
From: Radim Krčmář
To: Paolo Bonzini
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Dmitry Vyukov
Subject: Re: [PATCH] KVM: async_pf: avoid recursive flushing of work items
Message-ID: <20161119193241.GF26213@potion>
References: <1479394547-15249-1-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1479394547-15249-1-git-send-email-pbonzini@redhat.com>

2016-11-17 15:55+0100, Paolo Bonzini:
> This was reported by syzkaller:
>
> [ INFO: possible recursive locking detected ]
> 4.9.0-rc4+ #49 Not tainted
> ---------------------------------------------
> kworker/2:1/5658 is trying to acquire lock:
> ([ 1644.769018] (&work->work)
> [< inline >] list_empty include/linux/compiler.h:243
> [] flush_work+0x0/0x660 kernel/workqueue.c:1511
>
> but task is already holding lock:
> ([ 1644.769018] (&work->work)
> [] process_one_work+0x94b/0x1900 kernel/workqueue.c:2093
>
> stack backtrace:
> CPU: 2 PID: 5658 Comm: kworker/2:1 Not tainted 4.9.0-rc4+ #49
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
> Workqueue: events async_pf_execute
>  ffff8800676ff630 ffffffff81c2e46b ffffffff8485b930 ffff88006b1fc480
>  0000000000000000 ffffffff8485b930 ffff8800676ff7e0 ffffffff81339b27
>  ffff8800676ff7e8 0000000000000046 ffff88006b1fcce8 ffff88006b1fccf0
> Call Trace:
> ...
> [] flush_work+0x93/0x660 kernel/workqueue.c:2846
> [] __cancel_work_timer+0x17a/0x410 kernel/workqueue.c:2916
> [] cancel_work_sync+0x17/0x20 kernel/workqueue.c:2951
> [] kvm_clear_async_pf_completion_queue+0xd7/0x400 virt/kvm/async_pf.c:126
> [< inline >] kvm_free_vcpus arch/x86/kvm/x86.c:7841
> [] kvm_arch_destroy_vm+0x23d/0x620 arch/x86/kvm/x86.c:7946
> [< inline >] kvm_destroy_vm virt/kvm/kvm_main.c:731
> [] kvm_put_kvm+0x40e/0x790 virt/kvm/kvm_main.c:752
> [] async_pf_execute+0x23d/0x4f0 virt/kvm/async_pf.c:111
> [] process_one_work+0x9fc/0x1900 kernel/workqueue.c:2096
> [] worker_thread+0xef/0x1480 kernel/workqueue.c:2230
> [] kthread+0x244/0x2d0 kernel/kthread.c:209
> [] ret_from_fork+0x2a/0x40 arch/x86/entry/entry_64.S:433
>
> The reason is that kvm_put_kvm is causing the destruction of the VM, but
> the page fault is still on the ->queue list. The ->queue list is owned
> by the VCPU, not by the work items, so we cannot just add list_del to
> the work item.
>
> Instead, use work->vcpu to note async page faults that have been resolved
> and will be processed through the done list. There is no need to flush
> those.
>
> Cc: Dmitry Vyukov
> Signed-off-by: Paolo Bonzini
> ---

Applied to kvm/master, thanks.
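
A minimal sketch of the approach the commit message describes (not the
literal diff): function and field names such as async_pf_execute,
kvm_clear_async_pf_completion_queue, struct kvm_async_pf,
vcpu->async_pf.queue/done/lock and async_pf_cache follow virt/kvm/async_pf.c,
but error paths and the locking around the queue walk are elided, and the
details may differ from what was actually applied.

/* Completion side: once the fault has been resolved and the item sits on
 * the ->done list, clear work->vcpu to mark it as "no flush needed". */
static void async_pf_execute(struct work_struct *work)
{
        struct kvm_async_pf *apf =
                container_of(work, struct kvm_async_pf, work);
        struct kvm_vcpu *vcpu = apf->vcpu;

        /* ... resolve the guest page fault ... */

        spin_lock(&vcpu->async_pf.lock);
        list_add_tail(&apf->link, &vcpu->async_pf.done);
        apf->vcpu = NULL;       /* resolved; teardown must not flush this */
        spin_unlock(&vcpu->async_pf.lock);

        /* ... wake the guest, kvm_put_kvm(), etc. ... */
}

/* Teardown side: only cancel work items that are still pending.  Items
 * with a cleared ->vcpu are already on ->done and will be freed when
 * that list is drained, so there is no need to flush them (which is
 * what recursed into our own work item in the report above). */
void kvm_clear_async_pf_completion_queue(struct kvm_vcpu *vcpu)
{
        while (!list_empty(&vcpu->async_pf.queue)) {
                struct kvm_async_pf *work =
                        list_first_entry(&vcpu->async_pf.queue,
                                         typeof(*work), queue);
                list_del(&work->queue);

                if (!work->vcpu)        /* already completed, skip */
                        continue;

                if (cancel_work_sync(&work->work)) {
                        mmput(work->mm);
                        kvm_put_kvm(vcpu->kvm);
                        kmem_cache_free(async_pf_cache, work);
                }
        }

        /* ... drain vcpu->async_pf.done as before ... */
}

The point is that ->queue is walked only by the vCPU, so a NULLed ->vcpu
pointer can double as a "done" marker without adding new state to the
work item.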