From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	xen-devel@lists.xensource.com
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, stable@vger.kernel.org
Subject: [PATCH 2/9] xen/smp/spinlock: Fix leakage of the spinlock interrupt line for every CPU online/offline
Date: Tue, 16 Apr 2013 16:09:00 -0400
Message-Id: <1366142947-18655-3-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1366142947-18655-1-git-send-email-konrad.wilk@oracle.com>
References: <1366142947-18655-1-git-send-email-konrad.wilk@oracle.com>

While we don't use the spinlock interrupt line (see commit
f10cd522c5fbfec9ae3cc01967868c9c2401ed23 "xen: disable PV spinlocks on
HVM" for details), we should still do the proper init / deinit
sequence. We did not do that correctly: on CPU online in a PVHVM guest
we would allocate a new interrupt line, but we failed to deallocate the
old one on the earlier CPU offline. This leaked an irq_desc and, more
importantly, produced this splat when onlining a previously offlined
CPU:

genirq: Flags mismatch irq 71. 0002cc20 (spinlock1) vs. 0002cc20 (spinlock1)
Pid: 2542, comm: init.late Not tainted 3.9.0-rc6upstream #1
Call Trace:
 [] __setup_irq+0x23e/0x4a0
 [] ? kmem_cache_alloc_trace+0x221/0x250
 [] request_threaded_irq+0xfb/0x160
 [] ? xen_spin_trylock+0x20/0x20
 [] bind_ipi_to_irqhandler+0xa3/0x160
 [] ? kasprintf+0x38/0x40
 [] ? xen_spin_trylock+0x20/0x20
 [] ? update_max_interval+0x15/0x40
 [] xen_init_lock_cpu+0x3c/0x78
 [] xen_hvm_cpu_notify+0x29/0x33
 [] notifier_call_chain+0x4d/0x70
 [] __raw_notifier_call_chain+0x9/0x10
 [] __cpu_notify+0x1b/0x30
 [] _cpu_up+0xa0/0x14b
 [] cpu_up+0xd9/0xec
 [] store_online+0x94/0xd0
 [] dev_attr_store+0x1b/0x20
 [] sysfs_write_file+0xf4/0x170
 [] vfs_write+0xb4/0x130
 [] sys_write+0x5a/0xa0
 [] system_call_fastpath+0x16/0x1b
cpu 1 spinlock event irq -16
smpboot: Booting Node 0 Processor 1 APIC 0x2

And if one looks at /proc/interrupts right after offlining CPU1:

 70:          0          0  xen-percpu-ipi       spinlock0
 71:          0          0  xen-percpu-ipi       spinlock1
 77:          0          0  xen-percpu-ipi       spinlock2

the oddity is that 'spinlock1' is still present even though CPU1 is
offline.

CC: stable@vger.kernel.org
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/smp.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index f80e69c..22c800a 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -662,6 +662,7 @@ static void xen_hvm_cpu_die(unsigned int cpu)
 	unbind_from_irqhandler(per_cpu(xen_debug_irq, cpu), NULL);
 	unbind_from_irqhandler(per_cpu(xen_callfuncsingle_irq, cpu), NULL);
 	unbind_from_irqhandler(per_cpu(xen_irq_work, cpu), NULL);
+	xen_uninit_lock_cpu(cpu);
 	xen_teardown_timer(cpu);
 	native_cpu_die(cpu);
 }
-- 
1.8.1.4
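
For readers unfamiliar with the failure mode, here is a minimal
userspace C sketch of the same pattern. It is an analogy only, not
kernel code: all names (sim_init_lock_cpu, sim_uninit_lock_cpu,
spinlock_irq_bound) are invented. It models how a per-CPU binding
taken on online must be released on offline, or the next online trips
a duplicate-binding check, just as genirq reports the flags mismatch
in the splat above:

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4

/* Stands in for the per-CPU spinlock IPI binding; a rough analogue of
 * the state behind xen_init_lock_cpu() / xen_uninit_lock_cpu(). */
static bool spinlock_irq_bound[NR_CPUS];

/* Analogy of the CPU-online path: binds the per-CPU "interrupt line".
 * Fails if a stale binding from a previous online is still present,
 * mirroring the "genirq: Flags mismatch irq 71 (spinlock1)" error. */
static int sim_init_lock_cpu(int cpu)
{
	if (spinlock_irq_bound[cpu]) {
		fprintf(stderr, "cpu%d: spinlock irq already bound\n", cpu);
		return -1;
	}
	spinlock_irq_bound[cpu] = true;
	return 0;
}

/* Analogy of the call this patch adds to xen_hvm_cpu_die(): drop the
 * binding on offline so the next online can bind cleanly. */
static void sim_uninit_lock_cpu(int cpu)
{
	spinlock_irq_bound[cpu] = false;
}

int main(void)
{
	sim_init_lock_cpu(1);		/* CPU1 online */
	/* Buggy offline path: sim_uninit_lock_cpu(1) is never called. */
	if (sim_init_lock_cpu(1))	/* CPU1 online again -> fails */
		puts("stale binding from the previous online");

	sim_uninit_lock_cpu(1);		/* fixed offline path unbinds, */
	if (sim_init_lock_cpu(1) == 0)	/* so the next online succeeds */
		puts("with the fix applied, CPU1 onlines cleanly");
	return 0;
}

The design point the one-line fix makes is symmetry: every resource
bound in the online notifier path (here via xen_init_lock_cpu()) needs
a matching release in the offline path, which xen_hvm_cpu_die() was
already doing for the other per-CPU IRQs via unbind_from_irqhandler().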