Date: Sat, 23 Feb 2008 11:40:42 +0100
From: "Zdenek Kabelac"
To: linux-kernel@vger.kernel.org
Subject: copy_user_generic_string and trace_hardirqs_on_thunk loop in qemu ??

Hi

I'm looking for some help with this problem - while doing some other work,
my test script seems to get stuck in a really weird place.

Everything happens with a kernel running in qemu. When I run "dmsetup
status", dmsetup occasionally starts to take 100% CPU and cannot be killed.
This happens only when there is high disk load generated by some other code
(that code has some bugs in its locks and semaphores). But from the task
trace I always get this Call Trace for the running dmsetup task:

[ 140.106102] Call Trace:
[ 140.106102]  [] ? thread_return+0x8a/0x50c
[ 140.106102]  [] ? trace_hardirqs_on_thunk+0x35/0x3a
[ 140.106102]  [] ? restore_args+0x0/0x30
[ 140.106102]  [] ? copy_user_generic_string+0x17/0x40
[ 140.106102]  [] ? :dm_mod:copy_params+0x94/0xf0
[ 140.106102]  [] ? __capable+0x11/0x30
[ 140.106102]  [] ? :dm_mod:ctl_ioctl+0x169/0x260
[ 140.106102]  [] ? trace_hardirqs_on_thunk+0x35/0x3a
[ 140.106102]  [] ? :dm_mod:dm_compat_ctl_ioctl+0xd/0x20

[ 141.324472] Call Trace:
[ 141.324472]  [] ? thread_return+0x8a/0x50c
[ 141.324472]  [] ? trace_hardirqs_on_thunk+0x35/0x3a
[ 141.324472]  [] ? :dm_mod:table_status+0x0/0x90
[ 141.324472]  [] ? restore_args+0x0/0x30
[ 141.324472]  [] ? copy_user_generic_string+0x17/0x40
[ 141.324472]  [] ? :dm_mod:copy_params+0x94/0xf0
[ 141.324472]  [] ? __capable+0x11/0x30
[ 141.324472]  [] ? :dm_mod:ctl_ioctl+0x169/0x260
[ 141.324472]  [] ? trace_hardirqs_on_thunk+0x35/0x3a
[ 141.324472]  [] ? :dm_mod:dm_compat_ctl_ioctl+0xd/0x20
[ 141.324472]  [] ? compat_sys_ioctl+0x182/0x3d0

[ 417.993881] Call Trace:
[ 417.993881]  [] ? thread_return+0x8a/0x50c
[ 417.993881]  [] ? trace_hardirqs_on_thunk+0x35/0x3a
[ 417.993881]  [] ? error_exit+0x0/0xb8
[ 417.993881]  [] ? copy_user_generic_string+0x17/0x40
[ 417.993881]  [] ? :dm_mod:copy_params+0x94/0xf0
[ 417.993881]  [] ? __capable+0x11/0x30
[ 417.993881]  [] ? :dm_mod:ctl_ioctl+0x169/0x260
[ 417.993881]  [] ? trace_hardirqs_on_thunk+0x35/0x3a

When I put a printk before the copy_params() call in dm-ioctl.c to print the
passed parameters (the copy is 16KB there), and another printk after the call
has finished, it really does stop between these two printks, in the
copy_from_user() call (a rough sketch of the instrumentation is below).
When I run a second dmsetup command in parallel, the first copy finishes, I
can see 'done' printed, and I can start running the dmsetup status loop
again.
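For context, this is roughly where the hang sits and where the debug printks
go. It is a simplified sketch of copy_params() from drivers/md/dm-ioctl.c,
not the exact patch: variable names and the surrounding checks are
approximate, and the printks are placed around the copy that the traces
point at.

static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl **param)
{
	struct dm_ioctl tmp;
	struct dm_ioctl *dmi;

	/* copy just the fixed-size header first to learn data_size */
	if (copy_from_user(&tmp, user, sizeof(tmp)))
		return -EFAULT;

	if (tmp.data_size < sizeof(tmp))
		return -EINVAL;

	dmi = vmalloc(tmp.data_size);
	if (!dmi)
		return -ENOMEM;

	printk(KERN_DEBUG "copy_params: copying %u bytes from user\n",
	       tmp.data_size);

	/* this is the copy (~16KB here) that never returns while
	 * dmsetup spins at 100% CPU */
	if (copy_from_user(dmi, user, tmp.data_size)) {
		vfree(dmi);
		return -EFAULT;
	}

	printk(KERN_DEBUG "copy_params: done\n");

	*param = dmi;
	return 0;
}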
The machine is 64-bit and the qemu kernel is 64-bit; the dmsetup application
is 32-bit (but I also checked with a 64-bit one - no difference).

The kernel is compiled with:
CONFIG_PREEMPT_NOTIFIERS=y
CONFIG_PREEMPT_RCU=y
CONFIG_PREEMPT=y
CONFIG_DEBUG_PREEMPT=y

Does anyone have any explanation for this weird behaviour?
Is this a problem in qemu-kvm, kvm, or the Linux kernel?

Zdenek
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/