Date: Wed, 23 Dec 2015 09:27:28 -0700
Subject: Re: [PATCH V7 04/24] coresight: moving PM runtime operations to core framework
From: Mathieu Poirier
To: Rabin Vincent
Cc: Greg KH, Alexander Shishkin, Al Grant, linux-doc@vger.kernel.org,
	fainelli@broadcom.com, "linux-kernel@vger.kernel.org",
	"Jeremiassen, Tor", Mike Leach, Chunyan Zhang,
	"linux-arm-kernel@lists.infradead.org"
In-Reply-To: <20151219171342.GA2437@debian>
References: <1450472361-426-1-git-send-email-mathieu.poirier@linaro.org>
	<1450472361-426-5-git-send-email-mathieu.poirier@linaro.org>
	<20151219171342.GA2437@debian>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On 19 December 2015 at 10:13, Rabin Vincent wrote:
> On Fri, Dec 18, 2015 at 01:59:00PM -0700, Mathieu Poirier wrote:
>> @@ -415,9 +418,13 @@ struct list_head *coresight_build_path(struct coresight_device *csdev)
>>   */
>>  void coresight_release_path(struct list_head *path)
>>  {
>> +	struct coresight_device *csdev;
>>  	struct coresight_node *nd, *next;
>>
>>  	list_for_each_entry_safe(nd, next, path, link) {
>> +		csdev = nd->csdev;
>> +
>> +		pm_runtime_put_sync(csdev->dev.parent);
>>  		list_del(&nd->link);
>>  		kfree(nd);
>>  	}
>
> This leads to the following splat:
>
> BUG: sleeping function called from invalid context at /home/rabin/dev/linux/drivers/base/power/runtime.c:892
> in_atomic(): 1, irqs_disabled(): 128, pid: 763, name: perf
> 2 locks held by perf/763:
>  #0:  (&mm->mmap_sem){++++++}, at: [] vm_munmap+0x2c/0x50
>  #1:  (&event->mmap_mutex){+.+.+.}, at: [] atomic_dec_and_mutex_lock+0x58/0x98
> irq event stamp: 63152
> hardirqs last enabled at (63151): [] _raw_spin_unlock_irqrestore+0x30/0x5c
> hardirqs last disabled at (63152): [] __irq_svc+0x48/0x78
> softirqs last enabled at (61242): [] __do_softirq+0x408/0x4fc
> softirqs last disabled at (61223): [] irq_exit+0xcc/0x130
> CPU: 1 PID: 763 Comm: perf Not tainted 4.4.0-rc5-00224-ge461459-dirty #152
> Hardware name: Generic OMAP4 (Flattened Device Tree)
> [] (unwind_backtrace) from [] (show_stack+0x10/0x14)
> [] (show_stack) from [] (dump_stack+0x90/0xb8)
> [] (dump_stack) from [] (__pm_runtime_idle+0xa4/0xa8)
> [] (__pm_runtime_idle) from [] (coresight_release_path+0x38/0x7c)
> [] (coresight_release_path) from [] (free_event_data+0x84/0x9c)
> [] (free_event_data) from [] (rb_irq_work+0x4c/0xcc)
> [] (rb_irq_work) from [] (irq_work_run_list+0x7c/0xb4)
> [] (irq_work_run_list) from [] (irq_work_run+0x20/0x34)
> [] (irq_work_run) from [] (handle_IPI+0x1cc/0x334)
> [] (handle_IPI) from [] (gic_handle_irq+0x84/0x88)
> [] (gic_handle_irq) from [] (__irq_svc+0x58/0x78)
> Exception stack(0xed865e98 to 0xed865ee0)
> 5e80:                                                       00000001 00000110
> 5ea0: 00000000 ee2f1080 20000113 c0784808 edbe4e1c b6ae5000 edb882f0 ed9b1e04
> 5ec0: edb88298 c075648c 00000002 ed865ee8 c008724c c04f7ee0 20000113 ffffffff
> [] (__irq_svc) from [] (_raw_spin_unlock_irqrestore+0x34/0x5c)
> [] (_raw_spin_unlock_irqrestore) from [] (irq_work_queue+0xac/0xb4)
> [] (irq_work_queue) from [] (perf_mmap_close+0x370/0x3c8)
> [] (perf_mmap_close) from [] (remove_vma+0x40/0x6c)
> [] (remove_vma) from [] (do_munmap+0x210/0x35c)
> [] (do_munmap) from [] (vm_munmap+0x3c/0x50)
> [] (vm_munmap) from [] (ret_fast_syscall+0x0/0x1c)
>
> It should presumably be using pm_runtime_put() instead.

That's a first - what platform did you test on?  If I send you fixes
will you be able to help me with the verification?
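Something like the following is what I have in mind - an untested
sketch only, swapping in pm_runtime_put() as you suggest, since it
queues the reference drop asynchronously and is safe to call from the
atomic (IRQ-work) context the backtrace shows:

```c
/*
 * Sketch of the suggested fix: coresight_release_path() can be reached
 * from free_event_data() via IRQ work, where sleeping is forbidden.
 * pm_runtime_put_sync() may sleep, so use pm_runtime_put(), which
 * merely queues the idle request, instead.
 */
void coresight_release_path(struct list_head *path)
{
	struct coresight_device *csdev;
	struct coresight_node *nd, *next;

	list_for_each_entry_safe(nd, next, path, link) {
		csdev = nd->csdev;

		/* asynchronous put: no sleeping, safe in atomic context */
		pm_runtime_put(csdev->dev.parent);
		list_del(&nd->link);
		kfree(nd);
	}
}
```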
Thanks,
Mathieu