Date: Fri, 21 Aug 2020 11:08:48 -0400
From: Steven Rostedt
To: Joerg Vehlow
Cc: Andrew Morton, Thomas Gleixner, Sebastian Andrzej Siewior,
 Huang Ying, linux-kernel@vger.kernel.org, Joerg Vehlow
Subject: Re: [BUG RT] dump-capture kernel not executed for panic in interrupt context
Message-ID: <20200821110848.6c3183d1@oasis.local.home>
References: <2c243f59-6d10-7abb-bab4-e7b1796cd54f@jv-coder.de>
 <20200528084614.0c949e8d@gandalf.local.home>
 <20200727163655.8c94c8e245637b62311f5053@linux-foundation.org>

On Fri, 21 Aug 2020 12:25:33 +0200
Joerg Vehlow wrote:

> Hi Andrew and Others (please read
> at least the part with @RT developers),
>
> > Yup, mutex_trylock() from interrupt is improper. Well dang, that's a
> > bit silly. Presumably the 2006 spin_lock_mutex() wasn't taken with
> > irqs-off.
> >
> > Ho hum, did you look at switching the kexec code back to the xchg
> > approach?
>
> I looked into reverting to the xchg approach, but that no longer seems
> to be a good solution, because the mutex is now used in many places,
> often with waiting locks, and I guess that would require spinning now
> if we did this with a bare xchg.
>
> Instead I thought about using a spinlock, because spinlocks are supposed
> to be usable in interrupt context as well, if I understand the
> documentation correctly ([1]).
> @RT developers:
> Unfortunately the rt patches seem to interpret this a bit differently,
> and spin_trylock uses __rt_mutex_trylock again, with the same
> consequences as with the current code.
>
> I tried raw_spinlocks, but it looks like they result in a deadlock, at
> least in the rt kernel. This may be because of memory allocations in
> the critical sections, which are not allowed if I understand it
> correctly.
>
> I have no clue how to fix it at this point.
>
> Jörg
>
> [1] https://kernel.readthedocs.io/en/sphinx-samples/kernel-locking.html

There are only two places that wait on the mutex; all the other places
try to get it and, if that fails, simply exit.

What I would do is introduce a kexec_busy counter, and have something
like this:

For the two locations that actually wait on the mutex:

	loop:
		mutex_lock(&kexec_mutex);
		ret = atomic_inc_return(&kexec_busy);
		if (ret > 1) {
			/* Atomic context is busy on this counter, spin */
			atomic_dec(&kexec_busy);
			mutex_unlock(&kexec_mutex);
			goto loop;
		}

		[..]

		atomic_dec(&kexec_busy);
		mutex_unlock(&kexec_mutex);

And then all the other places that do the trylock:

	cant_sleep();
	ret = atomic_inc_return(&kexec_busy);
	if (ret > 1) {
		atomic_dec(&kexec_busy);
		return;
	}

	[..]

	atomic_dec(&kexec_busy);

-- 
Steve