Date: Wed, 18 Jan 2023 10:11:32 -0800
From: "Paul E. McKenney"
To: Valentin Schneider
Cc: Wander Lairson Costa, Ingo Molnar, Peter Zijlstra, Juri Lelli,
    Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
    Mel Gorman, Daniel Bristot de Oliveira, "open list:SCHEDULER",
    Thomas Gleixner
Subject: Re: [PATCH] sched/deadline: fix inactive_task_timer splat with CONFIG_PREEMPT_RT
Message-ID: <20230118181132.GF2948950@paulmck-ThinkPad-P17-Gen-1>
Reply-To: paulmck@kernel.org
References: <20230104181701.43224-1-wander@redhat.com>
 <20230110013333.GH4028633@paulmck-ThinkPad-P17-Gen-1>
 <20230110222725.GT4028633@paulmck-ThinkPad-P17-Gen-1>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Jan 18, 2023 at 03:57:38PM +0000, Valentin Schneider wrote:
> On 10/01/23 14:27, Paul E. McKenney wrote:
> > On Tue, Jan 10, 2023 at 05:52:03PM -0300, Wander Lairson Costa wrote:
> >> On Mon, Jan 9, 2023 at 10:40 PM Paul E. McKenney wrote:
> >> >
> >> > On Wed, Jan 04, 2023 at 03:17:01PM -0300, Wander Lairson Costa wrote:
> >> > > inactive_task_timer() executes in interrupt (atomic) context. It calls
> >> > > put_task_struct(), which indirectly acquires sleeping locks under
> >> > > PREEMPT_RT.
> >> > >
> >> > > Below is an example of a splat that happened in a test environment:
> >> > >
> >> > > CPU: 1 PID: 2848 Comm: life Kdump: loaded Tainted: G W ---------
> >> > > Hardware name: HP ProLiant DL388p Gen8, BIOS P70 07/15/2012
> >> > > Call Trace:
> >> > >  dump_stack_lvl+0x57/0x7d
> >> > >  mark_lock_irq.cold+0x33/0xba
> >> > >  ? stack_trace_save+0x4b/0x70
> >> > >  ? save_trace+0x55/0x150
> >> > >  mark_lock+0x1e7/0x400
> >> > >  mark_usage+0x11d/0x140
> >> > >  __lock_acquire+0x30d/0x930
> >> > >  lock_acquire.part.0+0x9c/0x210
> >> > >  ? refill_obj_stock+0x3d/0x3a0
> >> > >  ? rcu_read_lock_sched_held+0x3f/0x70
> >> > >  ? trace_lock_acquire+0x38/0x140
> >> > >  ? lock_acquire+0x30/0x80
> >> > >  ? refill_obj_stock+0x3d/0x3a0
> >> > >  rt_spin_lock+0x27/0xe0
> >> > >  ? refill_obj_stock+0x3d/0x3a0
> >> > >  refill_obj_stock+0x3d/0x3a0
> >> > >  ? inactive_task_timer+0x1ad/0x340
> >> > >  kmem_cache_free+0x357/0x560
> >> > >  inactive_task_timer+0x1ad/0x340
> >> > >  ? switched_from_dl+0x2d0/0x2d0
> >> > >  __run_hrtimer+0x8a/0x1a0
> >> > >  __hrtimer_run_queues+0x91/0x130
> >> > >  hrtimer_interrupt+0x10f/0x220
> >> > >  __sysvec_apic_timer_interrupt+0x7b/0xd0
> >> > >  sysvec_apic_timer_interrupt+0x4f/0xd0
> >> > >  ? asm_sysvec_apic_timer_interrupt+0xa/0x20
> >> > >  asm_sysvec_apic_timer_interrupt+0x12/0x20
> >> > > RIP: 0033:0x7fff196bf6f5
> >> > >
> >> > > Instead of calling put_task_struct() directly, we defer it using
> >> > > call_rcu(). A more natural approach would use a workqueue, but since
> >> > > in PREEMPT_RT, we can't allocate dynamic memory from atomic context,
> >> > > the code would become more complex because we would need to put the
> >> > > work_struct instance in the task_struct and initialize it when we
> >> > > allocate a new task_struct.
> >> > >
> >> > > Signed-off-by: Wander Lairson Costa
> >> > > Cc: Paul McKenney
> >> > > Cc: Thomas Gleixner
> >> > > ---
> >> > >  kernel/sched/build_policy.c |  1 +
> >> > >  kernel/sched/deadline.c     | 24 +++++++++++++++++++++++-
> >> > >  2 files changed, 24 insertions(+), 1 deletion(-)
> >> > >
> >> > > diff --git a/kernel/sched/build_policy.c b/kernel/sched/build_policy.c
> >> > > index d9dc9ab3773f..f159304ee792 100644
> >> > > --- a/kernel/sched/build_policy.c
> >> > > +++ b/kernel/sched/build_policy.c
> >> > > @@ -28,6 +28,7 @@
> >> > >  #include
> >> > >  #include
> >> > >  #include
> >> > > +#include
> >> > >
> >> > >  #include
> >> > >
> >> > > diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> >> > > index 9ae8f41e3372..ab9301d4cc24 100644
> >> > > --- a/kernel/sched/deadline.c
> >> > > +++ b/kernel/sched/deadline.c
> >> > > @@ -1405,6 +1405,13 @@ static void update_curr_dl(struct rq *rq)
> >> > >  	}
> >> > >  }
> >> > >
> >> > > +static void delayed_put_task_struct(struct rcu_head *rhp)
> >> > > +{
> >> > > +	struct task_struct *task = container_of(rhp, struct task_struct, rcu);
> >> > > +
> >> > > +	__put_task_struct(task);
> >> >
> >> > Please note that BH is disabled here.  Don't you therefore
> >> > need to schedule a workqueue handler?  Perhaps directly from
> >> > inactive_task_timer(), or maybe from this point.  If the latter, one
> >> > way to skip the extra step is to use queue_rcu_work().
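
(For reference, a rough sketch of what the queue_rcu_work() variant could
look like. It assumes a hypothetical "struct rcu_work put_rwork" member
added to task_struct; task_struct has no such member today, and none of
this is part of the posted patch.)

#include <linux/workqueue.h>
#include <linux/sched.h>
#include <linux/sched/task.h>

/* Runs from a workqueue, in process context, after an RCU grace period. */
static void put_task_struct_workfn(struct work_struct *work)
{
	/* put_rwork is the hypothetical rcu_work member of task_struct. */
	struct task_struct *tsk = container_of(to_rcu_work(work),
					       struct task_struct, put_rwork);

	put_task_struct(tsk);
}

/* Called instead of put_task_struct() from atomic context. */
static void put_task_struct_deferred_wq(struct task_struct *p)
{
	/* Real code would initialize this once, when the task is allocated. */
	INIT_RCU_WORK(&p->put_rwork, put_task_struct_workfn);
	queue_rcu_work(system_wq, &p->put_rwork);
}

As the changelog above notes, the cost is the extra per-task state and its
initialization at task-allocation time, which is why the patch instead uses
call_rcu() and the rcu_head that task_struct already carries.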
> >> >
> >>
> >> My initial work was using a workqueue [1,2]. However, I realized I
> >> could reach a much simpler code with call_rcu().
> >>
> >> I am afraid my ignorance doesn't allow me to get your point. Does
> >> disabling softirq imply atomic context?
> >
> > Given that this problem occurred in PREEMPT_RT, I am assuming that the
> > appropriate definition of "atomic context" is "cannot call schedule()".
> > And you are in fact not permitted to call schedule() from a bh-disabled
> > region.
> >
> > This also means that you cannot acquire a non-raw spinlock in a
> > bh-disabled region of code in a PREEMPT_RT kernel, because doing
> > so can invoke schedule.
>
> But per the PREEMPT_RT lock "replacement", non-raw spinlocks end up
> invoking schedule_rtlock(), which should be safe vs BH disabled
> (local_lock() + rcu_read_lock()):
>
>   6991436c2b5d ("sched/core: Provide a scheduling point for RT locks")
>
> Unless I'm missing something else?

No, you miss nothing.  Apologies for my confusion!  (I could have sworn
that someone else corrected me on this earlier, but I don't see it right
off hand.)

							Thanx, Paul
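
For comparison, a rough sketch of how the call_rcu() route discussed above
could be driven from the atomic call site. This is illustrative only: the
helper name put_task_struct_deferred is made up, and the trimmed hunk of the
actual patch may differ.

#include <linux/rcupdate.h>
#include <linux/sched.h>
#include <linux/sched/task.h>

/* Same shape as the callback quoted from the patch above. */
static void delayed_put_task_struct(struct rcu_head *rhp)
{
	struct task_struct *task = container_of(rhp, struct task_struct, rcu);

	__put_task_struct(task);
}

static void put_task_struct_deferred(struct task_struct *p)
{
	if (!IS_ENABLED(CONFIG_PREEMPT_RT)) {
		/* !PREEMPT_RT: the locks taken on the free path do not sleep. */
		put_task_struct(p);
		return;
	}

	/*
	 * Drop the reference now and free from the RCU callback instead.
	 * Per the discussion above, acquiring PREEMPT_RT's spinlock_t from
	 * that (bh-disabled) context is permitted via schedule_rtlock().
	 */
	if (refcount_dec_and_test(&p->usage))
		call_rcu(&p->rcu, delayed_put_task_struct);
}

No extra per-task state is needed here, since task_struct already embeds the
rcu_head that the callback uses.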