From: John Keeping
To: linux-kernel@vger.kernel.org
Cc: linux-rt-users@vger.kernel.org, John Keeping,
    Sebastian Andrzej Siewior, Peter Zijlstra, Thomas Gleixner,
    Steven Rostedt, Ingo Molnar, Juri Lelli, Vincent Guittot,
    Dietmar Eggemann, Ben Segall, Mel Gorman,
    Daniel Bristot de Oliveira, Valentin Schneider
Subject: [PATCH v3] sched/core: Always flush pending blk_plug
Date: Fri, 8 Jul 2022 17:27:02 +0100
Message-Id: <20220708162702.1758865-1-john@metanate.com>
X-Mailer: git-send-email 2.37.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

With CONFIG_PREEMPT_RT, it is possible to hit a deadlock between two
normal-priority tasks (SCHED_OTHER, nice level zero):

INFO: task kworker/u8:0:8 blocked for more than 491 seconds.
      Not tainted 5.15.49-rt46 #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u8:0    state:D stack:    0 pid:    8 ppid:     2 flags:0x00000000
Workqueue: writeback wb_workfn (flush-7:0)
[] (__schedule) from [] (schedule+0xdc/0x134)
[] (schedule) from [] (rt_mutex_slowlock_block.constprop.0+0xb8/0x174)
[] (rt_mutex_slowlock_block.constprop.0) from [] (rt_mutex_slowlock.constprop.0+0xac/0x174)
[] (rt_mutex_slowlock.constprop.0) from [] (fat_write_inode+0x34/0x54)
[] (fat_write_inode) from [] (__writeback_single_inode+0x354/0x3ec)
[] (__writeback_single_inode) from [] (writeback_sb_inodes+0x250/0x45c)
[] (writeback_sb_inodes) from [] (__writeback_inodes_wb+0x7c/0xb8)
[] (__writeback_inodes_wb) from [] (wb_writeback+0x2c8/0x2e4)
[] (wb_writeback) from [] (wb_workfn+0x1a4/0x3e4)
[] (wb_workfn) from [] (process_one_work+0x1fc/0x32c)
[] (process_one_work) from [] (worker_thread+0x22c/0x2d8)
[] (worker_thread) from [] (kthread+0x16c/0x178)
[] (kthread) from [] (ret_from_fork+0x14/0x38)
Exception stack(0xc10e3fb0 to 0xc10e3ff8)
3fa0:                                     00000000 00000000 00000000 00000000
3fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
3fe0: 00000000 00000000 00000000 00000000 00000013 00000000

INFO: task tar:2083 blocked for more than 491 seconds.
      Not tainted 5.15.49-rt46 #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:tar             state:D stack:    0 pid: 2083 ppid:  2082 flags:0x00000000
[] (__schedule) from [] (schedule+0xdc/0x134)
[] (schedule) from [] (io_schedule+0x14/0x24)
[] (io_schedule) from [] (bit_wait_io+0xc/0x30)
[] (bit_wait_io) from [] (__wait_on_bit_lock+0x54/0xa8)
[] (__wait_on_bit_lock) from [] (out_of_line_wait_on_bit_lock+0x84/0xb0)
[] (out_of_line_wait_on_bit_lock) from [] (fat_mirror_bhs+0xa0/0x144)
[] (fat_mirror_bhs) from [] (fat_alloc_clusters+0x138/0x2a4)
[] (fat_alloc_clusters) from [] (fat_alloc_new_dir+0x34/0x250)
[] (fat_alloc_new_dir) from [] (vfat_mkdir+0x58/0x148)
[] (vfat_mkdir) from [] (vfs_mkdir+0x68/0x98)
[] (vfs_mkdir) from [] (do_mkdirat+0xb0/0xec)
[] (do_mkdirat) from [] (ret_fast_syscall+0x0/0x1c)
Exception stack(0xc2e1bfa8 to 0xc2e1bff0)
bfa0:                   01ee42f0 01ee4208 01ee42f0 000041ed 00000000 00004000
bfc0: 01ee42f0 01ee4208 00000000 00000027 01ee4302 00000004 000dcb00 01ee4190
bfe0: 000dc368 bed11924 0006d4b0 b6ebddfc

Here the kworker is waiting on msdos_sb_info::s_lock, which is held by
tar, which is in turn waiting for a buffer that is locked waiting to be
flushed, but that flush is plugged in the kworker.

The lock is a normal struct mutex, so tsk_is_pi_blocked() will always
return false on !RT and the behaviour therefore changes for RT.

It seems that the intent here is to skip blk_flush_plug() in the case
where a non-preemptible lock (such as a spinlock) has been converted to
a rtmutex on RT, which is the case covered by the SM_RTLOCK_WAIT
schedule flag.  But sched_submit_work() is only called from schedule(),
which is never called in this scenario, so the check can simply be
deleted.

Looking at the history of the -rt patchset, this check was in fact
present from v5.9.1-rt20 until being dropped in v5.13-rt1; it was part
of a larger patch [1], most of which was replaced by commit
b4bfa3fcfe3b ("sched/core: Rework the __schedule() preempt argument").
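The invariant the patch restores can be sketched in plain C.  This is a
hypothetical user-space model, not kernel code: task_plug, flush_plug and
submit_work_model() are illustrative names standing in for blk_plug,
blk_flush_plug() and sched_submit_work().  Sleeping on a regular mutex or
rwsem must always flush the task's plug so other tasks can make progress on
the queued I/O; sleeping on an RT-converted spinlock or rwlock must never
reach this path at all:

```c
#include <assert.h>

/* Model of a per-task block plug: I/O requests queued but not yet issued. */
struct task_plug {
	int pending;
};

enum sleep_reason {
	SLEEP_MUTEX,	/* blocking on a mutex/rwsem: sleeps on !RT too */
	SLEEP_RTLOCK,	/* blocking on a spinlock/rwlock made sleeping on RT */
};

/* Issue all plugged requests so tasks waiting on them can proceed.
 * Returns the number of requests flushed. */
static int flush_plug(struct task_plug *plug)
{
	int n = plug->pending;

	plug->pending = 0;
	return n;
}

/* Model of the patched behaviour: the plug is unconditionally flushed
 * before sleeping.  An RT-lock sleep must never get here (it goes through
 * the SM_RTLOCK_WAIT path, which skips sched_submit_work()), so reaching
 * this function in that state is a bug, asserted here in the spirit of
 * SCHED_WARN_ON(current->__state & TASK_RTLOCK_WAIT). */
static int submit_work_model(struct task_plug *plug, enum sleep_reason why)
{
	/* Flushing here with an rtlock held could deadlock: the flush
	 * callback may try to take the very lock we already hold. */
	assert(why != SLEEP_RTLOCK);
	return flush_plug(plug);
}
```

Before this patch, tsk_is_pi_blocked() made the RT kernel return early even
for a plain mutex sleep, leaving the kworker's plugged writeback unsubmitted
while it slept on s_lock: exactly the hang in the report above.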
As described in [1]:

    The schedule process must distinguish between blocking on a regular
    sleeping lock (rwsem and mutex) and a RT-only sleeping lock (spinlock
    and rwlock):

    - rwsem and mutex must flush block requests (blk_schedule_flush_plug())
      even if blocked on a lock.  This can not deadlock because this also
      happens for non-RT.  There should be a warning if the scheduling
      point is within a RCU read section.

    - spinlock and rwlock must not flush block requests.  This will
      deadlock if the callback attempts to acquire a lock which is already
      acquired.  Similarly to being preempted, there should be no warning
      if the scheduling point is within a RCU read section.

and with the tsk_is_pi_blocked() check in the scheduler path, we hit the
first issue.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git/tree/patches/0022-locking-rtmutex-Use-custom-scheduling-function-for-s.patch?h=linux-5.10.y-rt-patches

Cc: Sebastian Andrzej Siewior
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Reviewed-by: Steven Rostedt (Google)
Signed-off-by: John Keeping
---
v3:
- Add SCHED_WARN_ON(current->__state & TASK_RTLOCK_WAIT) as suggested
  by Peter

v2:
- Add Steven's R-b and update the commit message with his suggested
  quote from [1]

 include/linux/sched/rt.h | 8 --------
 kernel/sched/core.c      | 8 ++++++--
 2 files changed, 6 insertions(+), 10 deletions(-)

diff --git a/include/linux/sched/rt.h b/include/linux/sched/rt.h
index e5af028c08b49..994c25640e156 100644
--- a/include/linux/sched/rt.h
+++ b/include/linux/sched/rt.h
@@ -39,20 +39,12 @@ static inline struct task_struct *rt_mutex_get_top_task(struct task_struct *p)
 }
 
 extern void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task);
 extern void rt_mutex_adjust_pi(struct task_struct *p);
-
-static inline bool tsk_is_pi_blocked(struct task_struct *tsk)
-{
-	return tsk->pi_blocked_on != NULL;
-}
 #else
 static inline struct task_struct *rt_mutex_get_top_task(struct task_struct *task)
 {
 	return NULL;
 }
 # define rt_mutex_adjust_pi(p)		do { } while (0)
-
-static inline bool tsk_is_pi_blocked(struct task_struct *tsk)
-{
-	return false;
-}
 #endif
 
 extern void normalize_rt_tasks(void);
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1d4660a1915b3..71d6385ece83f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6578,8 +6578,12 @@ static inline void sched_submit_work(struct task_struct *tsk)
 			io_wq_worker_sleeping(tsk);
 	}
 
-	if (tsk_is_pi_blocked(tsk))
-		return;
+	/*
+	 * spinlock and rwlock must not flush block requests.  This will
+	 * deadlock if the callback attempts to acquire a lock which is
+	 * already acquired.
+	 */
+	SCHED_WARN_ON(current->__state & TASK_RTLOCK_WAIT);
 
 	/*
 	 * If we are going to sleep and we have plugged IO queued,
-- 
2.37.0