Date: Tue, 26 Oct 2021 09:28:39 +0800
From: Ming Lei
To: Dmitry Osipenko
Cc: Stephen Rothwell, Linux Next Mailing List, Ulf Hansson, Adrian Hunter, Jens Axboe, Linux Kernel Mailing List, linux-mmc, linux-block
Subject: Re: linux-next: Tree for Oct 25
References: <20211025204921.73cb3011@canb.auug.org.au> <82bbf33e-918f-da01-95e6-9b2cc1b8b610@gmail.com>
In-Reply-To: <82bbf33e-918f-da01-95e6-9b2cc1b8b610@gmail.com>

On Tue, Oct 26, 2021 at 01:11:07AM +0300, Dmitry Osipenko wrote:
> Hello,
>
> Recent -next has this new warning splat coming from MMC, please take a look.
>
> ------------[ cut here ]------------
> WARNING: CPU: 0 PID: 525 at kernel/sched/core.c:9477 __might_sleep+0x65/0x68
> do not call blocking ops when !TASK_RUNNING; state=2 set at [<4316eb02>] prepare_to_wait+0x2e/0xb8
> Modules linked in:
> CPU: 0 PID: 525 Comm: Xorg Tainted: G W 5.15.0-rc6-next-20211025-00226-g89ccd6948ec3 #5
> Hardware name: NVIDIA Tegra SoC (Flattened Device Tree)
> (unwind_backtrace) from [] (show_stack+0x11/0x14)
> (show_stack) from [] (dump_stack_lvl+0x2b/0x34)
> (dump_stack_lvl) from [] (__warn+0xa1/0xe8)
> (__warn) from [] (warn_slowpath_fmt+0x65/0x7c)
> (warn_slowpath_fmt) from [] (__might_sleep+0x65/0x68)
> (__might_sleep) from [] (mmc_blk_rw_wait+0x2f/0x118)
> (mmc_blk_rw_wait) from [] (mmc_blk_mq_issue_rq+0x219/0x71c)
> (mmc_blk_mq_issue_rq) from [] (mmc_mq_queue_rq+0xf9/0x200)
> (mmc_mq_queue_rq) from [] (__blk_mq_try_issue_directly+0xcb/0x100)
> (__blk_mq_try_issue_directly) from [] (blk_mq_request_issue_directly+0x2d/0x48)
> (blk_mq_request_issue_directly) from [] (blk_mq_flush_plug_list+0x14f/0x1f4)
> (blk_mq_flush_plug_list) from [] (blk_flush_plug+0x83/0xb8)
> (blk_flush_plug) from [] (io_schedule+0x2b/0x3c)
> (io_schedule) from [] (bit_wait_io+0xf/0x48)

The trace shows io_schedule() flushing the block plug after prepare_to_wait() has already set the task state to TASK_UNINTERRUPTIBLE, so the direct-issue path can end up sleeping in mmc_blk_rw_wait(). The following patch should fix the issue:

diff --git a/block/blk-mq.c b/block/blk-mq.c
index a71aeed7b987..bee9cb2a44cb 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2223,7 +2223,7 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 		return;
 	plug->rq_count = 0;
 
-	if (!plug->multiple_queues && !plug->has_elevator) {
+	if (!plug->multiple_queues && !plug->has_elevator && !from_schedule) {
 		blk_mq_plug_issue_direct(plug, from_schedule);
 		if (rq_list_empty(plug->mq_list))
 			return;


-- 
Ming
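
For context, the pattern that trips the warning can be reduced to the following minimal sketch (illustrative kernel-style C, not the actual mmc/block code; the function and wait-queue names here are made up):

/*
 * Sketch of the warning condition: prepare_to_wait() sets the task
 * state to TASK_UNINTERRUPTIBLE, so any might_sleep() reached before
 * the task is back to TASK_RUNNING fires "do not call blocking ops
 * when !TASK_RUNNING".
 */
#include <linux/sched.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(sketch_wq);	/* hypothetical wait queue */

static void sketch_wait(void)
{
	DEFINE_WAIT(wait);

	/* Task state becomes TASK_UNINTERRUPTIBLE here. */
	prepare_to_wait(&sketch_wq, &wait, TASK_UNINTERRUPTIBLE);

	/*
	 * Per the trace above, io_schedule() calls blk_flush_plug(),
	 * and without the !from_schedule check blk_mq_flush_plug_list()
	 * issues requests directly from this context, reaching
	 * mmc_blk_rw_wait(), which may sleep, so __might_sleep() warns.
	 */
	io_schedule();

	finish_wait(&sketch_wq, &wait);
}

With the patch, from_schedule is true on this path, so the direct-issue branch is skipped and the plugged requests are instead inserted and the queues run asynchronously, outside the sleeping task's context.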