Subject: Re: linux 4.19.19: md0_raid:1317 blocked for more than 120 seconds.
To: Wolfgang Walter
Cc: Jens Axboe, NeilBrown, linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org
References: <2131016.q2kFhguZXe@stwm.de> <0ee180ac-bb43-6c2f-4084-5cc452a18c9d@suse.com> <3057098.nBgIypvgED@stwm.de>
From: Guoqing Jiang
Message-ID: <0c832f67-de10-8872-d3db-6a9f11c97454@suse.com>
Date: Thu, 14 Feb 2019 10:09:56 +0800
In-Reply-To: <3057098.nBgIypvgED@stwm.de>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2/12/19 7:20 PM, Wolfgang Walter wrote:
> On Tuesday, 12 February 2019, 16:20:11 Guoqing Jiang wrote:
>> On 2/11/19 11:12 PM, Wolfgang Walter wrote:
>>> With 4.19.19 we sometimes see the following issue (practically only with
>>> blk_mq, though):
>>>
>>> Feb  4 20:04:46 tettnang kernel: [252300.060165] INFO: task md0_raid1:317 blocked for more than 120 seconds.
>>> Feb  4 20:04:46 tettnang kernel: [252300.060188]       Not tainted 4.19.19-debian64.all+1.1 #1
>>> Feb  4 20:04:46 tettnang kernel: [252300.060197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> Feb  4 20:04:46 tettnang kernel: [252300.060207] md0_raid1       D    0   317      2 0x80000000
>>> Feb  4 20:04:46 tettnang kernel: [252300.060211] Call Trace:
>>> Feb  4 20:04:46 tettnang kernel: [252300.060222]  ? __schedule+0x2a2/0x8c0
>>> Feb  4 20:04:46 tettnang kernel: [252300.060226]  ? _raw_spin_unlock_irqrestore+0x20/0x40
>>> Feb  4 20:04:46 tettnang kernel: [252300.060229]  schedule+0x32/0x90
>>> Feb  4 20:04:46 tettnang kernel: [252300.060241]  md_super_wait+0x69/0xa0 [md_mod]
>>> Feb  4 20:04:46 tettnang kernel: [252300.060247]  ? finish_wait+0x80/0x80
>>> Feb  4 20:04:46 tettnang kernel: [252300.060255]  md_bitmap_wait_writes+0x8e/0xa0 [md_mod]
>>> Feb  4 20:04:46 tettnang kernel: [252300.060263]  ? md_bitmap_get_counter+0x42/0xd0 [md_mod]
>>> Feb  4 20:04:46 tettnang kernel: [252300.060271]  md_bitmap_daemon_work+0x1e8/0x380 [md_mod]
>>> Feb  4 20:04:46 tettnang kernel: [252300.060278]  ? md_rdev_init+0xb0/0xb0 [md_mod]
>>> Feb  4 20:04:46 tettnang kernel: [252300.060285]  md_check_recovery+0x26/0x540 [md_mod]
>>> Feb  4 20:04:46 tettnang kernel: [252300.060290]  raid1d+0x5c/0xf00 [raid1]
>>> Feb  4 20:04:46 tettnang kernel: [252300.060294]  ? preempt_count_add+0x79/0xb0
>>> Feb  4 20:04:46 tettnang kernel: [252300.060298]  ? lock_timer_base+0x67/0x80
>>> Feb  4 20:04:46 tettnang kernel: [252300.060302]  ? _raw_spin_unlock_irqrestore+0x20/0x40
>>> Feb  4 20:04:46 tettnang kernel: [252300.060304]  ? try_to_del_timer_sync+0x4d/0x80
>>> Feb  4 20:04:46 tettnang kernel: [252300.060306]  ? del_timer_sync+0x35/0x40
>>> Feb  4 20:04:46 tettnang kernel: [252300.060309]  ? schedule_timeout+0x17a/0x3b0
>>> Feb  4 20:04:46 tettnang kernel: [252300.060312]  ? preempt_count_add+0x79/0xb0
>>> Feb  4 20:04:46 tettnang kernel: [252300.060315]  ? _raw_spin_lock_irqsave+0x25/0x50
>>> Feb  4 20:04:46 tettnang kernel: [252300.060321]  ? md_rdev_init+0xb0/0xb0 [md_mod]
>>> Feb  4 20:04:46 tettnang kernel: [252300.060327]  ? md_thread+0xf9/0x160 [md_mod]
>>> Feb  4 20:04:46 tettnang kernel: [252300.060330]  ? r1bio_pool_alloc+0x20/0x20 [raid1]
>>> Feb  4 20:04:46 tettnang kernel: [252300.060336]  md_thread+0xf9/0x160 [md_mod]
>>> Feb  4 20:04:46 tettnang kernel: [252300.060340]  ? finish_wait+0x80/0x80
>>> Feb  4 20:04:46 tettnang kernel: [252300.060344]  kthread+0x112/0x130
>>> Feb  4 20:04:46 tettnang kernel: [252300.060346]  ? kthread_create_worker_on_cpu+0x70/0x70
>>> Feb  4 20:04:46 tettnang kernel: [252300.060350]  ret_from_fork+0x35/0x40
>>>
>>> I saw that there was a similar problem with raid10 and an upstream patch
>>>
>>> e820d55cb99dd93ac2dc949cf486bb187e5cd70d
>>> md: fix raid10 hang issue caused by barrier
>>> by Guoqing Jiang
>>>
>>> I wonder if a similar fix is needed for raid1?
>>
>> Seems not; the call trace tells us that a previous write of the superblock
>> did not finish as expected. There is a report for raid5 with a similar
>> problem in md_super_wait in the link [1]. Maybe you can disable blk-mq to
>> narrow down the issue as well.
>
> I already did for 4 weeks. I didn't see this with blk-mq disabled (for scsi
> and md), though this may be by luck.

Then I guess it may be related to blk-mq. Which scheduler are you using with
blk-mq? Maybe you can switch it to see whether the issue is caused by a
specific scheduler.
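For reference, the active blk-mq scheduler can be inspected and switched per
device through sysfs without rebooting; a minimal sketch ("sda" is just a
placeholder for one of the array's member disks, and writing requires root):

```shell
# Sketch: show and switch the blk-mq I/O scheduler for one block device.
# "sda" is a placeholder device name; substitute an md member disk.
dev=sda
if [ -r "/sys/block/$dev/queue/scheduler" ]; then
    # The active scheduler is shown in [brackets],
    # e.g. "[mq-deadline] kyber bfq none".
    cat "/sys/block/$dev/queue/scheduler"
    # To switch (as root), write one of the listed names back, e.g.:
    # echo none > "/sys/block/$dev/queue/scheduler"
else
    echo "no such device: $dev"
fi
```

The change takes effect immediately and lasts until reboot, so each candidate
scheduler can be tried in turn while watching for the hung-task message.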
>> [1] https://bbs.archlinux.org/viewtopic.php?id=243520
>
> I found this bug report in debian:
>
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=904822

Thanks, that bug report also says it didn't happen after disabling blk-mq.

Regards,
Guoqing