From: Olga Kornievskaia
Date: Fri, 9 Sep 2022 13:59:57 -0400
Subject: Re: Regression: deadlock in io_schedule / nfs_writepage_locked
To: Trond Myklebust
Cc: "igor@gooddata.com", "anna@kernel.org", "linux-nfs@vger.kernel.org"
List-ID: linux-nfs@vger.kernel.org

On Fri, Sep 9, 2022 at 12:52 PM Trond Myklebust wrote:
>
> On Fri, 2022-09-09 at 16:47 +0000, Trond Myklebust wrote:
> > This looks like it might be the root cause issue.
> > It looks like you're using pNFS:
> >
> > /proc/3278822/stack:
> > [<0>] pnfs_update_layout+0x603/0xed0 [nfsv4]
> > [<0>] fl_pnfs_update_layout.constprop.18+0x23/0x1e0 [nfs_layout_nfsv41_files]
> > [<0>] filelayout_pg_init_write+0x3a/0x70 [nfs_layout_nfsv41_files]
> > [<0>] __nfs_pageio_add_request+0x294/0x470 [nfs]
> > [<0>] nfs_pageio_add_request_mirror+0x2f/0x40 [nfs]
> > [<0>] nfs_pageio_add_request+0x200/0x2d0 [nfs]
> > [<0>] nfs_page_async_flush+0x120/0x310 [nfs]
> > [<0>] nfs_writepages_callback+0x5b/0xc0 [nfs]
> > [<0>] write_cache_pages+0x187/0x4d0
> > [<0>] nfs_writepages+0xe1/0x200 [nfs]
> > [<0>] do_writepages+0xd2/0x1b0
> > [<0>] __writeback_single_inode+0x41/0x360
> > [<0>] writeback_sb_inodes+0x1f0/0x460
> > [<0>] __writeback_inodes_wb+0x5f/0xd0
> > [<0>] wb_writeback+0x235/0x2d0
> > [<0>] wb_workfn+0x312/0x4a0
> > [<0>] process_one_work+0x1c5/0x390
> > [<0>] worker_thread+0x30/0x360
> > [<0>] kthread+0xd7/0x100
> > [<0>] ret_from_fork+0x1f/0x30
> >
> > What is the pNFS server you are running against? I see you're using
> > the files pNFS layout type, so is this a NetApp?

This reminds me of the problem that was supposed to be fixed by the
patches that went into 5.19-rc3(?):

pNFS: Don't keep retrying if the server replied NFS4ERR_LAYOUTUNAVAILABLE
pNFS: Avoid a live lock condition in pnfs_update_layout()

Igor,

Is the server constantly returning LAYOUT_UNAVAILABLE? And does this
happen to be co-located with a volume move operation?

> >
> > Sorry for the HTML spam... Resending with all that crap stripped out.
> >
> > From: Igor Raits
> > Sent: Friday, September 9, 2022 11:09
> > To: Trond Myklebust
> > Cc: anna@kernel.org; linux-nfs@vger.kernel.org
> > Subject: Re: Regression: deadlock in io_schedule / nfs_writepage_locked
> >
> > Hello Trond,
> >
> > On Mon, Aug 22, 2022 at 5:01 PM Trond Myklebust wrote:
> > >
> > > On Mon, 2022-08-22 at 16:43 +0200, Igor Raits wrote:
> > > > [You don't often get email from igor@gooddata.com.
Learn why this
> > > > is important at https://aka.ms/LearnAboutSenderIdentification ]
> > > >
> > > > Hello Trond,
> > > >
> > > > On Mon, Aug 22, 2022 at 4:02 PM Trond Myklebust wrote:
> > > > >
> > > > > On Mon, 2022-08-22 at 10:16 +0200, Igor Raits wrote:
> > > > > >
> > > > > > Hello everyone,
> > > > > >
> > > > > > Hopefully I'm sending this to the right place…
> > > > > > We recently started to see the following stacktrace quite
> > > > > > often on our VMs that are using NFS extensively (I think
> > > > > > after upgrading to 5.18.11+, but not sure when exactly. For
> > > > > > sure it happens on 5.18.15):
> > > > > >
> > > > > > INFO: task kworker/u36:10:377691 blocked for more than 122 seconds.
> > > > > > Tainted: G E 5.18.15-1.gdc.el8.x86_64 #1
> > > > > > "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > > > > > task:kworker/u36:10 state:D stack: 0 pid:377691 ppid: 2 flags:0x00004000
> > > > > > Workqueue: writeback wb_workfn (flush-0:308)
> > > > > > Call Trace:
> > > > > >
> > > > > > __schedule+0x38c/0x7d0
> > > > > > schedule+0x41/0xb0
> > > > > > io_schedule+0x12/0x40
> > > > > > __folio_lock+0x110/0x260
> > > > > > ? filemap_alloc_folio+0x90/0x90
> > > > > > write_cache_pages+0x1e3/0x4d0
> > > > > > ? nfs_writepage_locked+0x1d0/0x1d0 [nfs]
> > > > > > nfs_writepages+0xe1/0x200 [nfs]
> > > > > > do_writepages+0xd2/0x1b0
> > > > > > ? check_preempt_curr+0x47/0x70
> > > > > > ? ttwu_do_wakeup+0x17/0x180
> > > > > > __writeback_single_inode+0x41/0x360
> > > > > > writeback_sb_inodes+0x1f0/0x460
> > > > > > __writeback_inodes_wb+0x5f/0xd0
> > > > > > wb_writeback+0x235/0x2d0
> > > > > > wb_workfn+0x348/0x4a0
> > > > > > ? put_prev_task_fair+0x1b/0x30
> > > > > > ? pick_next_task+0x84/0x940
> > > > > > ? __update_idle_core+0x1b/0xb0
> > > > > > process_one_work+0x1c5/0x390
> > > > > > worker_thread+0x30/0x360
> > > > > > ? process_one_work+0x390/0x390
> > > > > > kthread+0xd7/0x100
> > > > > > ? kthread_complete_and_exit+0x20/0x20
> > > > > > ret_from_fork+0x1f/0x30
> > > > > >
> > > > > > I see that something very similar was fixed in btrfs
> > > > > > (https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?h=linux-5.18.y&id=9535ec371d741fa037e37eddc0a5b25ba82d0027)
> > > > > > but I could not find anything similar for NFS.
> > > > > >
> > > > > > Do you happen to know if this is already fixed? If so, would
> > > > > > you mind sharing some commits? If not, could you help
> > > > > > getting this addressed?
> > > > >
> > > > > The stack trace you show above isn't particularly helpful for
> > > > > diagnosing what the problem is.
> > > > >
> > > > > All it is saying is that 'thread A' is waiting to take a page
> > > > > lock that is being held by a different 'thread B'. Without
> > > > > information on what 'thread B' is doing, and why it isn't
> > > > > releasing the lock, there is nothing we can conclude.
> > > >
> > > > Do you have some hint how to debug this issue further (when it
> > > > happens again)? Would `virsh dump` to get a memory dump and
> > > > then some kind of "bt all" via crash help to get more
> > > > information? Or something else?
> > > >
> > > > Thanks in advance!
> > > > --
> > > > Igor Raits
> > >
> > > Please try running the following two lines of 'bash' script as
> > > root:
> > >
> > > (for tt in $(grep -l 'nfs[^d]' /proc/*/stack); do echo "${tt}:";
> > > cat ${tt}; echo; done) >/tmp/nfs_threads.txt
> > >
> > > cat /sys/kernel/debug/sunrpc/rpc_clnt/*/tasks > /tmp/rpc_tasks.txt
> > >
> > > and then send us the output from the two files
> > > /tmp/nfs_threads.txt and /tmp/rpc_tasks.txt.
> > >
> > > The file nfs_threads.txt gives us a full set of stack traces from
> > > all processes that are currently in the NFS client code. So it
> > > should contain both the stack trace from your 'thread A' above,
> > > and the traces from all candidates for the 'thread B' process
> > > that is causing the blockage.
> > > The file rpc_tasks.txt gives us the status of any RPC calls that
> > > might be outstanding and might help diagnose any issues with the
> > > TCP connection.
> > >
> > > That should therefore give us a better starting point for root
> > > causing the problem.
> >
> > The rpc_tasks is empty but I got nfs_threads from the moment it is
> > stuck (see attached file).
> >
> > It still happens with 5.19.3, 5.19.6.
>
> --
> Trond Myklebust
> Linux NFS client maintainer, Hammerspace
> trond.myklebust@hammerspace.com
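[Editor's note: Trond's two diagnostic one-liners in the thread above can
be packaged as a single script. A minimal sketch follows; the PROC_ROOT,
DEBUGFS and OUT_DIR variables are additions not present in the original
commands, included only so the script can be exercised outside a live
system. On an affected machine, run it as root with the defaults.]

```shell
#!/bin/bash
# Sketch wrapping the two diagnostic commands from the thread into one
# script. PROC_ROOT, DEBUGFS and OUT_DIR are hypothetical knobs (not part
# of the original one-liners); on a live system, run as root with defaults.
PROC_ROOT="${PROC_ROOT:-/proc}"
DEBUGFS="${DEBUGFS:-/sys/kernel/debug}"
OUT_DIR="${OUT_DIR:-/tmp}"

nfs_threads="${OUT_DIR}/nfs_threads.txt"
rpc_tasks="${OUT_DIR}/rpc_tasks.txt"

# Dump the kernel stack of every task currently inside the NFS client
# code; the pattern 'nfs[^d]' matches nfs/nfsv4 symbols but skips nfsd.
: > "$nfs_threads"
for tt in $(grep -l 'nfs[^d]' "$PROC_ROOT"/*/stack 2>/dev/null); do
    { echo "${tt}:"; cat "$tt"; echo; } >> "$nfs_threads"
done

# Status of any outstanding RPC calls (empty when nothing is in flight,
# as in Igor's case). Fall back to an empty file if debugfs is absent.
cat "$DEBUGFS"/sunrpc/rpc_clnt/*/tasks > "$rpc_tasks" 2>/dev/null || : > "$rpc_tasks"

echo "wrote ${nfs_threads} and ${rpc_tasks}"
```

An empty rpc_tasks.txt alongside a populated nfs_threads.txt (Igor's
result) suggests the blockage is inside the client rather than a stuck
RPC on the wire.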