Date: Mon, 10 Jul 2023 09:56:34 +0200
From: Christoph Hellwig
To: Chuck Lever III
Cc: Jens Axboe, Christoph Hellwig, "linux-block@vger.kernel.org",
	Linux NFS Mailing List, Chuck Lever
Subject: Re: NFS workload leaves nfsd threads in D state
Message-ID: <20230710075634.GA30120@lst.de>
References: <7A57C7AE-A51A-4254-888B-FE15CA21F9E9@oracle.com>
In-Reply-To: <7A57C7AE-A51A-4254-888B-FE15CA21F9E9@oracle.com>
On Sat, Jul 08, 2023 at 06:30:26PM +0000, Chuck Lever III wrote:
> Hi -
>
> I have a "standard" test of running the git regression suite with
> many threads against an NFS mount. I found that with 6.5-rc, the
> test stalled and several nfsd threads on the server were stuck
> in D state.

Can you paste the exact reproducer here?

> I can reproduce this stall 100% with both an xfs and an ext4
> export, so I bisected with both, and both bisects landed on the
> same commit:
>
> On system 1: the exports are on top of /dev/mapper and reside on
> an "INTEL SSDSC2BA400G3" SATA device.
>
> On system 2: the exports are on top of /dev/mapper and reside on
> an "INTEL SSDSC2KB240G8" SATA device.
>
> System 1 was where I discovered the stall. System 2 is where I ran
> the bisects.

Ok.  I'd be curious if this reproduces without either device mapper
or on a non-SATA device.  If you have an easy way to run it in a VM
that'd be great.  Otherwise I'll try to recreate it in various setups
if you post the exact reproducer.
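
For anyone trying to recreate this, one quick way to confirm the symptom on
the server is to scan /proc for nfsd tasks sitting in uninterruptible sleep
(D state) and dump their kernel stacks. The sketch below is illustrative
only, not the tooling used in this report; it assumes a Linux host with
/proc mounted, and reading /proc/<pid>/stack usually requires root and a
kernel built with stacktrace support.

#!/usr/bin/env python3
# Illustrative sketch: list kernel threads named "nfsd" that are in
# uninterruptible sleep (D state) and print their kernel stacks.
import os

for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open(f"/proc/{pid}/comm") as f:
            comm = f.read().strip()
        with open(f"/proc/{pid}/stat") as f:
            # Field 3 of /proc/<pid>/stat is the task state; split after
            # the ")" that closes the comm field so odd names parse safely.
            state = f.read().rsplit(")", 1)[1].split()[0]
    except OSError:
        continue          # task exited while we were scanning
    if comm == "nfsd" and state == "D":
        print(f"nfsd pid {pid} is in D state")
        try:
            with open(f"/proc/{pid}/stack") as f:
                print(f.read())
        except OSError:
            pass          # not root, or kernel stack dumping unavailable

Running it while the workload is stalled should show which nfsd threads are
stuck and where; "echo w > /proc/sysrq-trigger" gives similar information in
dmesg if you prefer not to script it.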