Date: Fri, 30 Jul 2021 00:25:30 +0000
From: Al Viro
To: NeilBrown
Cc: Christoph Hellwig, Josef Bacik, "J. Bruce Fields", Chuck Lever,
    Chris Mason, David Sterba, linux-fsdevel@vger.kernel.org,
    linux-nfs@vger.kernel.org, linux-btrfs@vger.kernel.org
Subject: Re: [PATCH 01/11] VFS: show correct dev num in mountinfo
In-Reply-To: <162742546548.32498.10889023150565429936.stgit@noble.brown>
References: <162742539595.32498.13687924366155737575.stgit@noble.brown>
 <162742546548.32498.10889023150565429936.stgit@noble.brown>

On Wed, Jul 28, 2021 at 08:37:45AM +1000, NeilBrown wrote:
> /proc/$PID/mountinfo contains a field for the device number of the
> filesystem at each mount.
>
> This is taken from the superblock ->s_dev field, which is correct for
> every filesystem except btrfs.  A btrfs filesystem can contain multiple
> subvols which each have a different device number.  If (a directory
> within) one of these subvols is mounted, the device number reported in
> mountinfo will be different from the device number reported by stat().
>
> This confuses some libraries and tools such as, historically, findmnt.
> Current findmnt seems to cope with the strangeness.
>
> So instead of using ->s_dev, call vfs_getattr_nosec() and use the ->dev
> it provides.  As there is no STATX flag to ask for the device number, we
> pass a request mask of zero, and also ask the filesystem to avoid
> syncing with any remote service.

Hard NAK.  You are putting IO (potentially network IO, with no upper
limit on the completion time) under namespace_sem.

This is an instant DoS: have a hung NFS mount anywhere in the system,
try to cat /proc/self/mountinfo, and watch a system-wide rwsem held
shared.  From that point on, any attempt to take it exclusive will hang,
*AND* after that all attempts to take it shared will do the same.

Please fix the BTRFS shite in BTRFS, without turning a moderately
unpleasant problem (say, an unplugged hub on the way to the NFS server)
into something that escalates into buggered clients.  Note that you have
also taken away any possibility of e.g. umount -l /path/to/stuck/mount,
along with any chance of a clean shutdown of the client.

Not going to happen.

NAKed-by: Al Viro
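
For readers following the thread, a rough sketch of the approach the
quoted description talks about might look like the code below: ask
vfs_getattr_nosec() for attributes with an empty request mask, then
print stat.dev instead of sb->s_dev.  The helper name show_mnt_dev() and
the error fallback are hypothetical, not the actual patch hunk, and the
AT_STATX_DONT_SYNC flag is assumed from the "avoid syncing with any
remote service" wording.  The comment marks where the objection above
bites: the mountinfo seq_file walk holds namespace_sem shared, so a
filesystem that blocks in its getattr path stalls every other user of
that lock.

/* Hypothetical helper, for illustration only; not the actual patch. */
#include <linux/fcntl.h>
#include <linux/fs.h>
#include <linux/kdev_t.h>
#include <linux/path.h>
#include <linux/seq_file.h>
#include <linux/stat.h>

static void show_mnt_dev(struct seq_file *m, const struct path *mnt_path)
{
	struct kstat stat;

	/*
	 * Request mask of zero (no STATX_* bit covers the device number),
	 * and AT_STATX_DONT_SYNC asks the filesystem not to talk to any
	 * remote service.  A filesystem may still block here, and the
	 * mountinfo seq_file walk runs with namespace_sem held shared,
	 * which is exactly the DoS scenario described in the NAK above.
	 */
	if (vfs_getattr_nosec(mnt_path, &stat, 0, AT_STATX_DONT_SYNC)) {
		/* On error, fall back to the superblock's device number. */
		stat.dev = mnt_path->dentry->d_sb->s_dev;
	}

	seq_printf(m, "%u:%u", MAJOR(stat.dev), MINOR(stat.dev));
}

In the mainline tree the mountinfo formatting lives in the seq_file
handlers in fs/proc_namespace.c; whether a change along these lines can
safely land there is what is being disputed in this thread.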