From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Josef Bacik, Filipe Manana, David Sterba
Subject: [PATCH 4.19 195/211] Btrfs: fix race setting up and completing qgroup rescan workers
Date: Thu, 3 Oct 2019 17:54:21 +0200
Message-Id: <20191003154529.096595580@linuxfoundation.org>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20191003154447.010950442@linuxfoundation.org>
References: <20191003154447.010950442@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

From: Filipe Manana

commit 13fc1d271a2e3ab8a02071e711add01fab9271f6 upstream.

There is a race between setting up a qgroup rescan worker and completing
a qgroup rescan worker that can lead to callers of the qgroup rescan wait
ioctl to either not wait for the rescan worker to complete or to hang
forever due to missing wake ups. The following diagram shows a sequence
of steps that illustrates the race.

 CPU 1                         CPU 2                          CPU 3

 btrfs_ioctl_quota_rescan()
  btrfs_qgroup_rescan()
   qgroup_rescan_init()
    mutex_lock(&fs_info->qgroup_rescan_lock)
    spin_lock(&fs_info->qgroup_lock)

    fs_info->qgroup_flags |=
      BTRFS_QGROUP_STATUS_FLAG_RESCAN

    init_completion(
      &fs_info->qgroup_rescan_completion)

    fs_info->qgroup_rescan_running = true

    mutex_unlock(&fs_info->qgroup_rescan_lock)
    spin_unlock(&fs_info->qgroup_lock)

    btrfs_init_work()
     --> starts the worker

                              btrfs_qgroup_rescan_worker()
                               mutex_lock(&fs_info->qgroup_rescan_lock)

                               fs_info->qgroup_flags &=
                                 ~BTRFS_QGROUP_STATUS_FLAG_RESCAN

                               mutex_unlock(&fs_info->qgroup_rescan_lock)

                               starts transaction, updates qgroup status
                               item, etc

                                                              btrfs_ioctl_quota_rescan()
                                                               btrfs_qgroup_rescan()
                                                                qgroup_rescan_init()
                                                                 mutex_lock(&fs_info->qgroup_rescan_lock)
                                                                 spin_lock(&fs_info->qgroup_lock)

                                                                 fs_info->qgroup_flags |=
                                                                   BTRFS_QGROUP_STATUS_FLAG_RESCAN

                                                                 init_completion(
                                                                   &fs_info->qgroup_rescan_completion)

                                                                 fs_info->qgroup_rescan_running = true

                                                                 mutex_unlock(&fs_info->qgroup_rescan_lock)
                                                                 spin_unlock(&fs_info->qgroup_lock)

                                                                 btrfs_init_work()
                                                                  --> starts another worker

                               mutex_lock(&fs_info->qgroup_rescan_lock)

                               fs_info->qgroup_rescan_running = false

                               mutex_unlock(&fs_info->qgroup_rescan_lock)

                               complete_all(&fs_info->qgroup_rescan_completion)

Before the rescan worker started by the task at CPU 3 completes, if
another task calls btrfs_ioctl_quota_rescan(), it will get -EINPROGRESS
because the flag BTRFS_QGROUP_STATUS_FLAG_RESCAN is set at
fs_info->qgroup_flags, which is expected and correct behaviour. However,
if another task calls btrfs_ioctl_quota_rescan_wait() before the rescan
worker started by the task at CPU 3 completes, it will return immediately
without waiting for the new rescan worker to complete, because
fs_info->qgroup_rescan_running was set to false by CPU 2.

This race makes test case btrfs/171 (from fstests) fail often:

  btrfs/171 9s ... - output mismatch (see /home/fdmanana/git/hub/xfstests/results//btrfs/171.out.bad)
  # --- tests/btrfs/171.out	2018-09-16 21:30:48.505104287 +0100
  # +++ /home/fdmanana/git/hub/xfstests/results//btrfs/171.out.bad	2019-09-19 02:01:36.938486039 +0100
  # @@ -1,2 +1,3 @@
  #  QA output created by 171
  # +ERROR: quota rescan failed: Operation now in progress
  #  Silence is golden
  # ...
  # (Run 'diff -u /home/fdmanana/git/hub/xfstests/tests/btrfs/171.out /home/fdmanana/git/hub/xfstests/results//btrfs/171.out.bad' to see the entire diff)

That is because the test calls the btrfs-progs commands "qgroup quota
rescan -w", "qgroup assign" and "qgroup remove" in a sequence that makes
calls to the rescan start ioctl fail with -EINPROGRESS (note that the
"btrfs" commands 'qgroup assign' and 'qgroup remove' often call the
rescan start ioctl after calling the qgroup assign ioctl,
btrfs_ioctl_qgroup_assign()), since previous waits didn't actually wait
for a rescan worker to complete.

Another problem the race can cause is missing wake ups for waiters, since
the call to complete_all() happens outside a critical section and after
clearing the flag BTRFS_QGROUP_STATUS_FLAG_RESCAN. In the sequence diagram
above, if we have a waiter for the first rescan task (executed by CPU 2),
then fs_info->qgroup_rescan_completion.wait is not empty. If, after the
rescan worker clears BTRFS_QGROUP_STATUS_FLAG_RESCAN but before it calls
complete_all() against fs_info->qgroup_rescan_completion, the task at
CPU 3 calls init_completion() against fs_info->qgroup_rescan_completion,
its wait queue is re-initialized to an empty queue, so the rescan worker
at CPU 2 ends up calling complete_all() against an empty queue and never
wakes up the task waiting for that rescan worker.

Fix this by clearing BTRFS_QGROUP_STATUS_FLAG_RESCAN and setting
fs_info->qgroup_rescan_running to false in the same critical section,
delimited by the mutex fs_info->qgroup_rescan_lock, as well as doing the
call to complete_all() in that same critical section. This gives the
protection needed to avoid rescan wait ioctl callers not waiting for a
running rescan worker and the lost wake ups problem, since setting that
rescan flag and boolean as well as initializing the wait queue is done
already in a critical section delimited by that mutex (at
qgroup_rescan_init()).
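
The wait ioctl can return without blocking in the first place because the
wait side only sleeps when it observes fs_info->qgroup_rescan_running set
to true. The snippet below is a paraphrased sketch of that path, modeled
on btrfs_qgroup_wait_for_completion() from a 4.19-era fs/btrfs/qgroup.c
rather than quoted from this mail; the helper name and the simplified
locking are made up for illustration:

/*
 * Sketch of the rescan wait path: if the finishing worker (CPU 2 in the
 * diagram) already set qgroup_rescan_running to false, the caller returns
 * immediately even though another rescan worker (CPU 3) is still running.
 */
static int qgroup_rescan_wait_sketch(struct btrfs_fs_info *fs_info,
				     bool interruptible)
{
	bool running;
	int ret = 0;

	/* Snapshot the flag under the same mutex qgroup_rescan_init() uses. */
	mutex_lock(&fs_info->qgroup_rescan_lock);
	running = fs_info->qgroup_rescan_running;
	mutex_unlock(&fs_info->qgroup_rescan_lock);

	if (!running)
		return 0;

	if (interruptible)
		ret = wait_for_completion_interruptible(
				&fs_info->qgroup_rescan_completion);
	else
		wait_for_completion(&fs_info->qgroup_rescan_completion);

	return ret;
}

With the fix, the worker clears the flag, sets qgroup_rescan_running to
false and calls complete_all() while holding that same mutex, so a later
qgroup_rescan_init() can only re-initialize the completion after the
previous worker has already woken its waiters, which closes the lost
wake up window.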
Fixes: 57254b6ebce4ce ("Btrfs: add ioctl to wait for qgroup rescan completion")
Fixes: d2c609b834d62f ("btrfs: properly track when rescan worker is running")
CC: stable@vger.kernel.org # 4.4+
Reviewed-by: Josef Bacik
Signed-off-by: Filipe Manana
Signed-off-by: David Sterba
Signed-off-by: Greg Kroah-Hartman

---
 fs/btrfs/qgroup.c |   33 +++++++++++++++++++--------------
 1 file changed, 19 insertions(+), 14 deletions(-)

--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -2796,9 +2796,6 @@ out:
 	btrfs_free_path(path);
 
 	mutex_lock(&fs_info->qgroup_rescan_lock);
-	if (!btrfs_fs_closing(fs_info))
-		fs_info->qgroup_flags &= ~BTRFS_QGROUP_STATUS_FLAG_RESCAN;
-
 	if (err > 0 &&
 	    fs_info->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT) {
 		fs_info->qgroup_flags &= ~BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT;
@@ -2814,16 +2811,30 @@ out:
 	trans = btrfs_start_transaction(fs_info->quota_root, 1);
 	if (IS_ERR(trans)) {
 		err = PTR_ERR(trans);
+		trans = NULL;
 		btrfs_err(fs_info,
 			  "fail to start transaction for status update: %d",
 			  err);
-		goto done;
 	}
-	ret = update_qgroup_status_item(trans);
-	if (ret < 0) {
-		err = ret;
-		btrfs_err(fs_info, "fail to update qgroup status: %d", err);
+
+	mutex_lock(&fs_info->qgroup_rescan_lock);
+	if (!btrfs_fs_closing(fs_info))
+		fs_info->qgroup_flags &= ~BTRFS_QGROUP_STATUS_FLAG_RESCAN;
+	if (trans) {
+		ret = update_qgroup_status_item(trans);
+		if (ret < 0) {
+			err = ret;
+			btrfs_err(fs_info, "fail to update qgroup status: %d",
+				  err);
+		}
 	}
+	fs_info->qgroup_rescan_running = false;
+	complete_all(&fs_info->qgroup_rescan_completion);
+	mutex_unlock(&fs_info->qgroup_rescan_lock);
+
+	if (!trans)
+		return;
+
 	btrfs_end_transaction(trans);
 
 	if (btrfs_fs_closing(fs_info)) {
@@ -2834,12 +2845,6 @@ out:
 	} else {
 		btrfs_err(fs_info, "qgroup scan failed with %d", err);
 	}
-
-done:
-	mutex_lock(&fs_info->qgroup_rescan_lock);
-	fs_info->qgroup_rescan_running = false;
-	mutex_unlock(&fs_info->qgroup_rescan_lock);
-	complete_all(&fs_info->qgroup_rescan_completion);
 }
 
 /*
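
For completeness, the user-visible symptom can be poked at from userspace
with the same ioctls the btrfs-progs commands in btrfs/171 end up using.
This is only an illustrative sketch, not a reliable reproducer: "/mnt" is
a placeholder for a btrfs mount with quotas already enabled, it needs
root, and actually hitting the race depends on the timing between two
concurrent rescan workers as in the CPU 1/2/3 diagram above.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>

int main(void)
{
	struct btrfs_ioctl_quota_rescan_args args;
	int fd = open("/mnt", O_RDONLY);	/* placeholder btrfs mount point */

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* "btrfs quota rescan -w": start a rescan, then wait for it. */
	memset(&args, 0, sizeof(args));
	if (ioctl(fd, BTRFS_IOC_QUOTA_RESCAN, &args) < 0)
		perror("BTRFS_IOC_QUOTA_RESCAN");
	if (ioctl(fd, BTRFS_IOC_QUOTA_RESCAN_WAIT) < 0)
		perror("BTRFS_IOC_QUOTA_RESCAN_WAIT");

	/*
	 * On an unpatched kernel, if another rescan was set up concurrently
	 * (as in the diagram), the wait above can return while that newer
	 * worker is still running, so a rescan started right afterwards (as
	 * "btrfs qgroup assign/remove" do internally) may fail with
	 * EINPROGRESS, the "Operation now in progress" error seen in the
	 * btrfs/171 output quoted above.
	 */
	memset(&args, 0, sizeof(args));
	if (ioctl(fd, BTRFS_IOC_QUOTA_RESCAN, &args) < 0 &&
	    errno == EINPROGRESS)
		fprintf(stderr, "rescan restart hit the race: %s\n",
			strerror(errno));

	close(fd);
	return 0;
}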