From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Tejun Heo, Jiufei Xue,
 Jan Kara, Jens Axboe, Sasha Levin
Subject: [PATCH 4.14 32/52] writeback: synchronize sync(2) against cgroup writeback membership switches
Date: Mon, 4 Mar 2019 09:22:30 +0100
Message-Id: <20190304081619.007389596@linuxfoundation.org>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190304081617.159014799@linuxfoundation.org>
References: <20190304081617.159014799@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
X-Patchwork-Hint: ignore
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

4.14-stable review patch.  If anyone has any objections, please let me know.

------------------

[ Upstream commit 7fc5854f8c6efae9e7624970ab49a1eac2faefb1 ]

sync_inodes_sb() can race against cgwb (cgroup writeback) membership
switches and fail to write back some inodes.  For example, if an inode
switches to another wb while sync_inodes_sb() is in progress, the new
wb might not be visible to bdi_split_work_to_wbs() at all, or the inode
might jump from a wb which hasn't issued writebacks yet to one which
already has.

This patch adds backing_dev_info->wb_switch_rwsem to synchronize the
cgwb switch path against sync_inodes_sb(), so that sync_inodes_sb() is
guaranteed to see all the target wbs and inodes can't jump wbs to
escape syncing.

v2: Fixed misplaced rwsem init.  Spotted by Jiufei.

Signed-off-by: Tejun Heo
Reported-by: Jiufei Xue
Link: http://lkml.kernel.org/r/dc694ae2-f07f-61e1-7097-7c8411cee12d@gmail.com
Acked-by: Jan Kara
Signed-off-by: Jens Axboe
Signed-off-by: Sasha Levin
---
 fs/fs-writeback.c                | 40 ++++++++++++++++++++++++++++++--
 include/linux/backing-dev-defs.h |  1 +
 mm/backing-dev.c                 |  1 +
 3 files changed, 40 insertions(+), 2 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 3244932f4d5cc..6a76616c9401b 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -331,11 +331,22 @@ struct inode_switch_wbs_context {
 	struct work_struct	work;
 };
 
+static void bdi_down_write_wb_switch_rwsem(struct backing_dev_info *bdi)
+{
+	down_write(&bdi->wb_switch_rwsem);
+}
+
+static void bdi_up_write_wb_switch_rwsem(struct backing_dev_info *bdi)
+{
+	up_write(&bdi->wb_switch_rwsem);
+}
+
 static void inode_switch_wbs_work_fn(struct work_struct *work)
 {
 	struct inode_switch_wbs_context *isw =
 		container_of(work, struct inode_switch_wbs_context, work);
 	struct inode *inode = isw->inode;
+	struct backing_dev_info *bdi = inode_to_bdi(inode);
 	struct address_space *mapping = inode->i_mapping;
 	struct bdi_writeback *old_wb = inode->i_wb;
 	struct bdi_writeback *new_wb = isw->new_wb;
@@ -343,6 +354,12 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
 	bool switched = false;
 	void **slot;
 
+	/*
+	 * If @inode switches cgwb membership while sync_inodes_sb() is
+	 * being issued, sync_inodes_sb() might miss it.  Synchronize.
+	 */
+	down_read(&bdi->wb_switch_rwsem);
+
 	/*
 	 * By the time control reaches here, RCU grace period has passed
 	 * since I_WB_SWITCH assertion and all wb stat update transactions
@@ -435,6 +452,8 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
 	spin_unlock(&new_wb->list_lock);
 	spin_unlock(&old_wb->list_lock);
 
+	up_read(&bdi->wb_switch_rwsem);
+
 	if (switched) {
 		wb_wakeup(new_wb);
 		wb_put(old_wb);
@@ -475,9 +494,18 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
 	if (inode->i_state & I_WB_SWITCH)
 		return;
 
+	/*
+	 * Avoid starting new switches while sync_inodes_sb() is in
+	 * progress.  Otherwise, if the down_write protected issue path
+	 * blocks heavily, we might end up starting a large number of
+	 * switches which will block on the rwsem.
+	 */
+	if (!down_read_trylock(&bdi->wb_switch_rwsem))
+		return;
+
 	isw = kzalloc(sizeof(*isw), GFP_ATOMIC);
 	if (!isw)
-		return;
+		goto out_unlock;
 
 	/* find and pin the new wb */
 	rcu_read_lock();
@@ -511,12 +539,14 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
 	 * Let's continue after I_WB_SWITCH is guaranteed to be visible.
 	 */
 	call_rcu(&isw->rcu_head, inode_switch_wbs_rcu_fn);
-	return;
+	goto out_unlock;
 
 out_free:
 	if (isw->new_wb)
 		wb_put(isw->new_wb);
 	kfree(isw);
+out_unlock:
+	up_read(&bdi->wb_switch_rwsem);
 }
 
 /**
@@ -894,6 +924,9 @@ fs_initcall(cgroup_writeback_init);
 
 #else	/* CONFIG_CGROUP_WRITEBACK */
 
+static void bdi_down_write_wb_switch_rwsem(struct backing_dev_info *bdi) { }
+static void bdi_up_write_wb_switch_rwsem(struct backing_dev_info *bdi) { }
+
 static struct bdi_writeback *
 locked_inode_to_wb_and_lock_list(struct inode *inode)
 	__releases(&inode->i_lock)
@@ -2408,8 +2441,11 @@ void sync_inodes_sb(struct super_block *sb)
 		return;
 	WARN_ON(!rwsem_is_locked(&sb->s_umount));
 
+	/* protect against inode wb switch, see inode_switch_wbs_work_fn() */
+	bdi_down_write_wb_switch_rwsem(bdi);
 	bdi_split_work_to_wbs(bdi, &work, false);
 	wb_wait_for_completion(bdi, &done);
+	bdi_up_write_wb_switch_rwsem(bdi);
 
 	wait_sb_inodes(sb);
 }
diff --git a/include/linux/backing-dev-defs.h b/include/linux/backing-dev-defs.h
index 19240379637fe..b186c4b464e02 100644
--- a/include/linux/backing-dev-defs.h
+++ b/include/linux/backing-dev-defs.h
@@ -165,6 +165,7 @@ struct backing_dev_info {
 	struct radix_tree_root cgwb_tree; /* radix tree of active cgroup wbs */
 	struct rb_root cgwb_congested_tree; /* their congested states */
 	struct mutex cgwb_release_mutex;  /* protect shutdown of wb structs */
+	struct rw_semaphore wb_switch_rwsem; /* no cgwb switch while syncing */
 #else
 	struct bdi_writeback_congested *wb_congested;
 #endif
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 9386c98dac123..6fa31754eadd9 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -684,6 +684,7 @@ static int cgwb_bdi_init(struct backing_dev_info *bdi)
 	INIT_RADIX_TREE(&bdi->cgwb_tree, GFP_ATOMIC);
 	bdi->cgwb_congested_tree = RB_ROOT;
 	mutex_init(&bdi->cgwb_release_mutex);
+	init_rwsem(&bdi->wb_switch_rwsem);
 
 	ret = wb_init(&bdi->wb, bdi, 1, GFP_KERNEL);
 	if (!ret) {
-- 
2.19.1
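
[Editorial note, not part of the patch]

For readers following the locking scheme rather than the diff itself: the patch
makes sync_inodes_sb() take bdi->wb_switch_rwsem for write around
bdi_split_work_to_wbs()/wb_wait_for_completion(), while the cgwb switch paths
take it for read (with down_read_trylock() in inode_switch_wbs() so that new
switches are skipped rather than piling up behind a writer).  Below is a
minimal user-space sketch of that pattern using POSIX rwlocks; the function
names, bodies, and printouts are illustrative stand-ins, not kernel code.

/* sketch.c - illustrative analogue of the wb_switch_rwsem scheme above.
 * Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t wb_switch_rwsem = PTHREAD_RWLOCK_INITIALIZER;

/* sync_inodes_sb() side: hold the rwsem for write while work is split
 * across wbs and waited for, so no inode can change wb membership in the
 * middle and escape the sync. */
static void sync_inodes_sb_demo(void)
{
	pthread_rwlock_wrlock(&wb_switch_rwsem);
	printf("splitting work to wbs and waiting; no switches can start\n");
	pthread_rwlock_unlock(&wb_switch_rwsem);
}

/* inode_switch_wbs() side: only start a switch if sync is not in progress;
 * a trylock avoids queueing many switches behind the writer. */
static void inode_switch_wbs_demo(void)
{
	if (pthread_rwlock_tryrdlock(&wb_switch_rwsem) != 0) {
		printf("sync in progress, skipping switch\n");
		return;
	}
	printf("performing wb membership switch\n");
	pthread_rwlock_unlock(&wb_switch_rwsem);
}

int main(void)
{
	inode_switch_wbs_demo();
	sync_inodes_sb_demo();
	return 0;
}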