Date: Mon, 5 Jul 2021 16:35:28 -0400
From: "Theodore Ts'o"
To: Jan Kara
Cc: Ye Bin, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/2] ext4: Fix use-after-free about sbi->s_mmp_tsk
References: <20210629143603.2166962-1-yebin10@huawei.com>
 <20210629143603.2166962-2-yebin10@huawei.com>
 <20210705111548.GD15373@quack2.suse.cz>
In-Reply-To: <20210705111548.GD15373@quack2.suse.cz>

On Mon, Jul 05, 2021 at 01:15:48PM +0200, Jan Kara wrote:
> 
> That being said for this scheme spinlock is enough, you don't need a mutex
> for s_mmp_lock.

I think we can solve this without using either a spinlock or a mutex, and
it's a smaller and simpler patch as a result.  (This is the -v2 version of
this patch, which removes an unused label compared to the earlier version.)

From 22ebc97aac75e27a5fd11acdb2bc3030d1da58d1 Mon Sep 17 00:00:00 2001
From: Theodore Ts'o
Date: Fri, 2 Jul 2021 12:45:02 -0400
Subject: [PATCH] ext4: fix possible UAF when remounting r/o a mmp-protected
 file system

After commit 618f003199c6 ("ext4: fix memory leak in ext4_fill_super"),
after the file system is remounted read-only, there is a race where the
kmmpd thread can exit, causing sbi->s_mmp_tsk to point at freed memory,
which the call to ext4_stop_mmpd() can trip over.

Fix this by only allowing kmmpd() to exit when it is stopped via
ext4_stop_mmpd().

Link: https://lore.kernel.org/r/e525c0bf7b18da426bb3d3dd63830a3f85218a9e.1625244710.git.tytso@mit.edu
Reported-by: Ye Bin
Bug-Report-Link: <20210629143603.2166962-1-yebin10@huawei.com>
Signed-off-by: Theodore Ts'o
---
 fs/ext4/mmp.c   | 33 +++++++++++++++++----------------
 fs/ext4/super.c |  6 +++++-
 2 files changed, 22 insertions(+), 17 deletions(-)

diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c
index 6cb598b549ca..1e95cee3d8b7 100644
--- a/fs/ext4/mmp.c
+++ b/fs/ext4/mmp.c
@@ -157,6 +157,17 @@ static int kmmpd(void *data)
 	       sizeof(mmp->mmp_nodename));
 
 	while (!kthread_should_stop()) {
+		if (!(le32_to_cpu(es->s_feature_incompat) &
+		    EXT4_FEATURE_INCOMPAT_MMP)) {
+			ext4_warning(sb, "kmmpd being stopped since MMP feature"
+				     " has been disabled.");
+			goto wait_to_exit;
+		}
+		if (sb_rdonly(sb)) {
+			if (!kthread_should_stop())
+				schedule_timeout_interruptible(HZ);
+			continue;
+		}
 		if (++seq > EXT4_MMP_SEQ_MAX)
 			seq = 1;
 
@@ -177,16 +188,6 @@ static int kmmpd(void *data)
 			failed_writes++;
 		}
 
-		if (!(le32_to_cpu(es->s_feature_incompat) &
-		    EXT4_FEATURE_INCOMPAT_MMP)) {
-			ext4_warning(sb, "kmmpd being stopped since MMP feature"
-				     " has been disabled.");
-			goto exit_thread;
-		}
-
-		if (sb_rdonly(sb))
-			break;
-
 		diff = jiffies - last_update_time;
 		if (diff < mmp_update_interval * HZ)
 			schedule_timeout_interruptible(mmp_update_interval *
@@ -207,7 +208,7 @@ static int kmmpd(void *data)
 				ext4_error_err(sb, -retval,
 					       "error reading MMP data: %d",
 					       retval);
-				goto exit_thread;
+				goto wait_to_exit;
 			}
 
 			mmp_check = (struct mmp_struct *)(bh_check->b_data);
@@ -221,7 +222,7 @@ static int kmmpd(void *data)
 				ext4_error_err(sb, EBUSY, "abort");
 				put_bh(bh_check);
 				retval = -EBUSY;
-				goto exit_thread;
+				goto wait_to_exit;
 			}
 			put_bh(bh_check);
 		}
@@ -242,9 +243,11 @@ static int kmmpd(void *data)
 	mmp->mmp_seq = cpu_to_le32(EXT4_MMP_SEQ_CLEAN);
 	mmp->mmp_time = cpu_to_le64(ktime_get_real_seconds());
 
-	retval = write_mmp_block(sb, bh);
+	return write_mmp_block(sb, bh);
 
-exit_thread:
+wait_to_exit:
+	while (!kthread_should_stop())
+		schedule();
 	return retval;
 }
 
@@ -391,5 +394,3 @@ int ext4_multi_mount_protect(struct super_block *sb,
 	brelse(bh);
 	return 1;
 }
-
-
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index cdbe71d935e8..b8ff0399e171 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -5993,7 +5993,6 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
 				 */
 				ext4_mark_recovery_complete(sb, es);
 			}
-			ext4_stop_mmpd(sbi);
 		} else {
 			/* Make sure we can mount this feature set readwrite */
 			if (ext4_has_feature_readonly(sb) ||
@@ -6107,6 +6106,9 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
 	if (!test_opt(sb, BLOCK_VALIDITY) && sbi->s_system_blks)
 		ext4_release_system_zone(sb);
 
+	if (!ext4_has_feature_mmp(sb) || sb_rdonly(sb))
+		ext4_stop_mmpd(sbi);
+
 	/*
 	 * Some options can be enabled by ext4 and/or by VFS mount flag
 	 * either way we need to make sure it matches in both *flags and
@@ -6140,6 +6142,8 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
 	for (i = 0; i < EXT4_MAXQUOTAS; i++)
 		kfree(to_free[i]);
 #endif
+	if (!ext4_has_feature_mmp(sb) || sb_rdonly(sb))
+		ext4_stop_mmpd(sbi);
 	kfree(orig_data);
 	return err;
 }
-- 
2.31.0
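
[Editorial illustration, not part of the patch above.] To make the kthread
lifetime rule the patch relies on easier to see in isolation, here is a
minimal, self-contained module sketch.  The names demo_task, demo_worker,
and demo_do_work are made up for this example; only kthread_run(),
kthread_should_stop(), kthread_stop(), schedule(), and
schedule_timeout_interruptible() are real kernel APIs.  The point is the
same as the wait_to_exit loop in kmmpd(): the worker must never return on
its own, because the creator's saved task pointer (here demo_task, in ext4
sbi->s_mmp_tsk) must remain valid until kthread_stop() is called.

#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/jiffies.h>
#include <linux/err.h>

static struct task_struct *demo_task;	/* plays the role of sbi->s_mmp_tsk */

/* Stand-in for the real periodic work (e.g. writing an MMP block). */
static int demo_do_work(void)
{
	pr_info("demo_worker: heartbeat\n");
	return 0;
}

static int demo_worker(void *unused)
{
	int retval = 0;

	while (!kthread_should_stop()) {
		retval = demo_do_work();
		if (retval < 0)
			goto wait_to_exit;	/* never just return here */
		schedule_timeout_interruptible(HZ);
	}
	return retval;

wait_to_exit:
	/*
	 * Park until the creator calls kthread_stop().  Returning earlier
	 * would let this task go away while demo_task still points at it,
	 * which is the same use-after-free the ext4 patch closes.
	 */
	while (!kthread_should_stop())
		schedule();
	return retval;
}

static int __init demo_init(void)
{
	demo_task = kthread_run(demo_worker, NULL, "demo_worker");
	return PTR_ERR_OR_ZERO(demo_task);
}

static void __exit demo_exit(void)
{
	/* Always safe: the worker cannot have exited before this call. */
	kthread_stop(demo_task);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

With this structure the stopping side needs no extra locking: kthread_stop()
always finds the worker alive, which is why the patch can drop the proposed
spinlock/mutex around s_mmp_tsk entirely.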