Subject: Re: [PATCH RFC] ext4: fix partial cluster initialization when splitting extent
To: Eric Whitney
Cc: linux-ext4@vger.kernel.org, tytso@mit.edu, joseph.qi@linux.alibaba.com
References: <1589444097-38535-1-git-send-email-jefflexu@linux.alibaba.com> <20200514222120.GB4710@localhost.localdomain> <20200518220804.GA20248@localhost.localdomain>
From: JeffleXu <jefflexu@linux.alibaba.com>
Message-ID: <9b526ae9-cba6-35dd-0424-61e8fa5ab016@linux.alibaba.com>
Date: Tue, 19 May 2020 11:29:17 +0800
In-Reply-To: <20200518220804.GA20248@localhost.localdomain>
X-Mailing-List: linux-ext4@vger.kernel.org
On 5/19/20 6:08 AM, Eric Whitney wrote:
> Hi, Jeffle:
>
> What kernel were you running when you observed your failures? Does your
> patch resolve all observed failures, or do any remain? Do you have a
> simple test script that reproduces the bug?
>
> I've made almost 1000 runs of shared/298 on various bigalloc configurations
> using Ted's test appliance on 5.7-rc5 and have not observed a failure.
> Several auto group runs have also passed without failures. Ideally, I'd
> like to be able to reproduce your failure to be sure we fully understand
> what's going on. It's still the case that the "2" is wrong, but I think
> that code in rm_leaf may be involved in an unexpected way.
>
> Thanks,
> Eric

Hi Eric,

The following is my test environment:

kernel: 5.7-rc4-git-eb24fdd8e6f5c6bb95129748a1801c6476492aba
e2fsprogs: latest release, version 1.45.6 (20-Mar-2020)
xfstests: git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git, master branch, latest commit

1. Test device

I run the test in a VM set up by qemu. The size of vdb is 1G:

```
# lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vdb    254:16   0   1G  0 disk
```

and it is initialized by:

```
qemu-img create -f qcow2 /XX/disk1.qcow2 1G
qemu-kvm -drive file=/XX/disk1.qcow2,if=virtio,format=qcow2 ...
```

2. Test script

local.config of xfstests is:

```
export TEST_DEV=/dev/vdb
export TEST_DIR=/mnt/test
export SCRATCH_DEV=/dev/vdc
export SCRATCH_MNT=/mnt/scratch
```

The following is an example script that reproduces the failure:

```sh
#!/bin/bash
for i in `seq 100`; do
        echo y | mkfs.ext4 -O bigalloc -C 16K /dev/vdb
        ./check shared/298
        status=$?
        if [[ $status == 1 ]]; then
                echo "$i exit"
                exit
        fi
done
```

Indeed the failure occurs only occasionally.
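As a side note, a small variant of the loop above that snapshots the kernel log whenever a run fails can make the intermittent failure easier to triage. This is just a sketch: it assumes the same /dev/vdb setup, a root shell, and the xfstests working directory as above.

```shell
#!/bin/bash
# Sketch: same reproduction loop, but save the kernel log of a failed run.
# Assumes /dev/vdb and root privileges, as in the script above.
for i in `seq 100`; do
        echo y | mkfs.ext4 -O bigalloc -C 16K /dev/vdb
        dmesg -C                # clear the ring buffer before each run
        ./check shared/298
        status=$?
        if [[ $status == 1 ]]; then
                dmesg > shared298-fail-$i.log   # keep the EXT4-fs error lines
                echo "$i exit"
                exit
        fi
done
```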
Sometimes the script stops at iteration 4; other runs stop at iteration 2, 7, or 24. When the failure occurs, dmesg reports:

```
[  387.471876] EXT4-fs error (device vdb): mb_free_blocks:1457: group 1, block 158084:freeing already freed block (bit 6753); block bitmap corrupt.
[  387.473729] EXT4-fs error (device vdb): ext4_mb_generate_buddy:747: group 1, block bitmap and bg descriptor inconsistent: 19550 vs 19551 free clusters
```

3. About the applied patch

The applied patch does fix the failure in my test environment; at least, the failure no longer occurs after running the full 100 iterations.

Thanks,
Jeffle
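P.S. If it helps, the bitmap corruption that dmesg reports should also be visible on disk: running e2fsck read-only against the test device after a failed iteration should flag the block bitmap inconsistency. A sketch, assuming the /dev/vdb device and /mnt/test mount point from my setup above:

```shell
# Sketch: check the on-disk state after a failed run (assumes /dev/vdb).
umount /mnt/test 2>/dev/null    # make sure the filesystem is not mounted
e2fsck -fn /dev/vdb             # -f: force check, -n: open read-only, answer "no"
```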