Date: Thu, 29 Dec 2022 11:05:51 +0800
From: Qu Wenruo
To: Mikhail Gavrilov
Cc: Qu Wenruo, dsterba@suse.com, Btrfs BTRFS, Linux List Kernel Mailing
Subject: Re: [6.2][regression] after commit 947a629988f191807d2d22ba63ae18259bb645c5 btrfs volume periodical forced switch to readonly after a lot of disk writes
Message-ID: <9d56a546-ea4f-83cb-4efb-093af270544b@gmx.com>
References: <41734bdb-2df0-6596-01b6-76a7dfd05d64@gmx.com> <24cd64b2-4536-372c-91af-b425d2f6efd4@gmx.com> <7d2edc1d-922b-763c-3122-0a6f81c3454e@suse.com>
X-Mailing-List: linux-kernel@vger.kernel.org
On 2022/12/29 08:08, Mikhail Gavrilov wrote:
> On Thu, Dec 29, 2022 at 4:31 AM Qu Wenruo wrote:
>>
>> Are you using qgroup? If so, it may be worth trying to disable qgroups.
>
> I do not use quota.
> And it looks like my distro does not enable quota by default.
> ❯ btrfs qgroup show -f /
> ERROR: can't list qgroups: quotas not enabled
>
>> But for newer kernels, a qgroup hang should only happen when dropping
>> a large snapshot. I don't know if podman pull would cause older
>> snapshots to be deleted...
>
> It is not a regression; it also happened on older kernels.
> But it is really annoying when the browser process waits while "podman
> pull" writes changes to disk.
> In fact, I have been waiting for 5 years for caching of slow HDDs
> using a cache on an SSD, but apparently I can't wait any longer,
> and I have started slowly buying expensive large SSDs to replace the
> big HDD. I still can't find time to connect the D5 P5316 30.72 TB to
> the primary workstation.
> I want to make a video review of it. I understand this is an expensive
> solution and not suitable for everyone, unlike an affordable SSD
> cache.

That is really sad to hear.

For now, I only have several guesses on how this could happen:

- Extra seeks between metadata and data chunks.
  You can try mixed block groups, but this needs mkfs-time tuning.

- Extra inlined extents causing too much metadata overhead.
  You can disable inline extents using max_inline=0 as a mount option,
  but that only affects newly created files, not the existing ones.
  (Rough example commands for both are sketched below.)
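A minimal sketch of both options, with /dev/sdX and /mnt as hypothetical
placeholders for the actual device and mount point (note that mkfs.btrfs
recreates the filesystem from scratch):

  # Mixed block groups can only be selected at mkfs time; this destroys
  # the existing filesystem on /dev/sdX:
  mkfs.btrfs --mixed /dev/sdX

  # Disable inline extents on an existing filesystem; only files written
  # afterwards are affected (a fresh mount with -o max_inline=0 works too):
  mount -o remount,max_inline=0 /mnt

Otherwise, using bcache may be a solution. A rough bcache pairing, again
with placeholder device names (make-bcache from bcache-tools attaches the
cache and backing device when both are created in one invocation):

  # Use a fast SSD as the cache and the slow HDD as the backing device,
  # then create btrfs on the resulting bcache device:
  make-bcache -C /dev/fast_ssd -B /dev/slow_hdd
  mkfs.btrfs /dev/bcache0

For now I'm not aware of any HDD-specific tests, other than for zoned
devices, so the performance problem can be really hard to debug.

Thanks,
Qu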