Date: Thu, 8 Sep 2022 11:12:20 +0200
From: Jan Kara
To: Ojaswin Mujoo
Cc: Jan Kara, Ted Tso, linux-ext4@vger.kernel.org, Thorsten Leemhuis,
	Stefan Wahren, Andreas Dilger
Subject: Re: [PATCH 0/5 v2] ext4: Fix performance regression with mballoc
Message-ID: <20220908091220.zgtbnlyvhu66s3xr@quack3>
References: <20220906150803.375-1-jack@suse.cz>
List-ID: linux-ext4@vger.kernel.org

On Thu 08-09-22 13:47:56, Ojaswin Mujoo wrote:
> On Tue, Sep 06, 2022 at 05:29:06PM +0200, Jan Kara wrote:
> > Hello,
> >
> > Here is a second version of my mballoc improvements to avoid spreading
> > allocations with mb_optimize_scan=1. The patches fix the performance
> > regression I was able to reproduce with reaim on my test machine:
> >
> >                     mb_optimize_scan=0    mb_optimize_scan=1    patched
> > Hmean disk-1         2076.12 (  0.00%)    2099.37 (   1.12%)    2032.52 ( -2.10%)
> > Hmean disk-41       92481.20 (  0.00%)   83787.47 * -9.40%*    90308.37 ( -2.35%)
> > Hmean disk-81      155073.39 (  0.00%)  135527.05 * -12.60%*  154285.71 ( -0.51%)
> > Hmean disk-121     185109.64 (  0.00%)  166284.93 * -10.17%*  185298.62 (  0.10%)
> > Hmean disk-161     229890.53 (  0.00%)  207563.39 * -9.71%*   232883.32 *  1.30%*
> > Hmean disk-201     223333.33 (  0.00%)  203235.59 * -9.00%*   221446.93 ( -0.84%)
> > Hmean disk-241     235735.25 (  0.00%)  217705.51 * -7.65%*   239483.27 *  1.59%*
> > Hmean disk-281     266772.15 (  0.00%)  241132.72 * -9.61%*   263108.62 ( -1.37%)
> > Hmean disk-321     265435.50 (  0.00%)  245412.84 * -7.54%*   267277.27 (  0.69%)
> >
> > The changes also significantly reduce spreading of allocations for small /
> > moderately sized files. I'm not able to measure a performance difference
> > resulting from this, but on eMMC storage this seems to be the main culprit
> > of reduced performance. Untarring of the raspberry-pi archive touches the
> > following numbers of groups:
> >
> >            mb_optimize_scan=0    mb_optimize_scan=1    patched
> > groups             4                     22               7
> >
> > To achieve this I have added two more changes on top of v1 - patches 4 and 5.
> > Patch 4 makes sure we use locality group preallocation even for files that are
> > not likely to grow anymore (previously we disabled all preallocations for
> > such files; however, locality group preallocation still makes a lot of sense
> > for such files). This patch reduced the spread of small file allocations, but
> > larger file allocations were still spread significantly because they avoid
> > locality group preallocation and, as they are not power-of-two in size, they
> > also immediately start with the cr=1 scan.
> > To address that I've changed the data
> > structure for looking up the best block group to allocate from (see patch 5
> > for details).
> >
> > Stefan, can you please test whether these patches fix the problem for you as
> > well? Comments & review welcome.
> >
> > 								Honza
> >
> > Previous versions:
> > Link: http://lore.kernel.org/r/20220823134508.27854-1-jack@suse.cz # v1
>
> Hi Jan,
>
> Thanks for the patch. I tested this series on my raspberry pi and I can
> confirm that the regression is no longer present, with both
> mb_optimize_scan=0 and =1 taking a similar amount of time to untar. The
> allocation spread I'm seeing is as follows:
>
>   mb_optimize_scan=0: 10
>   mb_optimize_scan=1: 17 (check graphs for more details)
>
> For mb_optimize_scan=1, I also compared the spread of locality group PA
> eligible files (<64KB) and inode PA files. The results can be found
> here:
>
> mb_optimize_scan=0:
> https://github.com/OjaswinM/mbopt-bug/blob/master/grpahs/patchv2-mbopt0.png
> mb_optimize_scan=1:
> https://github.com/OjaswinM/mbopt-bug/blob/master/grpahs/patchv2.png
> mb_optimize_scan=1 (lg pa only):
> https://github.com/OjaswinM/mbopt-bug/blob/master/grpahs/patchv2-lgs.png
> mb_optimize_scan=1 (inode pa only):
> https://github.com/OjaswinM/mbopt-bug/blob/master/grpahs/patchv2-i.png
>
> The smaller files are now closer together due to the changes to the
> locality group PA logic. Most of the spread is now coming from mid to
> large files.
>
> To test this further, I created a tar of 2000 100KB files to see if
> there is any performance drop due to the higher spread of these files, and
> noticed that although the spread is slightly higher (5 BGs vs 9), we don't
> see a performance drop while untarring with mb_optimize_scan=1.
>
> Although we still have some spread, I think we have brought it down to a
> much more manageable level, and that, combined with the improvements to
> cr=1 allocation, has given us a good performance improvement.
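As a side note for anyone wanting to reproduce these spread numbers: a minimal
sketch (my own illustration, not part of the series) that maps physical block
numbers, as reported by `filefrag -v` for each untarred file, to ext4 block
group numbers. It assumes the mkfs.ext4 default of 32768 blocks per group
(4k block size, no bigalloc); check dumpe2fs output for the real value on
your filesystem.

```python
# Sketch: map physical block numbers (e.g. the "physical_offset" column of
# `filefrag -v`) to ext4 block group numbers, so the number of distinct
# groups an untarred tree touches can be counted.
# Assumption: default layout of 32768 blocks per group (4k blocks).

BLOCKS_PER_GROUP = 32768  # assumption: mkfs.ext4 defaults

def groups_touched(physical_blocks, blocks_per_group=BLOCKS_PER_GROUP):
    """Return the set of block groups covered by the given physical blocks."""
    return {blk // blocks_per_group for blk in physical_blocks}

# Example: three extent start blocks spread over two block groups
extents = [34000, 34816, 70000]
print(sorted(groups_touched(extents)))  # -> [1, 2]
```

Feeding every extent of every extracted file through this and taking the union
gives the per-run "groups touched" figure quoted above.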
>
> Feel free to add:
> Tested-by: Ojaswin Mujoo

Thanks a lot for the thorough testing!

								Honza
--
Jan Kara
SUSE Labs, CR
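PS for readers following the thread: a toy model (my sketch, not code from the
series or the kernel) of the preallocation choice that patch 4 adjusts. Small
requests go through the shared locality group preallocation so they pack
together; larger ones use per-inode preallocation. The 64KB cutoff mirrors the
"<64KB lg PA eligible" threshold mentioned above; the kernel derives the real
cutoff from the mb_stream_req tunable measured in filesystem blocks.

```python
# Toy model of the locality-group vs inode preallocation decision.
# Assumption: 64KB cutoff, matching the "<64KB" eligibility used in the
# measurements above (mb_stream_req = 16 blocks at 4k block size).

LG_PA_LIMIT = 64 * 1024  # assumption: 64KB eligibility cutoff

def pa_type(alloc_size_bytes):
    """Return which preallocation pool a request of this size would use."""
    return "locality_group" if alloc_size_bytes < LG_PA_LIMIT else "inode"
```

This is why, in the graphs above, the small-file spread collapses once patch 4
keeps such files on the locality group path.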