From: "Huang, Ying"
To: Johannes Weiner
Cc: Andrew Morton, Vlastimil Babka, Mel Gorman, Miaohe Lin,
 Kefeng Wang, Zi Yan, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/6] mm: page_alloc: remove pcppage migratetype caching
References: <20230911195023.247694-1-hannes@cmpxchg.org>
 <20230911195023.247694-2-hannes@cmpxchg.org>
 <87y1gsrx32.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <20230927145115.GA365513@cmpxchg.org>
Date: Sat, 30 Sep 2023 12:26:01 +0800
In-Reply-To: <20230927145115.GA365513@cmpxchg.org> (Johannes Weiner's message
 of "Wed, 27 Sep 2023 10:51:15 -0400")
Message-ID: <87pm20p9ra.fsf@yhuang6-desk2.ccr.corp.intel.com>
Johannes Weiner writes:

> On Wed, Sep 27, 2023 at 01:42:25PM +0800, Huang, Ying wrote:
>> Johannes Weiner writes:
>>
>> > The idea behind the cache is to save get_pageblock_migratetype()
>> > lookups during bulk freeing. A microbenchmark suggests this isn't
>> > helping, though. The pcp migratetype can get stale, which means that
>> > bulk freeing has an extra branch to check if the pageblock was
>> > isolated while on the pcp.
>> >
>> > While the variance overlaps, the cache write and the branch seem to
>> > make this a net negative. The following test allocates and frees
>> > batches of 10,000 pages (~3x the pcp high marks to trigger flushing):
>> >
>> > Before:
>> >        8,668.48 msec task-clock        #  99.735 CPUs utilized    ( +- 2.90% )
>> >              19      context-switches  #   4.341 /sec             ( +- 3.24% )
>> >               0      cpu-migrations    #   0.000 /sec
>> >          17,440      page-faults       #   3.984 K/sec            ( +- 2.90% )
>> >  41,758,692,473      cycles            #   9.541 GHz              ( +- 2.90% )
>> > 126,201,294,231      instructions      #   5.98  insn per cycle   ( +- 2.90% )
>> >  25,348,098,335      branches          #   5.791 G/sec            ( +- 2.90% )
>> >      33,436,921      branch-misses     #   0.26% of all branches  ( +- 2.90% )
>> >
>> >       0.0869148 +- 0.0000302 seconds time elapsed  ( +- 0.03% )
>> >
>> > After:
>> >        8,444.81 msec task-clock        #  99.726 CPUs utilized    ( +- 2.90% )
>> >              22      context-switches  #   5.160 /sec             ( +- 3.23% )
>> >               0      cpu-migrations    #   0.000 /sec
>> >          17,443      page-faults       #   4.091 K/sec            ( +- 2.90% )
>> >  40,616,738,355      cycles            #   9.527 GHz              ( +- 2.90% )
>> > 126,383,351,792      instructions      #   6.16  insn per cycle   ( +- 2.90% )
>> >  25,224,985,153      branches          #   5.917 G/sec            ( +- 2.90% )
>> >      32,236,793      branch-misses     #   0.25% of all branches  ( +- 2.90% )
>> >
>> >       0.0846799 +- 0.0000412 seconds time elapsed  ( +- 0.05% )
>> >
>> > A side effect is that this also ensures that pages whose pageblock
>> > gets stolen while on the pcplist end up on the right freelist, and
>> > we don't perform potentially type-incompatible buddy merges (or skip
>> > merges when we shouldn't), which is likely beneficial to long-term
>> > fragmentation management, although the effects would be harder to
>> > measure. Settle for simpler and faster code as justification here.
>>
>> I suspected that the PCP allocating/freeing path may be influenced
>> (that is, when the allocating/freeing batch is smaller than the PCP
>> high mark), so I tested single-process will-it-scale/page_fault1
>> with sysctl percpu_pagelist_high_fraction=8, so that pages are
>> allocated/freed from/to the PCP only. The test results are as
>> follows:
>>
>> Before:
>> will-it-scale.1.processes                        618364.3 (+- 0.075%)
>> perf-profile.children.get_pfnblock_flags_mask        0.13 (+- 9.350%)
>>
>> After:
>> will-it-scale.1.processes                        616512.0 (+- 0.057%)
>> perf-profile.children.get_pfnblock_flags_mask        0.41 (+- 22.44%)
>>
>> The change isn't large: -0.3%. Perf profiling shows that the cycles%
>> of get_pfnblock_flags_mask() increases.
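(For context, the page_fault1 workload used above is essentially the
loop below. This is a simplified sketch of what will-it-scale's
page_fault1 testcase does, not its verbatim source; the 128 MiB region
size and the endless loop are illustrative choices.)

#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

#define MEMSIZE (128UL * 1024 * 1024)	/* illustrative; the real test differs */

int main(void)
{
	long page_size = sysconf(_SC_PAGESIZE);
	unsigned long iterations = 0;

	for (;;) {
		char *buf = mmap(NULL, MEMSIZE, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (buf == MAP_FAILED)
			exit(1);

		/* Touch one byte per page: each write triggers a page
		   fault that allocates a page, mostly from the pcplists. */
		for (unsigned long off = 0; off < MEMSIZE; off += page_size)
			buf[off] = 1;

		/* munmap() frees the pages back through the pcplists;
		   the reported score is iterations per unit of time. */
		munmap(buf, MEMSIZE);
		iterations++;
	}
}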
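(Likewise, for readers without the patch at hand, the caching pattern
being removed looks roughly like the sketch below. The helpers
get_pcppage_migratetype(), set_pcppage_migratetype(),
get_pfnblock_migratetype(), get_pageblock_migratetype(), and
__free_one_page() are real mm/page_alloc.c names of that era; the
sketch_* functions are condensed illustrations, not the kernel code
verbatim.)

/* Pre-patch: when a page enters a pcplist, its pageblock's migratetype
 * is looked up once and cached in the otherwise-unused page->index. */
static inline int get_pcppage_migratetype(struct page *page)
{
	return page->index;
}

static inline void set_pcppage_migratetype(struct page *page, int migratetype)
{
	page->index = migratetype;
}

/* Free fast path: one pageblock lookup, result cached -- this is the
 * cache write the microbenchmark above measures. */
static void sketch_free_unref_page(struct page *page, unsigned long pfn)
{
	int migratetype = get_pfnblock_migratetype(page, pfn);

	set_pcppage_migratetype(page, migratetype);
	/* ... add the page to the per-cpu list ... */
}

/* Bulk free: reuse the cached value, which may have gone stale if the
 * pageblock was isolated while the page sat on the pcplist -- hence
 * the extra branch (and occasional second lookup). */
static void sketch_free_pcppage(struct zone *zone, struct page *page,
				unsigned long pfn, bool isolated_pageblocks)
{
	int mt = get_pcppage_migratetype(page);

	if (unlikely(isolated_pageblocks))
		mt = get_pageblock_migratetype(page);

	__free_one_page(page, pfn, zone, 0, mt, FPI_NONE);
}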
> Ah, this is going through the free_unref_page_list() path that
> Vlastimil had pointed out as well. I made another change on top that
> eliminates the second lookup. After that, both pcp fast paths have
> the same number of lookups as before: 1. This fixes the regression
> for me.
>
> Would you mind confirming this as well?

I have done more tests with the series and the addon patches. The
test results are as follows:

base
  perf-profile.children.get_pfnblock_flags_mask       0.15 (+- 32.62%)
  will-it-scale.1.processes                       618621.7 (+-  0.18%)

mm: page_alloc: remove pcppage migratetype caching
  perf-profile.children.get_pfnblock_flags_mask       0.40 (+- 21.55%)
  will-it-scale.1.processes                       616350.3 (+-  0.27%)

mm: page_alloc: fix up block types when merging compatible blocks
  perf-profile.children.get_pfnblock_flags_mask       0.36 (+-  8.36%)
  will-it-scale.1.processes                       617121.0 (+-  0.17%)

mm: page_alloc: move free pages when converting block during isolation
  perf-profile.children.get_pfnblock_flags_mask       0.36 (+- 15.10%)
  will-it-scale.1.processes                       615578.0 (+-  0.18%)

mm: page_alloc: fix move_freepages_block() range error
  perf-profile.children.get_pfnblock_flags_mask       0.36 (+- 12.78%)
  will-it-scale.1.processes                       615364.7 (+-  0.27%)

mm: page_alloc: fix freelist movement during block conversion
  perf-profile.children.get_pfnblock_flags_mask       0.36 (+- 10.52%)
  will-it-scale.1.processes                       617834.8 (+-  0.52%)

mm: page_alloc: consolidate free page accounting
  perf-profile.children.get_pfnblock_flags_mask       0.39 (+-  8.27%)
  will-it-scale.1.processes                       621000.0 (+-  0.13%)

mm: page_alloc: close migratetype race between freeing and stealing
  perf-profile.children.get_pfnblock_flags_mask       0.37 (+-  5.87%)
  will-it-scale.1.processes                       618378.8 (+-  0.17%)

mm: page_alloc: optimize free_unref_page_list()
  perf-profile.children.get_pfnblock_flags_mask       0.20 (+- 14.96%)
  will-it-scale.1.processes                       618136.3 (+-  0.16%)

It seems that the will-it-scale score is influenced by some other
factors too, but in any case the series plus the addon patches
restores the will-it-scale score. And the cycles% of
get_pfnblock_flags_mask() is almost fully restored by the final patch
(mm: page_alloc: optimize free_unref_page_list()).

Feel free to add my "Tested-by" for these patches.

--
Best Regards,
Huang, Ying