From: Mel Gorman
To: Andrew Morton
Cc: Chuck Lever, Jesper Dangaard Brouer, Thomas Gleixner,
    Sebastian Andrzej Siewior, Peter Zijlstra, Ingo Molnar, Michal Hocko,
    Vlastimil Babka, Linux-MM, Linux-RT-Users, LKML, Mel Gorman
Subject: [PATCH 0/9 v6] Use local_lock for pcp protection and reduce stat overhead
Date: Wed, 12 May 2021 10:54:49 +0100
Message-Id: <20210512095458.30632-1-mgorman@techsingularity.net>

Changelog since v5
o Rebase onto 5.13-rc1, on top of Hugh's changes to mm/vmstat.c, which
  means the series no longer has pre-requisites from Andrew's tree
o Not mentioned in the changelogs, but Sebastian (cc'd) indicated that the
  series passed tests replacing the local_lock implementation in the
  preempt-rt tree

Changelog since v4
o Dropped local_lock embed patch due to complexity
o Fix !NUMA build
o Avoid adding pages with mt >= MIGRATE_PCPTYPES to a non-existent
  per-cpu list

Changelog since v3
o Preserve NUMA_* counters after CPU hotplug
o Drop "mm/page_alloc: Remove duplicate checks if migratetype should be
  isolated"
o Add micro-optimisation tracking PFN during free_unref_page_list
o Add Acks

Changelog since v2
o Fix zonestats initialisation
o Merged memory hotplug fix separately
o Embed local_lock within per_cpu_pages

This series no longer has pre-requisites from Andrew's tree and is also
available at

  git://git.kernel.org/pub/scm/linux/kernel/git/mel/linux.git mm-percpu-local_lock-v6r4

The PCP (per-cpu page allocator in page_alloc.c) shares locking
requirements with vmstat and the zone lock, which is inconvenient and
causes some issues. First, the PCP lists and vmstat share the same
per-cpu space, meaning that vmstat updates can dirty cache lines holding
per-cpu lists across CPUs unless padding is used. Second, PREEMPT_RT does
not want IRQs disabled in the page allocator for too long. This series
splits the locking requirements apart, uses lock types more suitable for
PREEMPT_RT, reduces the time when special locking is required for stats
and reduces the time when IRQs need to be disabled on !PREEMPT_RT
kernels.

Why local_lock? PREEMPT_RT considers the following sequence to be unsafe,
as documented in Documentation/locking/locktypes.rst

  local_irq_disable();
  spin_lock(&lock);

The PCP allocator has this sequence for rmqueue_pcplist (local_irq_save)
-> __rmqueue_pcplist -> rmqueue_bulk (spin_lock). While it is possible to
separate this out, it generally means there are points where IRQs are
enabled only to be disabled again immediately. To prevent a migration and
the per-cpu pointer going stale, migrate_disable is also needed, which is
a custom construct similar to, but worse than, local_lock. Furthermore,
on PREEMPT_RT it is undesirable to leave IRQs disabled for too long. By
converting to local_lock, which disables migration on PREEMPT_RT, the
locking requirements can be separated and the protections for the PCP,
stats and the zone lock can start moving to PREEMPT_RT-safe equivalents
(a simplified sketch of the conversion follows below). As a bonus,
local_lock also means that PROVE_LOCKING does something useful.

After that, it becomes obvious that zone_statistics incurs too much
overhead and leaves IRQs disabled for longer than necessary on
!PREEMPT_RT kernels. zone_statistics uses perfectly accurate counters
that require IRQs to be disabled for parallel RMW sequences when
inaccurate counters, like vm_events, would do (see the second sketch
below). The series makes the NUMA statistics (NUMA_HIT and friends)
inaccurate counters that then require no special protection on
!PREEMPT_RT. The bulk page allocator can then do stat updates in bulk
with IRQs enabled, which should improve its efficiency. Technically, this
could have been done without the local_lock and vmstat conversion work;
the order simply reflects the timing of when the different series were
implemented.

Finally, there are places that conflate IRQs being disabled for the PCP
with the IRQ-safe zone spinlock. The remainder of the series reduces the
scope of what is protected by disabled IRQs on !PREEMPT_RT kernels. By
the end of the series, page_alloc.c does not call local_irq_save at all,
so the locking scope is a bit clearer. The one exception is that
modifying NR_FREE_PAGES still happens in places where it is known that
IRQs are disabled, as this is harmless for PREEMPT_RT and it would be
expensive to split the locking there.
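To make the shape of that conversion concrete, the sketch below shows
roughly what a PCP fast path looks like once it is protected by a
local_lock instead of a bare local_irq_save(). It is illustrative only:
the structure and function names (pagesets, pcp_alloc_sketch) are
simplified stand-ins and the list manipulation is elided into a comment,
but the locking pattern is the point.

  /* Sketch only: per-cpu lists protected by a local_lock. On !PREEMPT_RT
   * this still disables IRQs; on PREEMPT_RT it becomes a per-CPU lock
   * that only disables migration, keeping IRQ-off sections short.
   */
  #include <linux/local_lock.h>
  #include <linux/percpu.h>
  #include <linux/mmzone.h>

  struct pagesets {
          local_lock_t lock;
  };
  static DEFINE_PER_CPU(struct pagesets, pagesets) = {
          .lock = INIT_LOCAL_LOCK(lock),
  };

  static struct page *pcp_alloc_sketch(struct zone *zone)
  {
          struct per_cpu_pages *pcp;
          struct page *page = NULL;
          unsigned long flags;

          /* Previously: local_irq_save(flags); */
          local_lock_irqsave(&pagesets.lock, flags);
          pcp = this_cpu_ptr(zone->per_cpu_pageset);
          /* ... take @page from the pcp lists, refilling from the zone
           * freelist under the IRQ-safe zone->lock if the list is empty ...
           */
          local_unlock_irqrestore(&pagesets.lock, flags);
          return page;
  }

Because the lock is now visible to lockdep, PROVE_LOCKING can track it
rather than treating the section as an opaque IRQ-disabled region.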
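The statistics side can be sketched in a similar way. The contrast below
is between an accurate per-cpu differential, which is only safe to update
with IRQs disabled, and a relaxed event-style counter that needs no extra
protection on !PREEMPT_RT. The variable and helper names are made up for
the example; this is not the series' actual code.

  #include <linux/percpu.h>
  #include <linux/types.h>

  static DEFINE_PER_CPU(unsigned long, numa_hit_events);  /* illustrative */

  /*
   * Accurate style: a plain read-modify-write on a per-cpu differential
   * is only safe if the caller has IRQs disabled, otherwise an interrupt
   * can race with the update or with folding the diff into the global
   * counter.
   */
  static void count_numa_hit_accurate(s8 *pcp_diff)
  {
          (*pcp_diff)++;                  /* caller must have IRQs off */
  }

  /*
   * Relaxed style: this_cpu_inc() is safe against IRQs and preemption on
   * its own, so no extra protection is needed. The summed value may be
   * transiently stale, which is acceptable for event-like statistics
   * such as NUMA_HIT.
   */
  static void count_numa_hit_relaxed(void)
  {
          this_cpu_inc(numa_hit_events);
  }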
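The benchmark below exercises the array variant of the bulk page
allocator. Purely for orientation, a caller drives that interface roughly
as follows; the wrapper, the batch size and the error handling are
hypothetical and not taken from the series.

  #include <linux/errno.h>
  #include <linux/gfp.h>
  #include <linux/mm.h>

  #define NR_BULK_PAGES 32                /* arbitrary batch size */

  /* Hypothetical caller: populate any NULL slots in @pages in one call. */
  static int fill_page_array(struct page **pages)
  {
          unsigned long populated;

          /* Already-populated (non-NULL) entries are skipped. */
          populated = alloc_pages_bulk_array(GFP_KERNEL, NR_BULK_PAGES, pages);
          if (populated < NR_BULK_PAGES)
                  return -ENOMEM;         /* or retry / fall back to alloc_page() */
          return 0;
  }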
No performance data is included because, despite the overhead of the
stats, it is within the noise for most workloads on !PREEMPT_RT. However,
Jesper Dangaard Brouer ran a page allocation microbenchmark on an
Intel(R) Xeon(R) E5-1650 v4 @ 3.60GHz CPU against the first version of
this series. Focusing on the array variant of the bulk page allocator
reveals the following:

ARRAY variant: time_bulk_page_alloc_free_array: step=bulk size

  Bulk size     Baseline        Patched
    1           56.383          54.225  (+3.83%)
    2           40.047          35.492  (+11.38%)
    3           37.339          32.643  (+12.58%)
    4           35.578          30.992  (+12.89%)
    8           33.592          29.606  (+11.87%)
   16           32.362          28.532  (+11.85%)
   32           31.476          27.728  (+11.91%)
   64           30.633          27.252  (+11.04%)
  128           30.596          27.090  (+11.46%)

While this is a positive outcome, the series is more likely to be
interesting to the RT people in terms of getting parts of the PREEMPT_RT
tree into mainline.

 drivers/base/node.c    |  18 +--
 include/linux/mmzone.h |  31 +++--
 include/linux/vmstat.h |  65 ++++++-----
 mm/mempolicy.c         |   2 +-
 mm/page_alloc.c        | 255 ++++++++++++++++++++++++-----------------
 mm/vmstat.c            | 246 +++++++++++++++++----------------------
 6 files changed, 324 insertions(+), 293 deletions(-)

-- 
2.26.2