From: Mel Gorman <mgorman@techsingularity.net>
To: Andrew Morton
Cc: Nicolas Saenz Julienne, Marcelo Tosatti, Vlastimil Babka,
    Michal Hocko, Hugh Dickins, LKML, Linux-MM, Mel Gorman
Subject: [PATCH v4 00/7] Drain remote per-cpu directly
Date: Mon, 13 Jun 2022 13:56:15 +0100
Message-Id: <20220613125622.18628-1-mgorman@techsingularity.net>
This replaces the existing version in mm-unstable. The biggest difference
is the final patch, which replaces local_lock entirely. The other changes
are minor fixes reported by Hugh and Vlastimil.

Changelog since v3
o Checkpatch fixes from mm-unstable (akpm)
o Replace local_lock with spinlock (akpm)
o Remove IRQ-disabled check in free_unref_page_list as it triggers a
  false positive (hughd)
o Take an unlikely check out of the rmqueue fast path (vbabka)

Some setups, notably NOHZ_FULL CPUs, may be running realtime or
latency-sensitive applications that cannot tolerate interference due to
per-cpu drain work queued by __drain_all_pages(). Introduce a new
mechanism to remotely drain the per-cpu lists. It is made possible by
remotely locking 'struct per_cpu_pages' new per-cpu spinlocks. This has
two advantages: the time to drain is more predictable and other
unrelated tasks are not interrupted.

This series has the same intent as Nicolas' series "mm/page_alloc:
Remote per-cpu lists drain support" -- avoid interference with a
high-priority task due to a workqueue item draining per-cpu page lists.
While many workloads can tolerate a brief interruption, it may cause a
real-time task running on a NOHZ_FULL CPU to miss a deadline and, at
minimum, the draining is non-deterministic.

Currently an IRQ-safe local_lock protects the page allocator per-cpu
lists. The local_lock on its own prevents migration and the IRQ
disabling protects against corruption due to an interrupt arriving
while a page allocation is in progress.

This series adjusts the locking. A spinlock is added to struct
per_cpu_pages to protect the list contents, and local_lock_irq is
ultimately replaced by just the spinlock in the final patch. This
allows a remote CPU to safely drain a remote per-cpu list. Follow-on
work should allow the local_irq_save to be converted to a local_irq to
avoid IRQs being disabled/enabled in most cases. (An illustrative
sketch of the locking pattern is appended at the end of this mail.)

Patch 1 is a cosmetic patch to clarify when page->lru is storing buddy
	pages and when it is storing per-cpu pages.

Patch 2 shrinks per_cpu_pages to make room for a spin lock. Strictly
	speaking this is not necessary but it avoids per_cpu_pages
	consuming another cache line.

Patch 3 is a preparation patch to avoid code duplication.

Patch 4 is a simple micro-optimisation that improves code flow
	necessary for a later patch to avoid code duplication.

Patch 5 uses a spin_lock to protect the per_cpu_pages contents while
	still relying on local_lock to prevent migration, stabilise the
	pcp lookup and prevent IRQ reentrancy.

Patch 6 drains remote per-cpu pages directly instead of using a
	workqueue.

Patch 7 uses a normal spinlock instead of local_lock for remote
	draining.

 include/linux/mm_types.h |   5 +
 include/linux/mmzone.h   |  12 +-
 mm/page_alloc.c          | 404 ++++++++++++++++++++++++---------------
 3 files changed, 266 insertions(+), 155 deletions(-)

-- 
2.35.3
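
To make the locking change concrete, below is a minimal user-space
sketch of the pattern described above: each CPU's per-cpu structure
carries its own spinlock, so any CPU can drain another CPU's list by
taking that lock directly instead of queueing drain work on the remote
CPU. This is an illustration only, not the kernel implementation: the
names pcp_slot and drain_remote are invented for the sketch, and a
pthread spinlock stands in for the spinlock added to struct
per_cpu_pages.

/* Build with: gcc -pthread sketch.c */
#include <pthread.h>
#include <stdio.h>

#define NR_CPUS 4

/* Hypothetical stand-in for struct per_cpu_pages with its new lock. */
struct pcp_slot {
	pthread_spinlock_t lock;	/* protects 'count' (the pcp lists) */
	int count;			/* pages on this CPU's per-cpu list */
};

static struct pcp_slot slots[NR_CPUS];

/*
 * Any CPU may drain any other CPU's list by acquiring that CPU's lock
 * directly -- no IPI or workqueue item on the remote CPU is required,
 * so a NOHZ_FULL CPU running a latency-sensitive task is not disturbed.
 */
static void drain_remote(int cpu)
{
	struct pcp_slot *pcp = &slots[cpu];

	pthread_spin_lock(&pcp->lock);
	printf("drained %d pages from cpu %d\n", pcp->count, cpu);
	pcp->count = 0;
	pthread_spin_unlock(&pcp->lock);
}

int main(void)
{
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		pthread_spin_init(&slots[cpu].lock, PTHREAD_PROCESS_PRIVATE);
		slots[cpu].count = (cpu + 1) * 8;	/* pretend pages accumulated */
	}

	/* The draining CPU walks every other CPU's structure itself. */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		drain_remote(cpu);

	return 0;
}

The contrast with the workqueue approach is that __drain_all_pages() no
longer has to schedule work on each remote CPU and wait for it to run,
which is the interference this series is trying to avoid.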