From: Mel Gorman <mgorman@techsingularity.net>
To: Nicolas Saenz Julienne
Cc: Marcelo Tosatti, Vlastimil Babka, Michal Hocko, LKML, Linux-MM, Mel Gorman
Subject: [RFC PATCH 0/6] Drain remote per-cpu directly
Date: Wed, 20 Apr 2022 10:59:00 +0100
Message-Id: <20220420095906.27349-1-mgorman@techsingularity.net>
This series has the same intent as Nicolas' series "mm/page_alloc: Remote per-cpu lists drain support" -- avoid interference with a high-priority task due to a workqueue item draining per-cpu page lists. While many workloads can tolerate a brief interruption, it may cause a real-time task running on a NOHZ_FULL CPU to miss a deadline, and at minimum the draining is non-deterministic.

Currently an IRQ-safe local_lock protects the page allocator per-cpu lists. The local_lock on its own prevents migration, and the IRQ disabling protects against corruption due to an interrupt arriving while a page allocation is in progress. This locking is inherently unsafe for remote access unless the CPU is hot-removed.

This series adjusts the locking. A spin-lock is added to struct per_cpu_pages to protect the list contents, while local_lock_irq continues to prevent migration and IRQ reentry. This allows a remote CPU to safely drain a remote per-cpu list.

This is a partial series. Follow-on work would allow the local_irq_save to be converted to a local_irq to avoid IRQs being disabled/enabled in most cases. However, there are enough corner cases that it deserves a series of its own, separated by one kernel release, and the priority right now is to avoid interference with high-priority tasks.

Patch 1 is a cosmetic patch clarifying when page->lru is storing buddy pages and when it is storing per-cpu pages.

Patch 2 shrinks per_cpu_pages to make room for a spin lock. Strictly speaking this is not necessary, but it avoids per_cpu_pages consuming another cache line.

Patch 3 is a preparation patch to avoid code duplication.

Patch 4 is a simple micro-optimisation that improves code flow necessary for a later patch to avoid code duplication.
Patch 5 uses a spin_lock to protect the per_cpu_pages contents while still relying on local_lock to prevent migration, stabilise the pcp lookup and prevent IRQ reentrancy.

Patch 6 drains remote per-cpu pages directly instead of using a workqueue.

 include/linux/mm_types.h |   5 +
 include/linux/mmzone.h   |  12 +-
 mm/page_alloc.c          | 333 ++++++++++++++++++++++++---------------
 3 files changed, 222 insertions(+), 128 deletions(-)

-- 
2.34.1