From: Mel Gorman
To: Andrew Morton
Cc: Nicolas Saenz Julienne, Marcelo Tosatti, Vlastimil Babka, Michal Hocko, LKML, Linux-MM, Mel Gorman
Subject: [PATCH 0/6] Drain remote per-cpu directly v3
Date: Thu, 12 May 2022 09:50:37 +0100
Message-Id: <20220512085043.5234-1-mgorman@techsingularity.net>
X-Mailing-List: linux-kernel@vger.kernel.org
Changelog since v2
o More conversions from page->lru to page->[pcp_list|buddy_list]
o Additional test results in changelogs

Changelog since v1
o Fix unsafe RT locking scheme
o Use spin_trylock on UP PREEMPT_RT

This series has the same intent as Nicolas' series "mm/page_alloc: Remote per-cpu lists drain support" -- avoid interference with a high-priority task due to a workqueue item draining per-cpu page lists. While many workloads can tolerate a brief interruption, it may cause a real-time task running on a NOHZ_FULL CPU to miss a deadline and, at minimum, the draining is non-deterministic.

Currently an IRQ-safe local_lock protects the page allocator per-cpu lists. The local_lock on its own prevents migration, and the IRQ disabling protects against corruption due to an interrupt arriving while a page allocation is in progress. This locking is inherently unsafe for remote access unless the CPU is hot-removed.

This series adjusts the locking. A spinlock is added to struct per_cpu_pages to protect the list contents, while local_lock_irq continues to prevent migration and IRQ reentry. This allows a remote CPU to safely drain a remote per-cpu list.

This is a partial series. Follow-on work should allow the local_irq_save to be converted to a local_irq to avoid IRQs being disabled/enabled in most cases. Consequently, there are some TODO comments highlighting the places that would change if local_irq were used. However, there are enough corner cases that it deserves a series of its own, separated by one kernel release, and the priority right now is to avoid interference with high-priority tasks.

Patch 1 is a cosmetic patch to clarify when page->lru is storing buddy pages and when it is storing per-cpu pages.
Patch 2 shrinks per_cpu_pages to make room for a spin lock. Strictly speaking this is not necessary, but it avoids per_cpu_pages consuming another cache line.

Patch 3 is a preparation patch to avoid code duplication.

Patch 4 is a simple micro-optimisation that improves code flow necessary for a later patch to avoid code duplication.

Patch 5 uses a spin_lock to protect the per_cpu_pages contents while still relying on local_lock to prevent migration, stabilise the pcp lookup and prevent IRQ reentrancy.

Patch 6 remote drains per-cpu pages directly instead of using a workqueue.

 include/linux/mm_types.h |   5 +
 include/linux/mmzone.h   |  12 +-
 mm/page_alloc.c          | 348 +++++++++++++++++++++++++--------------
 3 files changed, 233 insertions(+), 132 deletions(-)

-- 
2.34.1