From: Suren Baghdasaryan
Date: Tue, 1 Mar 2022 13:12:19 -0800
Subject: Re: [RFC 1/1] mm: page_alloc: replace mm_percpu_wq with kthreads in drain_all_pages
To: Petr Mladek
Cc: Andrew Morton, Johannes Weiner, Michal Hocko, Peter Zijlstra,
    Roman Gushchin, Shakeel Butt, Minchan Kim, Tim Murray, linux-mm,
    LKML, kernel-team
In-Reply-To: <20220301122520.GB23924@pathway.suse.cz>
References: <20220225012819.1807147-1-surenb@google.com>
    <20220301122520.GB23924@pathway.suse.cz>

On Tue, Mar 1, 2022 at 4:25 AM Petr Mladek wrote:
>
> On Thu 2022-02-24 17:28:19, Suren Baghdasaryan wrote:
> > Sending as an RFC to confirm if this is the right direction and to
> > clarify if other tasks currently executed on mm_percpu_wq should
> > also be moved to kthreads. The patch seems stable in testing, but I
> > want to collect more performance data before submitting a non-RFC
> > version.
> >
> > Currently drain_all_pages uses mm_percpu_wq to drain pages from the
> > pcp list during direct reclaim. The tasks on a workqueue can be
> > delayed by other tasks in the workqueues using the same per-cpu
> > worker pool. This results in sizable delays in drain_all_pages when
> > cpus are highly contended.
> > Memory management operations designed to relieve memory pressure
> > should not be blocked by other tasks, especially if the task in
> > direct reclaim has higher priority than the blocking tasks.
> > Replace the usage of mm_percpu_wq with per-cpu low priority FIFO
> > kthreads to execute draining tasks.
> >
> > Suggested-by: Petr Mladek
> > Signed-off-by: Suren Baghdasaryan
>
> The patch looks good to me. See a few comments below about things I
> was in doubt about. But I do not see any real problem with this
> approach.

Thanks for the review, Petr. One question inline. Other than that, I
would like to check two things:
1. Whether using low priority FIFO for these kthreads (sketched below)
is warranted. From
https://lore.kernel.org/all/CAEe=Sxmow-jx60cDjFMY7qi7+KVc+BT++BTdwC5+G9E=1soMmQ@mail.gmail.com/#t
my understanding was that we want this work to be done by an RT
kthread_worker, but maybe that's not appropriate here?
2. Whether we want to move any other work done on mm_percpu_wq
(vmstat_work, lru_add_drain_all) to these kthreads.

If what I have currently is ok, I'll post the first version.
Thanks,
Suren.
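A minimal sketch of the pattern in question, for readers skimming the
thread: a kthread_worker pinned to a CPU and raised to the lowest
SCHED_FIFO priority, with drain work queued to it instead of to
mm_percpu_wq. All names below are hypothetical; this is not code from
the patch, only an illustration of the APIs it builds on.

  /*
   * Sketch only -- hypothetical names, not part of the patch below.
   */
  #include <linux/err.h>
  #include <linux/kthread.h>
  #include <linux/sched.h>

  static struct kthread_worker *drain_worker_sketch;
  static struct kthread_work drain_work_sketch;

  static void drain_fn_sketch(struct kthread_work *work)
  {
          /* drain this CPU's pcp lists here */
  }

  static int start_drain_worker_sketch(unsigned int cpu)
  {
          drain_worker_sketch = kthread_create_worker_on_cpu(cpu, 0,
                                                  "pg_drain/%u", cpu);
          if (IS_ERR(drain_worker_sketch))
                  return PTR_ERR(drain_worker_sketch);

          /*
           * Lowest RT priority: preempts all SCHED_NORMAL tasks but
           * does not compete with higher-priority RT tasks.
           */
          sched_set_fifo_low(drain_worker_sketch->task);
          kthread_init_work(&drain_work_sketch, drain_fn_sketch);
          return 0;
  }

  static void request_drain_sketch(void)
  {
          /* what a caller such as drain_all_pages() would do per CPU */
          kthread_queue_work(drain_worker_sketch, &drain_work_sketch);
          kthread_flush_work(&drain_work_sketch);
  }

sched_set_fifo_low() keeps the worker above every SCHED_NORMAL task
while leaving room for genuinely higher-priority RT work, which is the
trade-off raised in question 1 above.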
> > ---
> >  mm/page_alloc.c | 84 ++++++++++++++++++++++++++++++++++++++++----------
> >  1 file changed, 70 insertions(+), 14 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 3589febc6d31..c9ab2cf4b05b 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -2209,6 +2210,58 @@ _deferred_grow_zone(struct zone *zone, unsigned int order)
> >
> >  #endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */
> >
> > +static void drain_local_pages_func(struct kthread_work *work);
> > +
> > +static int alloc_drain_worker(unsigned int cpu)
> > +{
> > +        struct pcpu_drain *drain;
> > +
> > +        mutex_lock(&pcpu_drain_mutex);
> > +        drain = per_cpu_ptr(&pcpu_drain, cpu);
> > +        drain->worker = kthread_create_worker_on_cpu(cpu, 0, "pg_drain/%u", cpu);
> > +        if (IS_ERR(drain->worker)) {
> > +                drain->worker = NULL;
> > +                pr_err("Failed to create pg_drain/%u\n", cpu);
> > +                goto out;
> > +        }
> > +        /* Ensure the thread is not blocked by normal priority tasks */
> > +        sched_set_fifo_low(drain->worker->task);
> > +        kthread_init_work(&drain->work, drain_local_pages_func);
> > +out:
> > +        mutex_unlock(&pcpu_drain_mutex);
> > +
> > +        return 0;
> > +}
> > +
> > +static int free_drain_worker(unsigned int cpu)
> > +{
> > +        struct pcpu_drain *drain;
> > +
> > +        mutex_lock(&pcpu_drain_mutex);
> > +        drain = per_cpu_ptr(&pcpu_drain, cpu);
> > +        kthread_cancel_work_sync(&drain->work);
>
> I do not see how CPU down was handled in the original code.
>
> Note that workqueues call unbind_workers() when a CPU is going down.
> The pending work items might be processed on another CPU. From this
> POV, the new code looks safer.
>
> > +        kthread_destroy_worker(drain->worker);
> > +        drain->worker = NULL;
> > +        mutex_unlock(&pcpu_drain_mutex);
> > +
> > +        return 0;
> > +}
> > +
> > +static void __init init_drain_workers(void)
> > +{
> > +        unsigned int cpu;
> > +
> > +        for_each_online_cpu(cpu)
> > +                alloc_drain_worker(cpu);
>
> I thought about whether this needs to be called under cpus_read_lock(),
> and I think that the code should be safe as it is. There is this call
> chain:
>
> + kernel_init_freeable()
>   + page_alloc_init_late()
>     + init_drain_workers()
>
> It is called after smp_init() but before the init process is executed.
> I guess that nobody could trigger CPU hotplug at this state, so there
> is no need to synchronize against it.

Should I add a comment here describing why we don't need
cpus_read_lock here (due to the init process not being active at this
time)?

> > +
> > +        if (cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
> > +                                        "page_alloc/drain:online",
> > +                                        alloc_drain_worker,
> > +                                        free_drain_worker)) {
> > +                pr_err("page_alloc_drain: Failed to allocate a hotplug state\n");
>
> I am not sure if there are any special requirements about the ordering
> vs. other CPU hotplug operations.
>
> Just note that the per-CPU workqueues are started/stopped via
> CPUHP_AP_WORKQUEUE_ONLINE. They become available slightly earlier than
> CPUHP_AP_ONLINE_DYN when the CPU is being enabled.
>
> > +        }
> > +}
> > +
> >  void __init page_alloc_init_late(void)
> >  {
> >          struct zone *zone;
>
> Best Regards,
> Petr
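
To make Petr's ordering note concrete, here is a minimal registration
sketch under the assumption he describes: CPUHP_AP_ONLINE_DYN callbacks
run after CPUHP_AP_WORKQUEUE_ONLINE on bringup, and in reverse order on
teardown. The callback names are hypothetical; only
cpuhp_setup_state_nocalls() and the state/name used in the patch are
taken from the thread.

  /*
   * Sketch of dynamic CPU-hotplug state registration, with
   * hypothetical callbacks. Because CPUHP_AP_WORKQUEUE_ONLINE comes
   * before CPUHP_AP_ONLINE_DYN, per-CPU workqueues are already usable
   * when a dynamically registered startup callback runs.
   */
  #include <linux/cpuhotplug.h>

  static int drain_cpu_online_sketch(unsigned int cpu)
  {
          /* create and start the per-CPU worker for @cpu */
          return 0;
  }

  static int drain_cpu_offline_sketch(unsigned int cpu)
  {
          /* cancel pending work and destroy the worker for @cpu */
          return 0;
  }

  static int __init drain_hotplug_init_sketch(void)
  {
          int ret;

          ret = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
                                          "page_alloc/drain:online",
                                          drain_cpu_online_sketch,
                                          drain_cpu_offline_sketch);
          /* for DYN states, a non-negative return is the state id */
          return ret < 0 ? ret : 0;
  }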