Date: Fri, 4 Mar 2022 16:35:54 -0800
From: Andrew Morton
To: Marcelo Tosatti
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Minchan Kim, Matthew Wilcox, Mel Gorman, Nicolas Saenz Julienne, Juri Lelli, Thomas Gleixner, Sebastian Andrzej Siewior, "Paul E. McKenney"
Subject: Re: [patch v4] mm: lru_cache_disable: replace work queue synchronization with synchronize_rcu
Message-Id: <20220304163554.8872fe5d5a9d634f7a2884f5@linux-foundation.org>

On Fri, 4 Mar 2022 13:29:31 -0300 Marcelo Tosatti wrote:

> On systems that run FIFO:1 applications that busy loop on isolated
> CPUs, executing tasks on such CPUs at lower priority is undesirable,
> since that will either hang the system or cause a longer interruption
> of the FIFO task, due to execution of a lower-priority task with very
> small scheduler slices.
>
> Commit d479960e44f27e0e52ba31b21740b703c538027c ("mm: disable LRU
> pagevec during the migration temporarily") relies on queueing work
> items on all online CPUs to ensure visibility of lru_disable_count.
>
> However, it is possible to use synchronize_rcu(), which provides the
> same guarantees (see the comment this patch modifies on
> lru_cache_disable()).
>
> Fixes:
>
> ...
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -831,8 +831,7 @@ inline void __lru_add_drain_all(bool force_all_cpus)
> 	for_each_online_cpu(cpu) {
> 		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
>
> -		if (force_all_cpus ||
> -		    pagevec_count(&per_cpu(lru_pvecs.lru_add, cpu)) ||
> +		if (pagevec_count(&per_cpu(lru_pvecs.lru_add, cpu)) ||

Please changelog this alteration?

> 		    data_race(pagevec_count(&per_cpu(lru_rotate.pvec, cpu))) ||
> 		    pagevec_count(&per_cpu(lru_pvecs.lru_deactivate_file, cpu)) ||
> 		    pagevec_count(&per_cpu(lru_pvecs.lru_deactivate, cpu)) ||
> @@ -876,15 +875,21 @@ atomic_t lru_disable_count = ATOMIC_INIT(0);
> void lru_cache_disable(void)
> {
> 	atomic_inc(&lru_disable_count);
> -#ifdef CONFIG_SMP
> 	/*
> -	 * lru_add_drain_all in the force mode will schedule draining on
> -	 * all online CPUs so any calls of lru_cache_disabled wrapped by
> -	 * local_lock or preemption disabled would be ordered by that.
> -	 * The atomic operation doesn't need to have stronger ordering
> -	 * requirements because that is enforced by the scheduling
> -	 * guarantees.
> +	 * Readers of lru_disable_count are protected by either disabling
> +	 * preemption or rcu_read_lock:
> +	 *
> +	 * preempt_disable, local_irq_disable  [bh_lru_lock()]
> +	 * rcu_read_lock                       [rt_spin_lock CONFIG_PREEMPT_RT]
> +	 * preempt_disable                     [local_lock !CONFIG_PREEMPT_RT]
> +	 *
> +	 * Since v5.1 kernel, synchronize_rcu() is guaranteed to wait on
> +	 * preempt_disable() regions of code. So any CPU which sees
> +	 * lru_disable_count = 0 will have exited the critical
> +	 * section when synchronize_rcu() returns.
> 	 */
> +	synchronize_rcu();
> +#ifdef CONFIG_SMP
> 	__lru_add_drain_all(true);
> #else
> 	lru_add_and_bh_lrus_drain();