McKenney" To: Qian Cai Cc: Mel Gorman , Andrew Morton , Nicolas Saenz Julienne , Marcelo Tosatti , Vlastimil Babka , Michal Hocko , LKML , Linux-MM , kafai@fb.com, kpsingh@kernel.org Subject: Re: [PATCH 0/6] Drain remote per-cpu directly v3 Message-ID: <20220518171503.GQ1790663@paulmck-ThinkPad-P17-Gen-1> Reply-To: paulmck@kernel.org References: <20220512085043.5234-1-mgorman@techsingularity.net> <20220517233507.GA423@qian> <20220518125152.GQ3441@techsingularity.net> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-Spam-Status: No, score=-2.6 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,MAILING_LIST_MULTI, RDNS_NONE,SPF_HELO_NONE,T_SCC_BODY_TEXT_LINE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Wed, May 18, 2022 at 12:27:22PM -0400, Qian Cai wrote: > On Wed, May 18, 2022 at 01:51:52PM +0100, Mel Gorman wrote: > > On Tue, May 17, 2022 at 07:35:07PM -0400, Qian Cai wrote: > > > On Thu, May 12, 2022 at 09:50:37AM +0100, Mel Gorman wrote: > > > > Changelog since v2 > > > > o More conversions from page->lru to page->[pcp_list|buddy_list] > > > > o Additional test results in changelogs > > > > > > > > Changelog since v1 > > > > o Fix unsafe RT locking scheme > > > > o Use spin_trylock on UP PREEMPT_RT > > > > > > > > This series has the same intent as Nicolas' series "mm/page_alloc: Remote > > > > per-cpu lists drain support" -- avoid interference of a high priority > > > > task due to a workqueue item draining per-cpu page lists. While many > > > > workloads can tolerate a brief interruption, it may be cause a real-time > > > > task runnning on a NOHZ_FULL CPU to miss a deadline and at minimum, > > > > the draining in non-deterministic. > > > > > > > > Currently an IRQ-safe local_lock protects the page allocator per-cpu lists. > > > > The local_lock on its own prevents migration and the IRQ disabling protects > > > > from corruption due to an interrupt arriving while a page allocation is > > > > in progress. The locking is inherently unsafe for remote access unless > > > > the CPU is hot-removed. > > > > > > > > This series adjusts the locking. A spinlock is added to struct > > > > per_cpu_pages to protect the list contents while local_lock_irq continues > > > > to prevent migration and IRQ reentry. This allows a remote CPU to safely > > > > drain a remote per-cpu list. > > > > > > > > This series is a partial series. Follow-on work should allow the > > > > local_irq_save to be converted to a local_irq to avoid IRQs being > > > > disabled/enabled in most cases. Consequently, there are some TODO comments > > > > highlighting the places that would change if local_irq was used. However, > > > > there are enough corner cases that it deserves a series on its own > > > > separated by one kernel release and the priority right now is to avoid > > > > interference of high priority tasks. > > > > > > Reverting the whole series fixed an issue that offlining a memory > > > section blocking for hours on today's linux-next tree. 
> > >
> > >  __wait_rcu_gp
> > >  synchronize_rcu at kernel/rcu/tree.c:3915
> > >  lru_cache_disable at mm/swap.c:886
> > >  __alloc_contig_migrate_range at mm/page_alloc.c:9078
> > >  isolate_single_pageblock at mm/page_isolation.c:405
> > >  start_isolate_page_range
> > >  offline_pages
> > >  memory_subsys_offline
> > >  device_offline
> > >  online_store
> > >  dev_attr_store
> > >  sysfs_kf_write
> > >  kernfs_fop_write_iter
> > >  new_sync_write
> > >  vfs_write
> > >  ksys_write
> > >  __arm64_sys_write
> > >  invoke_syscall
> > >  el0_svc_common.constprop.0
> > >  do_el0_svc
> > >  el0_svc
> > >  el0t_64_sync_handler
> > >  el0t_64_sync
> > >
> > > For full disclosure, I have also reverted commit 0d523026abd4
> > > ("mm/page_alloc: fix tracepoint mm_page_alloc_zone_locked()") so that
> > > the series can be reverted cleanly. But I can't see how commit
> > > 0d523026abd4 could cause this issue at all.
> >
> > This is halting in __lru_add_drain_all where it calls synchronize_rcu
> > before the drain even happens. It's also an LRU drain and not PCP,
> > which is what the series affects, and the allocator doesn't use RCU.
> > In a KVM machine, I can do
> >
> > $ for BANK in `(for i in {1..20}; do echo $((RANDOM%416)); done) | sort -n | uniq`; do BEFORE=`cat /sys/devices/system/memory/memory$BANK/online`; echo 0 > /sys/devices/system/memory/memory$BANK/online; AFTER=`cat /sys/devices/system/memory/memory$BANK/online`; printf "%4d %d -> %d\n" $BANK $BEFORE $AFTER; done
> >    3 1 -> 0
> >   57 1 -> 0
> >   74 1 -> 0
> >   93 1 -> 0
> >  101 1 -> 0
> >  128 1 -> 0
> >  133 1 -> 0
> >  199 1 -> 0
> >  223 1 -> 0
> >  225 1 -> 0
> >  229 1 -> 0
> >  243 1 -> 0
> >  263 1 -> 0
> >  300 1 -> 0
> >  309 1 -> 0
> >  329 1 -> 0
> >  355 1 -> 0
> >  365 1 -> 0
> >  372 1 -> 0
> >  383 1 -> 0
> >
> > It offlines 20 sections, although after several attempts free -m starts
> > reporting negative used memory, so there is a bug of some description.
> > How are you testing this exactly? Is it every time or intermittent? Are
> > you confident that reverting the series makes the problem go away?
>
> Cc'ing Paul. Either reverting this series or Paul's 3 patches below from
> today's linux-next tree fixed the issue.
>
> ca52639daa5b rcu-tasks: Drive synchronous grace periods from calling task
> 89ad98e93ce8 rcu-tasks: Move synchronize_rcu_tasks_generic() down
> 0d90e7225fb1 rcu-tasks: Split rcu_tasks_one_gp() from rcu_tasks_kthread()
>
> It was reproduced by running the script below on an arm64 server. I can
> reproduce it every time within 5 attempts. I noticed that when it happens,
> we have a few rcu kthreads all stuck at this line,
>
>     rcuwait_wait_event(&rtp->cbs_wait,
>                        (needgpcb = rcu_tasks_need_gpcb(rtp)),
>                        TASK_IDLE);
>
> rcu_tasks_kthread
> rcu_tasks_rude_kthread
> rcu_tasks_trace_kthread

This is the normal state of these kthreads when there is nothing for
them to do.  And unless you are removing tracing trampolines (kprobes,
ftrace, BPF), there should be nothing for them to do.

So does this Python script somehow change the tracing state?  (It does
not look to me like it does, but I could easily be missing something.)

Either way, is there something else waiting for these RCU flavors?
(There should not be.)  Nevertheless, if so, there should be
a synchronize_rcu_tasks(), synchronize_rcu_tasks_rude(), or
synchronize_rcu_tasks_trace() on some other blocked task's stack
somewhere.

Or maybe something sleeps waiting for an RCU Tasks * callback to
be invoked.
In that case (and in the above case, for that matter), at least one of
these pointers would be non-NULL on some CPU:

1.	rcu_tasks__percpu.cblist.head
2.	rcu_tasks_rude__percpu.cblist.head
3.	rcu_tasks_trace__percpu.cblist.head

The ->func field of the pointed-to structure contains a pointer to the
callback function, which will help work out what is going on.  (Most
likely a wakeup being lost or not provided.)

Alternatively, if your system has hundreds of thousands of tasks and
you have attached BPF programs to short-lived socket structures and you
don't yet have the workaround, then you can see hangs.  (I am working
on a longer-term fix.)  In the short term, applying the workaround is
the right thing to do.  (Adding a couple of the BPF guys on CC for
their thoughts.)

Does any of that help?

							Thanx, Paul

> #!/usr/bin/env python3
> # SPDX-License-Identifier: GPL-2.0
>
> import os
> import re
> import subprocess
>
>
> def mem_iter():
>     base_dir = '/sys/devices/system/memory/'
>     for curr_dir in os.listdir(base_dir):
>         if re.match(r'memory\d+', curr_dir):
>             yield base_dir + curr_dir
>
>
> if __name__ == '__main__':
>     print('- Try to remove each memory section and then add it back.')
>     for mem_dir in mem_iter():
>         status = f'{mem_dir}/online'
>         if open(status).read().rstrip() == '1':
>             # This could expectedly fail due to many reasons.
>             section = os.path.basename(mem_dir)
>             print(f'- Try to remove {section}.')
>             proc = subprocess.run([f'echo 0 | sudo tee {status}'], shell=True)
>             if proc.returncode == 0:
>                 print(f'- Try to add {section}.')
>                 subprocess.check_call([f'echo 1 | sudo tee {status}'], shell=True)
>
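
Below is a minimal sketch of how the three callback-list heads listed
above could be checked, assuming the drgn debugger and a kernel built
with debug info; the per-CPU variable and field names are taken from
the list above and may differ across kernel versions.

# Sketch: walk the RCU Tasks per-CPU callback lists named above and
# print any pending callback's ->func, which identifies what is waiting.
# 'prog' is provided by the drgn CLI when this is run as a drgn script.
from drgn.helpers.linux.cpumask import for_each_online_cpu
from drgn.helpers.linux.percpu import per_cpu

for name in ('rcu_tasks__percpu', 'rcu_tasks_rude__percpu',
             'rcu_tasks_trace__percpu'):
    for cpu in for_each_online_cpu(prog):
        head = per_cpu(prog[name], cpu).cblist.head
        if head:
            print(name, cpu, head.func)

Run with "drgn <script>" against the running kernel or a vmcore; in the
healthy case nothing is printed, and any line that does appear names the
callback whose grace period or wakeup appears to be stuck.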