Message-ID: <9b42cb38-8375-fc41-475a-2bd26c60a7b9@joelfernandes.org>
Date: Fri, 21 Jul 2023 15:20:48 -0400
Subject: Re: [BUG] Re: Linux 6.4.4
From: Joel Fernandes
To: paulmck@kernel.org
Cc: linux-kernel@vger.kernel.org, stable@vger.kernel.org, rcu@vger.kernel.org, Greg KH
References: <2023071940-suspect-ominous-4a6a@gregkh> <20230720132714.GA3726096@google.com> <2b8fc10b-785e-48b9-9a38-5c1af81f9578@paulmck-laptop> <8682b08c-347b-5547-60e0-013dcf1f8c93@joelfernandes.org> <32aec6d1-bf25-7b47-8f31-7b6318d5238d@joelfernandes.org>
In-Reply-To: <32aec6d1-bf25-7b47-8f31-7b6318d5238d@joelfernandes.org>
(Trimming the CC list a bit)

On 7/21/23 08:13, Joel Fernandes wrote:
> On 7/20/23 15:47, Paul E. McKenney wrote:
>> On Thu, Jul 20, 2023 at 03:32:35PM -0400, Joel Fernandes wrote:
>>> On 7/20/23 15:04, Paul E. McKenney wrote:
>>>> On Thu, Jul 20, 2023 at 12:31:13PM -0400, Joel Fernandes wrote:
>>>>> Hi Paul,
>>>>>
>>>>> On Thu, Jul 20, 2023 at 11:55 AM Paul E. McKenney wrote:
>>>>>>
>>>>>> On Thu, Jul 20, 2023 at 01:27:14PM +0000, Joel Fernandes wrote:
>>> [...]
>>>>>>>
>>>>>>> So likely RCU boosting is failing:
>>>>>>>
>>>>>>> The full TREE03 splat:
>>>>>>> [   54.243588] ------------[ cut here ]------------
>>>>>>> [   54.244547] rcu-torture: rcu_torture_boost started
>>> [...]
>>>>>>> [   54.300499] RIP: 0010:rcu_torture_stats_print+0x5b2/0x620
>>> [...]
>>>>>>> [ 2169.481783] rcu_torture_writer: rtort_pipe_count: 9
>>>>>>>
>>>>>>> However, if we are to believe the '9', it appears the object made it
>>>>>>> quite some way toward the end of the pipe array, but not to the free
>>>>>>> pool.
>>>>>>
>>>>>> This is from this if/for statement, correct?
>>>>>>
>>>>>> 	stutter_waited = stutter_wait("rcu_torture_writer");
>>>>>> 	if (stutter_waited &&
>>>>>> 	    !atomic_read(&rcu_fwd_cb_nodelay) &&
>>>>>> 	    !cur_ops->slow_gps &&
>>>>>> 	    !torture_must_stop() &&
>>>>>> 	    boot_ended)
>>>>>> 		for (i = 0; i < ARRAY_SIZE(rcu_tortures); i++)
>>>>>> 			if (list_empty(&rcu_tortures[i].rtort_free) &&
>>>>>> 			    rcu_access_pointer(rcu_torture_current) !=
>>>>>> 			    &rcu_tortures[i]) {
>>>>>> 				tracing_off();
>>>>>> 				show_rcu_gp_kthreads();
>>>>>> 				WARN(1, "%s: rtort_pipe_count: %d\n",
>>>>>> 				     __func__, rcu_tortures[i].rtort_pipe_count);
>>>>>> 				rcu_ftrace_dump(DUMP_ALL);
>>>>>> 			}
>>>>>
>>>>> Yes, that's right.
>>>>>
>>>>>> If so, this happens when there was a stutter wait, but RCU grace
>>>>>> periods failed to clear out the backlog during the several seconds that
>>>>>> rcutorture was forced idle.  This might be related to the RCU priority
>>>>>> boosting failure, in which a preempted reader persisted across the
>>>>>> stutter interval.
>>>>>
>>>>> When RCU is operating normally, shouldn't the check
>>>>> "(list_empty(&rcu_tortures[i].rtort_free)" not run until the preempted
>>>>> reader unblocks and exits its RCU read-side critical section?
>>>>
>>>> Yes, but not just "until", but rather "long after".  If RCU is doing
>>>> grace periods correctly, an active reader on a given rcu_tortures[]
>>>> element will prevent .rtort_pipe_count from exceeding the value 2.
>>>
>>> Ah ok, so the rtort_pipe_count being 9 is a sign RCU isn't making
>>> progress, thus keeping the object absent from the free list.
>>
>> Yes, though RCU is -just- -barely- too slow, as one more grace period
>> would have done it.
>>
>>>> The element will not be put on a list until .rtort_pipe_count is equal
>>>> to RCU_TORTURE_PIPE_LEN, which is 10.
>>>>
>>>> This warning usually appears when something is holding up the grace-period
>>>> kthread.  Historically, this has included deadlocks, missed timers,
>>>> and whatever else can prevent the grace-period kthread from running.
>>>
>>> Makes sense.
>>>
>>>>> One thing that confuses me: in the case of
>>>>> "cur_ops->deferred_free(old_rp);", the earlier do-while loop may exit
>>>>> before the async callbacks can finish. So what prevents the
>>>>> "(list_empty(&rcu_tortures[i].rtort_free)" check from happening before
>>>>> grace periods happen? Thanks for any clarification.
>>>>
>>>> We only enter this code if the stutter_wait() actually waited, and by
>>>> default this function will wait about five seconds.  Since the rcutorture
>>>> testing goes idle during this time period (or is supposed to!), if things
>>>> are working properly, knocking off ten grace periods during that time
>>>> should be pretty much a given.
>>>
>>> Sure, makes sense. And this is not Lazy-RCU, so 5 seconds should be plenty
>>> ;). I think I was subconsciously expecting an rcu_barrier() somewhere in
>>> the code before those checks, but that's not needed, as you pointed out
>>> that the stutter should give RCU enough time to make progress.
>>
>> And there might need to be a call_rcu_hurry() in there somewhere,
>> now that you mention it.  Which would pretty much defeat any sort of
>> lazy-RCU-callback testing in rcutorture, but testing of laziness might
>> need to be separate anyway.
>>
>>> So hmm, the count being 9 means that not enough RCU grace periods have
>>> passed for the rcu_torture object in question, thus keeping it always
>>> allocated.
>>> The GP thread not getting CPU can do that indeed, or perhaps
>>> something else stalling RCU, like a preempted reader, lengthy preemption
>>> disabling on a CPU, and so forth.  I'll try to collect a trace when it
>>> happens.
>>
>> Looking forward to seeing what you come up with!
>
> So far I found this. Before the crash, GPs took about 50ms; during the
> crash it took 5 seconds before the warning, which aligns with what you
> mentioned about stutter.
>
> The GP that never completed is at this line:
>
> [ 2816.041082]    <...>-13       13d.... 1237016139us :
> rcu_grace_period: rcu_sched 144681 start
>
> And the fqs loop got a "dti" for CPUs:
> 1
> 2
> 12
>
> And I see cpuqs for:
> 13
> 6
> 10
> 5
> 4
> 11
>
> No idea what happened to 3, 8, 9, 14, 15. Maybe the "dti" for those did
> not show in the trace?
>
> However, I see that CPU 7 did this:
> [ 2816.205277]   <idle>-0         7d.... 1237016284us : sched_switch:
> prev_comm=swapper/7 prev_pid=0 prev_prio=120 prev_state=R ==>
> next_comm=rcu_torture_rea next_pid=149 next_prio=139
>
> and then did this about 3 seconds later:
> [ 2819.394111] rcu_tort-149       7dNs.. 1237025699us :
> rcu_grace_period: rcu_sched 144677 cpuend
> [ 2819.412745] rcu_tort-149       7dNs.. 1237025699us :
> rcu_grace_period: rcu_sched 144681 cpustart
>
> Which means it never switched out from the CPU all this while. Neither
> did it report a cpuqs. Ok, it did notice the new GP started, welcome to
> the party mate ;)
>
> That points to the issue, I think. I wonder if the emergency provisions
> for forcing quiescent states on NOHZ_FULL CPUs kicked in. ISTR we had
> issues with that in the past, where we had to fix the tick path to report
> a QS.
>
> I'll add some more traces, especially around preempted readers and the
> emergency provisions for forcing a QS, and see if I can dig up more
> information.
In another instance, I caught a stack trace of CPU5 when it did not check
in with RCU for 3 seconds or so:

[ 1127.067889]  kmalloc_trace+0x25/0x90
[ 1127.072823]  rcu_torture_fwd_prog+0x3d8/0xa60
[ 1127.078749]  ? __pfx_rcu_torture_fwd_prog+0x10/0x10
[ 1127.085468]  ? kthread+0xcb/0xf0
[ 1127.090040]  ? __pfx_rcu_torture_fwd_prog+0x10/0x10
[ 1127.096626]  kthread+0xcb/0xf0
[ 1127.100803]  ? __pfx_kthread+0x10/0x10
[ 1127.107207]  ret_from_fork+0x2c/0x50
[ 1127.113662]
[ 1127.117818] Kernel panic - not syncing: kernel: panic_on_warn set ...

It appears it was in "rcu_torture_fwd_prog" at the time. It got stuck
there for 2 seconds. Interestingly, I did find that the scheduler tick
was trying to get in touch with the CPU, and it was even running code on
it:

[ 5858.463102] rcu_tort-174 5d.h.. 1131149320us : rcu_utilization: Start scheduler-tick
[ 5858.472903] rcu_tort-174 5d.h.. 1131149320us : rcu_utilization: End scheduler-tick

So I could add some traces there to figure out what it is up to. ISTR we
also send IPIs to these CPUs? So maybe adding some traces there too is in
order.

Also, earlier in dmesg it did this:

[ 1124.825504] rcu_torture_fwd_prog n_max_cbs: 44181
[ 1124.832158] rcu_torture_fwd_prog: Starting forward-progress test 0
[ 1124.840977] rcu_torture_fwd_prog_cr: Starting forward-progress test 0
[ 1126.726621] ------------[ cut here ]------------
[ 1126.733296] rcutorture_oom_notify invoked upon OOM during forward-progress testing.

Before that "cut here", there is a 2-second gap, which tells me it got
stuck there. So this function rcu_torture_fwd_prog() is keeping the CPU
unresponsive from an RCU perspective for some reason? Because I never saw
the "rcu_torture_fwd_prog_cr: Waiting for CBs" message before it panicked.

Collecting traces again.. ;)

thanks,

 - Joel