Subject: Re: INFO: rcu_sched detected stalls on CPUs/tasks with `kswapd` and `mem_cgroup_shrink_node`
From: Donald Buczek
To: Michal Hocko
Cc: Paul Menzel, dvteam@molgen.mpg.de, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Josh Triplett
Date: Wed, 30 Nov 2016 11:28:34 +0100

On 11/28/16 13:26, Paul Menzel wrote:
> [...]
>
> On 11/28/16 12:04, Michal Hocko wrote:
>> [...]
>>
>> OK, so one of the stalls is reported at
>>
>> [118077.988410] INFO: rcu_sched detected stalls on CPUs/tasks:
>> [118077.988416]  1-...: (181 ticks this GP) idle=6d5/140000000000000/0 softirq=46417663/46417663 fqs=10691
>> [118077.988417]  (detected by 4, t=60002 jiffies, g=11845915, c=11845914, q=46475)
>> [118077.988421] Task dump for CPU 1:
>> [118077.988421] kswapd1         R  running task        0    86      2 0x00000008
>> [118077.988424]  ffff88080ad87c58 ffff88080ad87c58 ffff88080ad87cf8 ffff88100c1e5200
>> [118077.988426]  0000000000000003 0000000000000000 ffff88080ad87e60 ffff88080ad87d90
>> [118077.988428]  ffffffff811345f5 ffff88080ad87da0 ffff88100c1e5200 ffff88080ad87dd0
>> [118077.988430] Call Trace:
>> [118077.988436]  [] ? shrink_node_memcg+0x605/0x870
>> [118077.988438]  [] ? shrink_node+0xbf/0x1c0
>> [118077.988440]  [] ? kswapd+0x342/0x6b0
>>
>> the interesting part of the traces would be around the same time:
>>
>> clusterd-989 [009] .... 118023.654491: mm_vmscan_direct_reclaim_end: nr_reclaimed=193
>> kswapd1-86   [001] dN.. 118023.987475: mm_vmscan_lru_isolate: isolate_mode=0 classzone=0 order=0 nr_requested=32 nr_scanned=4239830 nr_taken=0 file=1
>> kswapd1-86   [001] dN.. 118024.320968: mm_vmscan_lru_isolate: isolate_mode=0 classzone=0 order=0 nr_requested=32 nr_scanned=4239844 nr_taken=0 file=1
>> kswapd1-86   [001] dN.. 118024.654375: mm_vmscan_lru_isolate: isolate_mode=0 classzone=0 order=0 nr_requested=32 nr_scanned=4239858 nr_taken=0 file=1
>> kswapd1-86   [001] dN.. 118024.987036: mm_vmscan_lru_isolate: isolate_mode=0 classzone=0 order=0 nr_requested=32 nr_scanned=4239872 nr_taken=0 file=1
>> kswapd1-86   [001] dN.. 118025.319651: mm_vmscan_lru_isolate: isolate_mode=0 classzone=0 order=0 nr_requested=32 nr_scanned=4239886 nr_taken=0 file=1
>> kswapd1-86   [001] dN.. 118025.652248: mm_vmscan_lru_isolate: isolate_mode=0 classzone=0 order=0 nr_requested=32 nr_scanned=4239900 nr_taken=0 file=1
>> kswapd1-86   [001] dN.. 118025.984870: mm_vmscan_lru_isolate: isolate_mode=0 classzone=0 order=0 nr_requested=32 nr_scanned=4239914 nr_taken=0 file=1
>> [...]
>> kswapd1-86   [001] dN.. 118084.274403: mm_vmscan_lru_isolate: isolate_mode=0 classzone=0 order=0 nr_requested=32 nr_scanned=4241133 nr_taken=0 file=1
>>
>> Note the Need resched flag. The IRQ off part is expected because we are
>> holding the LRU lock, which is IRQ safe.

Hmmm. With the lock held, preemption is disabled. If we stayed in that state
for some time, I'd expect need_resched to be set simply because the time
quantum ran out. But...

The call stack always has

> [] ? shrink_node_memcg+0x605/0x870

which translates to

> (gdb) list *0xffffffff811345f5
> 0xffffffff811345f5 is in shrink_node_memcg (mm/vmscan.c:2065).
> 2060    static unsigned long shrink_list(enum lru_list lru, unsigned long nr_to_scan,
> 2061                                     struct lruvec *lruvec, struct scan_control *sc)
> 2062    {
> 2063            if (is_active_lru(lru)) {
> 2064                    if (inactive_list_is_low(lruvec, is_file_lru(lru), sc))
> 2065                            shrink_active_list(nr_to_scan, lruvec, sc, lru);
> 2066                    return 0;
> 2067            }
> 2068
> 2069            return shrink_inactive_list(nr_to_scan, lruvec, sc, lru);

So we are in shrink_active_list(). I made a small change without keeping the
old vmlinux, so the addresses are now off by 16 bytes, but this can be
verified exactly on another machine:

> buczek@void:/scratch/local/linux-4.8.10-121.x86_64/source$ grep shrink_node_memcg /var/log/messages
> [...]
> void kernel: [508779.136016]  [] ? shrink_node_memcg+0x60a/0x870

> (gdb) disas 0xffffffff8114833a
> [...]
>    0xffffffff81148330 <+1536>:  mov    %r10,0x38(%rsp)
>    0xffffffff81148335 <+1541>:  callq  0xffffffff81147a00
>    0xffffffff8114833a <+1546>:  mov    0x38(%rsp),%r10
>    0xffffffff8114833f <+1551>:  jmpq   0xffffffff81147f80
>    0xffffffff81148344 <+1556>:  mov    %r13,0x78(%r12)

shrink_active_list() takes and releases the spinlock and calls cond_resched(),
which should give other tasks a chance to run. Just as an experiment, I'm
trying

--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1921,7 +1921,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	spin_unlock_irq(&pgdat->lru_lock);
 
 	while (!list_empty(&l_hold)) {
-		cond_resched();
+		cond_resched_rcu_qs();
 		page = lru_to_page(&l_hold);
 		list_del(&page->lru);

and I haven't hit an rcu_sched warning for >21 hours of uptime now. We'll see.
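
(For context on why this might help -- this is my reading of this kernel
generation's include/linux/rcupdate.h, quoted roughly from memory, so the
exact definition may differ in the tree at hand: cond_resched() only does
something when the scheduler has another runnable task, while
cond_resched_rcu_qs() additionally reports a voluntary quiescent state to
RCU even when no context switch happens:

/*
 * Approximate 4.8-era sketch (simplified, from memory -- check the
 * actual tree): if cond_resched() did not reschedule, still tell RCU
 * that this point is a quiescent state, so a pending grace period is
 * not held up by a long-running loop that nothing wants to preempt.
 */
#define cond_resched_rcu_qs() \
do { \
	if (!cond_resched()) \
		rcu_note_voluntary_context_switch(current); \
} while (0)

If that reading is right, it would explain why the swap silences the stall
detector even when nothing else is competing for the CPU.)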
Is preemption disabled for another reason?

Regards

  Donald

>> That is not a problem because
>> the lock is only held for SWAP_CLUSTER_MAX pages at maximum. It is also
>> interesting to see that we have scanned only 1303 pages during that 1
>> minute. That would be dead slow. None of them were good enough for the
>> reclaim, but that doesn't sound like a problem. The trace simply suggests
>> that the reclaim was preempted by something else. Otherwise I cannot
>> imagine such slow scanning.
>>
>> Is it possible that something else is hogging the CPU and the RCU just
>> happens to blame kswapd, which is running in the standard user process
>> context?
>
> From looking at the monitoring graphs, there were always enough CPU
> resources available. The machine has 12x E5-2630 @ 2.30GHz, so that
> shouldn't have been a problem.
>
>
> Kind regards,
>
> Paul Menzel

-- 
Donald Buczek
buczek@molgen.mpg.de
Tel: +49 30 8413 1433