Date: Wed, 22 Nov 2023 08:23:51 -0300
From: Marcelo Tosatti
To: Michal Hocko
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Vlastimil Babka,
    Andrew Morton, David Hildenbrand, Peter Xu
Subject: Re: [patch 0/2] mm: too_many_isolated can stall due to out of sync VM counters
References: <20231113233420.446465795@redhat.com>
On Tue, Nov 14, 2023 at 01:46:41PM +0100, Michal Hocko wrote:
> On Tue 14-11-23 09:26:53, Marcelo Tosatti wrote:
> > Hi Michal,
> >
> > On Tue, Nov 14, 2023 at 09:20:09AM +0100, Michal Hocko wrote:
> > > On Mon 13-11-23 20:34:20, Marcelo Tosatti wrote:
> > > > A customer reported seeing processes hung at too_many_isolated,
> > > > while analysis indicated that the problem occurred due to out
> > > > of sync per-CPU stats (see below).
> > > >
> > > > Fix is to use node_page_state_snapshot to avoid the use of
> > > > stale values.
> > > >
> > > > 2136 static unsigned long
> > > > 2137 shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
> > > > 2138                      struct scan_control *sc, enum lru_list lru)
> > > > 2139 {
> > > >    :
> > > > 2145         bool file = is_file_lru(lru);
> > > >    :
> > > > 2147         struct pglist_data *pgdat = lruvec_pgdat(lruvec);
> > > >    :
> > > > 2150         while (unlikely(too_many_isolated(pgdat, file, sc))) {
> > > > 2151                 if (stalled)
> > > > 2152                         return 0;
> > > > 2153
> > > > 2154                 /* wait a bit for the reclaimer. */
> > > > 2155                 msleep(100);  <--- some processes were sleeping here, with pending SIGKILL.
> > > > 2156                 stalled = true;
> > > > 2157
> > > > 2158                 /* We are about to die and free our memory. Return now. */
> > > > 2159                 if (fatal_signal_pending(current))
> > > > 2160                         return SWAP_CLUSTER_MAX;
> > > > 2161         }
> > > >
> > > > msleep() must be called only when there are too many isolated pages:
> > >
> > > What do you mean here?
> >
> > That msleep() must not be called when
> >
> >     isolated > inactive
> >
> > is false.
>
> Well, but the code is structured in a way that this is simply true.
> too_many_isolated might be a false positive because it is a very loose
> interface and the number of isolated pages can fluctuate depending on
> the number of direct reclaimers.
> > > > 2019 static int too_many_isolated(struct pglist_data *pgdat, int file,
> > > > 2020                              struct scan_control *sc)
> > > > 2021 {
> > > >    :
> > > > 2030         if (file) {
> > > > 2031                 inactive = node_page_state(pgdat, NR_INACTIVE_FILE);
> > > > 2032                 isolated = node_page_state(pgdat, NR_ISOLATED_FILE);
> > > > 2033         } else {
> > > >    :
> > > > 2046         return isolated > inactive;
> > > >
> > > > The return value was true since:
> > > >
> > > > crash> p ((struct pglist_data *) 0xffff00817fffe580)->vm_stat[NR_INACTIVE_FILE]
> > > > $8 = {
> > > >   counter = 1
> > > > }
> > > > crash> p ((struct pglist_data *) 0xffff00817fffe580)->vm_stat[NR_ISOLATED_FILE]
> > > > $9 = {
> > > >   counter = 2
> > > > }
> > > >
> > > > while per_cpu stats had:
> > > >
> > > > crash> p ((struct pglist_data *) 0xffff00817fffe580)->per_cpu_nodestats
> > > > $85 = (struct per_cpu_nodestat *) 0xffff8000118832e0
> > > > crash> p/x 0xffff8000118832e0 + __per_cpu_offset[42]
> > > > $86 = 0xffff00917fcc32e0
> > > > crash> p ((struct per_cpu_nodestat *) 0xffff00917fcc32e0)->vm_node_stat_diff[NR_ISOLATED_FILE]
> > > > $87 = -1 '\377'
> > > >
> > > > crash> p/x 0xffff8000118832e0 + __per_cpu_offset[44]
> > > > $89 = 0xffff00917fe032e0
> > > > crash> p ((struct per_cpu_nodestat *) 0xffff00917fe032e0)->vm_node_stat_diff[NR_ISOLATED_FILE]
> > > > $91 = -1 '\377'
> > >
> > > This doesn't really tell much. How much out of sync they really are
> > > cumulatively over all cpus?
> >
> > This is the cumulative value over all CPUs (offsets for other CPUs
> > have been omitted since they are zero).
>
> OK, so that means NR_ISOLATED_FILE is 0 while NR_INACTIVE_FILE is 1,
> correct? If that is the case then the value is indeed outdated, but it
> also means that NR_INACTIVE_FILE is so small that all but 1 (resp. 2,
> as kswapd is never throttled) reclaimers will be stalled anyway. So does
> the exact snapshot really help? Do you have any means to reproduce this
> behavior and see that the patch actually changed the behavior?
> > [...]
> > > With a very low NR_FREE_PAGES and many contending allocations the
> > > system could easily be stuck in reclaim. What are the other reclaim
> > > characteristics?
> >
> > I can ask. What information in particular do you want to know?
>
> When I am dealing with issues like this I rely heavily on the
> /proc/vmstat counters, and on the pgscan and pgsteal counters, to see
> whether there is any progress over time.
>
> > > Is the direct reclaim successful?
> >
> > Processes are stuck in too_many_isolated (unnecessarily). What do you
> > mean when you ask "Is the direct reclaim successful", precisely?
>
> With such a small LRU list it is quite likely that many processes will
> be competing over the last pages on the list while the rest will be
> throttled because there is nothing to reclaim. It is quite possible
> that all reclaimers will be waiting for a single reclaimer (either
> kswapd or another direct reclaimer). I would like to understand whether
> the system is stuck in an unproductive state where everybody just waits
> until the counters are synced, or whether everything just progresses
> very slowly because of the small LRU.
> --
> Michal Hocko
> SUSE Labs

Michal, I think this provides the data you are looking for:

The situation was one of invoking memory-consuming user programs in
parallel, expecting that the system would kick the OOM killer at the
end. Nodes 0-3 are small, containing system data and almost all files.
Nodes 4-7 are large, prepared to contain user data only. The issue
described above was observed on nodes 4-7, which had very little memory
for files.

Nodes 4-7 have more CPUs than nodes 0-3, and only the CPUs on nodes 4-7
are configured to be nohz_full. So we often found unflushed per-CPU
vmstat counters on the CPUs of nodes 4-7.