On Tue, Nov 14, 2023 at 01:46:41PM +0100, Michal Hocko wrote:
> On Tue 14-11-23 09:26:53, Marcelo Tosatti wrote:
> > Hi Michal,
> >
> > On Tue, Nov 14, 2023 at 09:20:09AM +0100, Michal Hocko wrote:
> > > On Mon 13-11-23 20:34:20, Marcelo Tosatti wrote:
> > > > A customer reported seeing processes hung at too_many_isolated,
> > > > while analysis indicated that the problem occurred due to out
> > > > of sync per-CPU stats (see below).
> > > >
> > > > The fix is to use node_page_state_snapshot() to avoid the stale
> > > > values.
> > > >
> > > > 2136 static unsigned long
> > > > 2137 shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
> > > > 2138 struct scan_control *sc, enum lru_list lru)
> > > > 2139 {
> > > > :
> > > > 2145 bool file = is_file_lru(lru);
> > > > :
> > > > 2147 struct pglist_data *pgdat = lruvec_pgdat(lruvec);
> > > > :
> > > > 2150 while (unlikely(too_many_isolated(pgdat, file, sc))) {
> > > > 2151 if (stalled)
> > > > 2152 return 0;
> > > > 2153
> > > > 2154 /* wait a bit for the reclaimer. */
> > > > 2155 msleep(100); <--- some processes were sleeping here, with pending SIGKILL.
> > > > 2156 stalled = true;
> > > > 2157
> > > > 2158 /* We are about to die and free our memory. Return now. */
> > > > 2159 if (fatal_signal_pending(current))
> > > > 2160 return SWAP_CLUSTER_MAX;
> > > > 2161 }
> > > >
> > > > msleep() must be called only when there are too many isolated pages:
> > >
> > > What do you mean here?
> >
> > That msleep() must not be called when
> >
> > isolated > inactive
> >
> > is false.
>
> Well, but the code is structured in a way that this is simply true.
> too_many_isolated might produce a false positive because it is a very
> loose interface and the number of isolated pages can fluctuate depending
> on the number of direct reclaimers.
>
> > > > 2019 static int too_many_isolated(struct pglist_data *pgdat, int file,
> > > > 2020 struct scan_control *sc)
> > > > 2021 {
> > > > :
> > > > 2030 if (file) {
> > > > 2031 inactive = node_page_state(pgdat, NR_INACTIVE_FILE);
> > > > 2032 isolated = node_page_state(pgdat, NR_ISOLATED_FILE);
> > > > 2033 } else {
> > > > :
> > > > 2046 return isolated > inactive;
> > > >
> > > > The return value was true since:
> > > >
> > > > crash> p ((struct pglist_data *) 0xffff00817fffe580)->vm_stat[NR_INACTIVE_FILE]
> > > > $8 = {
> > > > counter = 1
> > > > }
> > > > crash> p ((struct pglist_data *) 0xffff00817fffe580)->vm_stat[NR_ISOLATED_FILE]
> > > > $9 = {
> > > > counter = 2
> > > > }
> > > >
> > > > while per_cpu stats had:
> > > >
> > > > crash> p ((struct pglist_data *) 0xffff00817fffe580)->per_cpu_nodestats
> > > > $85 = (struct per_cpu_nodestat *) 0xffff8000118832e0
> > > > crash> p/x 0xffff8000118832e0 + __per_cpu_offset[42]
> > > > $86 = 0xffff00917fcc32e0
> > > > crash> p ((struct per_cpu_nodestat *) 0xffff00917fcc32e0)->vm_node_stat_diff[NR_ISOLATED_FILE]
> > > > $87 = -1 '\377'
> > > >
> > > > crash> p/x 0xffff8000118832e0 + __per_cpu_offset[44]
> > > > $89 = 0xffff00917fe032e0
> > > > crash> p ((struct per_cpu_nodestat *) 0xffff00917fe032e0)->vm_node_stat_diff[NR_ISOLATED_FILE]
> > > > $91 = -1 '\377'
> > >
> > > This doesn't really tell us much. How far out of sync are they,
> > > cumulatively, over all CPUs?
> >
> > This is the cumulative value over all CPUs (the per-CPU diffs for the
> > other CPUs have been omitted since they are zero).
>
> OK, so that means NR_ISOLATED_FILE is really 0 while NR_INACTIVE_FILE is
> 1, correct? If that is the case then the value is indeed outdated, but
> it also means that NR_INACTIVE_FILE is so small that all but 1 (resp. 2,
> as kswapd is never throttled) reclaimers will be stalled anyway. So does
> the exact snapshot really help? Do you have any means to reproduce this
> behavior and to see that the patch actually changes it?
>
> [...]
>
> > > With a very low NR_FREE_PAGES and many contending allocations the
> > > system could easily be stuck in reclaim. What are the other reclaim
> > > characteristics?
> >
> > I can ask. What information in particular do you want to know?
>
> When I am dealing with issues like this I rely heavily on the
> /proc/vmstat counters, in particular the pgscan and pgsteal counters,
> to see whether there is any progress over time.
>
> > > Is the direct reclaim successful?
> >
> > Processes are stuck in too_many_isolated (unnecessarily). What do you
> > mean, precisely, when you ask "Is the direct reclaim successful"?
>
> With such a small LRU list it is quite likely that many processes will
> be competing over the last pages on the list while the rest will be
> throttled because there is nothing to reclaim. It is quite possible
> that all reclaimers will be waiting for a single reclaimer (either
> kswapd or another direct reclaimer). I would like to understand whether
> the system is stuck in an unproductive state where everybody just waits
> until the counter is synced, or whether everything just progresses very
> slowly because of the small LRU.
> --
> Michal Hocko
> SUSE Labs
Michal,

I think this provides the data you are looking for:

It seems that the situation was many memory-consuming user programs
being invoked in parallel, in the expectation that the system would
eventually kick the OOM killer.

Nodes 0-3 are small and contain the system data and almost all of the
files. Nodes 4-7 are large and are set aside for user data only. The
issue described above was observed on nodes 4-7, which had very little
memory for files.

Nodes 4-7 also have more CPUs than nodes 0-3, and only the CPUs on
nodes 4-7 are configured to be nohz_full. So we often found unflushed
per-CPU vmstat counters on the CPUs of nodes 4-7.
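
The patch itself is not quoted anywhere in the thread; based on the
description, the change to too_many_isolated() in mm/vmscan.c is
presumably along these lines. This is a sketch, not the actual
submission:

	if (file) {
		/*
		 * node_page_state() reads only the global vm_stat counter,
		 * which can lag while deltas sit unflushed in the per-CPU
		 * vm_node_stat_diff arrays, e.g. on nohz_full CPUs.
		 * node_page_state_snapshot() folds those deltas back in.
		 */
		inactive = node_page_state_snapshot(pgdat, NR_INACTIVE_FILE);
		isolated = node_page_state_snapshot(pgdat, NR_ISOLATED_FILE);
	} else {
		inactive = node_page_state_snapshot(pgdat, NR_INACTIVE_ANON);
		isolated = node_page_state_snapshot(pgdat, NR_ISOLATED_ANON);
	}

The trade-off is that the snapshot variant loops over all online CPUs
on every call, which is not free on large machines.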
On Wed, Nov 22, 2023 at 08:23:51AM -0300, Marcelo Tosatti wrote:
[...]
Michal,

Let me know if you have any objections to the patch, thanks.
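
For reference, node_page_state_snapshot() differs from node_page_state()
only in that it folds the unflushed per-CPU deltas into the global
counter. Paraphrasing the helper from include/linux/vmstat.h (details
vary slightly across kernel versions):

	static inline unsigned long node_page_state_snapshot(pg_data_t *pgdat,
					enum node_stat_item item)
	{
		long x = atomic_long_read(&pgdat->vm_stat[item]);
	#ifdef CONFIG_SMP
		int cpu;

		/* Fold in the deltas still sitting in the per-CPU arrays. */
		for_each_online_cpu(cpu)
			x += per_cpu_ptr(pgdat->per_cpu_nodestats,
					 cpu)->vm_node_stat_diff[item];

		if (x < 0)
			x = 0;
	#endif
		return x;
	}

Applied to the crash dump above, this yields 2 + (-1) + (-1) = 0 for
NR_ISOLATED_FILE, so "isolated > inactive" becomes 0 > 1, which is
false, and the msleep(100) throttle is skipped.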
On Wed 22-11-23 08:26:02, Marcelo Tosatti wrote:
[...]
> Michal,
>
> Let me know if you have any objections to the patch, thanks.
I do not think you have explained how the patch helps, nor have you
shown that it has fixed the described problem. You seem to be very
focused on the specific snapshot, which I do agree shows that the data
is out of sync and that throttling is happening when, strictly
speaking, it should not. But (let me repeat) those discrepancies are so
small that it is very likely that concurrent reclaimers will be stalled
anyway (it only takes one of them to isolate those pages). Maybe this
leads to an earlier OOM killer invocation, as unthrottled reclaimers
will be able to conclude there is no progress rather than being
throttled in direct reclaim.

That being said, I am not saying the patch is incorrect. Nevertheless,
I do not think we want to merge it without a better understanding of
what is going on in your specific case and of what runtime difference
the patch actually makes there. From your previous email it seems the
actual case is mostly a memory stress test that manages to fill up
memory and push out almost all of the file LRU while the anon LRU is
not reclaimable for some reason. That shouldn't be terribly hard to
reproduce.
--
Michal Hocko
SUSE Labs