2023-11-05 12:52:28

by Charan Teja Kalla

Subject: [PATCH V2 0/3] mm: page_alloc: fixes for early oom kills

An early OOM kill can happen on a system when an allocation request
fails without first unreserving the highatomic page blocks and
draining the pcp lists.

The state of the system where this issue was exposed is shown in the oom kill logs:
[ 295.998653] Normal free:7728kB boost:0kB min:804kB low:1004kB
high:1204kB reserved_highatomic:8192KB active_anon:4kB inactive_anon:0kB
active_file:24kB inactive_file:24kB unevictable:1220kB writepending:0kB
present:70732kB managed:49224kB mlocked:0kB bounce:0kB free_pcp:688kB
local_pcp:492kB free_cma:0kB
[ 295.998656] lowmem_reserve[]: 0 32
[ 295.998659] Normal: 508*4kB (UMEH) 241*8kB (UMEH) 143*16kB (UMEH)
33*32kB (UH) 7*64kB (UH) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB
0*4096kB = 7752kB

From the above log:
a) OOM occurred without unreserving the high atomic page blocks.
b) ~16MB of memory is reserved for high atomic reserves, against the
expectation of 1% of managed memory.

These two issues are addressed in the 1st and 2nd patches.

Another excerpt from the oom kill message:
Normal free:760kB boost:0kB min:768kB low:960kB high:1152kB
reserved_highatomic:0KB managed:49152kB free_pcp:460kB

In the above, the pcp lists aren't drained before entering the
oom kill path either. This is addressed in the 3rd patch.

Changes in V2:
o Unreserve the high atomic page blocks from the oom kill path rather
than in should_reclaim_retry().
o Discussed why a lot more than 1% of managed memory can end up
reserved for high atomic reserves.
o https://lore.kernel.org/linux-mm/[email protected]/

Charan Teja Kalla (3):
mm: page_alloc: unreserve highatomic page blocks before oom
mm: page_alloc: correct high atomic reserve calculations
mm: page_alloc: drain pcp lists before oom kill

mm/page_alloc.c | 23 +++++++++++++----------
1 file changed, 13 insertions(+), 10 deletions(-)

--
2.7.4