Greeting,
FYI, we noticed the following commit (built with gcc-11):
commit: 4051a81774d6d8e28192742c26999d6f29bc0e68 ("locking/lockdep: Use sched_clock() for random numbers")
https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git locking/urgent
in testcase: boot
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 16G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <[email protected]>
[ 17.451787][ T1] rtrs_server L2256: Loading module rtrs_server, proto 2.0: (max_chunk_size: 131072 (pure IO 126976, headers 4096) , sess_queue_depth: 512, always_invalidate: 1)
[ 17.470894][ T1] swapper: page allocation failure: order:5, mode:0xcc0(GFP_KERNEL), nodemask=(null)
[ 17.470905][ T1] CPU: 0 PID: 1 Comm: swapper Not tainted 5.19.0-rc2-00001-g4051a81774d6 #1
[ 17.470910][ T1] Call Trace:
[ 17.470913][ T1] dump_stack_lvl+0x2a/0x49
[ 17.470921][ T1] dump_stack+0xd/0x10
[ 17.470924][ T1] warn_alloc.cold+0x53/0xad
[ 17.470932][ T1] __alloc_pages_slowpath+0x781/0x8b0
[ 17.470941][ T1] __alloc_pages+0x276/0x290
[ 17.470945][ T1] mempool_alloc_pages+0xc/0x10
[ 17.470951][ T1] mempool_init_node+0x99/0xe0
[ 17.470954][ T1] ? mempool_free_pages+0x10/0x10
[ 17.470958][ T1] ? mempool_free_pages+0x10/0x10
[ 17.470961][ T1] mempool_create+0x40/0x70
[ 17.470964][ T1] ? mempool_free_slab+0x10/0x10
[ 17.470968][ T1] ? mempool_free_slab+0x10/0x10
[ 17.470971][ T1] rtrs_server_init+0xec/0x16b
[ 17.470977][ T1] ? rtrs_client_init+0x6e/0x6e
[ 17.470979][ T1] do_one_initcall+0x4c/0x220
[ 17.470983][ T1] ? parse_args+0xe0/0x260
[ 17.470988][ T1] do_initcalls+0xf4/0x116
[ 17.470994][ T1] kernel_init_freeable+0xbd/0xec
[ 17.470998][ T1] ? rest_init+0xa0/0xa0
[ 17.471002][ T1] kernel_init+0x12/0xf0
[ 17.471004][ T1] ret_from_fork+0x1c/0x28
[ 17.471008][ T1] Mem-Info:
[ 17.471009][ T1] active_anon:0 inactive_anon:0 isolated_anon:0
[ 17.471009][ T1] active_file:0 inactive_file:0 isolated_file:0
[ 17.471009][ T1] unevictable:87718 dirty:0 writeback:0
[ 17.471009][ T1] slab_reclaimable:7758 slab_unreclaimable:2398
[ 17.471009][ T1] mapped:0 shmem:0 pagetables:1 bounce:0
[ 17.471009][ T1] kernel_misc_reclaimable:0
[ 17.471009][ T1] free:511515 free_pcp:299 free_cma:0
[ 17.471016][ T1] Node 0 active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:350872kB isolated(anon):0kB isolated(file):0kB mapped:0kB dirty:0kB writeback:0kB shmem:0kB writeback_tmp:0kB kernel_stack:304kB pagetables:4kB all_unreclaimable? no
[ 17.471022][ T1] DMA free:2304kB boost:0kB min:80kB low:100kB high:120kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 17.471029][ T1] lowmem_reserve[]: 0 544 2867 2867
[ 17.471034][ T1] Normal free:3040kB boost:0kB min:2940kB low:3672kB high:4404kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:16348kB writepending:0kB present:749560kB managed:671404kB mlocked:0kB bounce:0kB free_pcp:900kB local_pcp:900kB free_cma:0kB
[ 17.471041][ T1] lowmem_reserve[]: 0 0 18591 18591
[ 17.471045][ T1] HighMem free:2040716kB boost:0kB min:512kB low:3652kB high:6792kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:334524kB writepending:0kB present:2379656kB managed:2379656kB mlocked:0kB bounce:0kB free_pcp:296kB local_pcp:296kB free_cma:0kB
[ 17.471052][ T1] lowmem_reserve[]: 0 0 0 0
[ 17.471056][ T1] DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 1*256kB (U) 0*512kB 0*1024kB 1*2048kB (U) 0*4096kB = 2304kB
[ 17.471073][ T1] Normal: 4*4kB (UM) 2*8kB (UM) 2*16kB (U) 3*32kB (UM) 3*64kB (UM) 1*128kB (M) 2*256kB (M) 0*512kB 2*1024kB (M) 0*2048kB 0*4096kB = 3040kB
[ 17.471093][ T1] HighMem: 1*4kB (M) 1*8kB (U) 0*16kB 0*32kB 0*64kB 1*128kB (M) 1*256kB (U) 1*512kB (M) 2*1024kB (UM) 1*2048kB (M) 497*4096kB (M) = 2040716kB
[ 17.471114][ T1] 87718 total pagecache pages
[ 17.471116][ T1] 786302 pages RAM
[ 17.471116][ T1] 594914 pages HighMem/MovableOnly
[ 17.471117][ T1] 19697 pages reserved
[ 17.471118][ T1] 0 pages hwpoisoned
To reproduce:
# build kernel
cd linux
cp config-5.19.0-rc2-00001-g4051a81774d6 .config
make HOSTCC=gcc-11 CC=gcc-11 ARCH=i386 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-11 CC=gcc-11 ARCH=i386 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp dirs to run from a clean state.
--
0-DAY CI Kernel Test Service
https://01.org/lkp
+ rtrs, infiniband folks.
On 2022-06-20 10:07:27 [+0800], kernel test robot wrote:
> Greeting,
>
> FYI, we noticed the following commit (built with gcc-11):
>
> commit: 4051a81774d6d8e28192742c26999d6f29bc0e68 ("locking/lockdep: Use sched_clock() for random numbers")
> https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git locking/urgent
…
> in testcase: boot
>
> on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 16G
>
…
> [ 17.451787][ T1] rtrs_server L2256: Loading module rtrs_server, proto 2.0: (max_chunk_size: 131072 (pure IO 126976, headers 4096) , sess_queue_depth: 512, always_invalidate: 1)
> [ 17.470894][ T1] swapper: page allocation failure: order:5, mode:0xcc0(GFP_KERNEL), nodemask=(null)
If I read this right, it allocates "512 * 10" chunks of order 5 / 128KiB
of memory (contiguous memory). And this appears to fail.
This is either a lot of memory or something that shouldn't be used on
i386.
Either way, locking/urgent is innocent.
> [ 17.470905][ T1] CPU: 0 PID: 1 Comm: swapper Not tainted 5.19.0-rc2-00001-g4051a81774d6 #1
…
> [ 17.471016][ T1] Node 0 active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:350872kB isolated(anon):0kB isolated(file):0kB mapped:0kB dirty:0kB writeback:0kB shmem:0kB writeback_tmp:0kB kernel_stack:304kB pagetables:4kB all_unreclaimable? no
> [ 17.471022][ T1] DMA free:2304kB boost:0kB min:80kB low:100kB high:120kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
> [ 17.471029][ T1] lowmem_reserve[]: 0 544 2867 2867
> [ 17.471034][ T1] Normal free:3040kB boost:0kB min:2940kB low:3672kB high:4404kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:16348kB writepending:0kB present:749560kB managed:671404kB mlocked:0kB bounce:0kB free_pcp:900kB local_pcp:900kB free_cma:0kB
> [ 17.471041][ T1] lowmem_reserve[]: 0 0 18591 18591
> [ 17.471045][ T1] HighMem free:2040716kB boost:0kB min:512kB low:3652kB high:6792kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:334524kB writepending:0kB present:2379656kB managed:2379656kB mlocked:0kB bounce:0kB free_pcp:296kB local_pcp:296kB free_cma:0kB
> [ 17.471052][ T1] lowmem_reserve[]: 0 0 0 0
> [ 17.471056][ T1] DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 1*256kB (U) 0*512kB 0*1024kB 1*2048kB (U) 0*4096kB = 2304kB
> [ 17.471073][ T1] Normal: 4*4kB (UM) 2*8kB (UM) 2*16kB (U) 3*32kB (UM) 3*64kB (UM) 1*128kB (M) 2*256kB (M) 0*512kB 2*1024kB (M) 0*2048kB 0*4096kB = 3040kB
> [ 17.471093][ T1] HighMem: 1*4kB (M) 1*8kB (U) 0*16kB 0*32kB 0*64kB 1*128kB (M) 1*256kB (U) 1*512kB (M) 2*1024kB (UM) 1*2048kB (M) 497*4096kB (M) = 2040716kB
> [ 17.471114][ T1] 87718 total pagecache pages
> [ 17.471116][ T1] 786302 pages RAM
> [ 17.471116][ T1] 594914 pages HighMem/MovableOnly
> [ 17.471117][ T1] 19697 pages reserved
> [ 17.471118][ T1] 0 pages hwpoisoned
Sebastian
Hi, there
On Mon, Jun 20, 2022 at 11:42 AM Sebastian Andrzej Siewior
<[email protected]> wrote:
>
> + rtrs, infiniband folks.
>
> On 2022-06-20 10:07:27 [+0800], kernel test robot wrote:
> > Greeting,
> >
> > FYI, we noticed the following commit (built with gcc-11):
> >
> > commit: 4051a81774d6d8e28192742c26999d6f29bc0e68 ("locking/lockdep: Use sched_clock() for random numbers")
> > https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git locking/urgent
> …
> > in testcase: boot
> >
> > on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 16G
> >
> …
> > [ 17.451787][ T1] rtrs_server L2256: Loading module rtrs_server, proto 2.0: (max_chunk_size: 131072 (pure IO 126976, headers 4096) , sess_queue_depth: 512, always_invalidate: 1)
> > [ 17.470894][ T1] swapper: page allocation failure: order:5, mode:0xcc0(GFP_KERNEL), nodemask=(null)
>
> If I read this right, it allocates "512 * 10" chunks of order 5 / 128KiB
> of memory (contiguous memory). And this appears to fail.
> This is either a lot of memory or something that shouldn't be used on
> i386.
It allocates 512 * 128 KiB of memory, which is probably too big for
this VM setup.
>
> Either way, locking/urgent is innocent.
Agree.
Thanks!
On 2022-06-21 17:27:22 [+0200], Jinpu Wang wrote:
> Hi, there
Hi,
> > > on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 16G
> > >
> > …
> > > [ 17.451787][ T1] rtrs_server L2256: Loading module rtrs_server, proto 2.0: (max_chunk_size: 131072 (pure IO 126976, headers 4096) , sess_queue_depth: 512, always_invalidate: 1)
> > > [ 17.470894][ T1] swapper: page allocation failure: order:5, mode:0xcc0(GFP_KERNEL), nodemask=(null)
> >
> > If I read this right, it allocates "512 * 10" chunks of order 5 / 128KiB
> > of memory (contiguous memory). And this appears to fail.
> > This is either a lot of memory or something that shouldn't be used on
> > i386.
> It allocates 512 * 128 KiB of memory, which is probably too big for
> this VM setup.
why 512 * 128KiB? It is:
| chunk_pool = mempool_create_page_pool(sess_queue_depth * CHUNK_POOL_SZ,
| get_order(max_chunk_size));
with
| static int __read_mostly max_chunk_size = DEFAULT_MAX_CHUNK_SIZE;
| static int __read_mostly sess_queue_depth = DEFAULT_SESS_QUEUE_DEPTH;
| #define CHUNK_POOL_SZ 10
so isn't it (512 * 10) * 128KiB?
> Thanks!
Sebastian
On Wed, Jun 22, 2022 at 8:43 AM Sebastian Andrzej Siewior
<[email protected]> wrote:
>
> On 2022-06-21 17:27:22 [+0200], Jinpu Wang wrote:
> > Hi, there
> Hi,
>
> > > > on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 16G
> > > >
> > > …
> > > > [ 17.451787][ T1] rtrs_server L2256: Loading module rtrs_server, proto 2.0: (max_chunk_size: 131072 (pure IO 126976, headers 4096) , sess_queue_depth: 512, always_invalidate: 1)
> > > > [ 17.470894][ T1] swapper: page allocation failure: order:5, mode:0xcc0(GFP_KERNEL), nodemask=(null)
> > >
> > > If I read this right, it allocates "512 * 10" chunks of order 5 / 128KiB
> > > of memory (contiguous memory). And this appears to fail.
> > > This is either a lot of memory or something that shouldn't be used on
> > > i386.
> > It allocates 512 * 128 KiB of memory, which is probably too big for
> > this VM setup.
>
> why 512 * 128KiB? It is:
> | chunk_pool = mempool_create_page_pool(sess_queue_depth * CHUNK_POOL_SZ,
> | get_order(max_chunk_size));
> with
> | static int __read_mostly max_chunk_size = DEFAULT_MAX_CHUNK_SIZE;
> | static int __read_mostly sess_queue_depth = DEFAULT_SESS_QUEUE_DEPTH;
> | #define CHUNK_POOL_SZ 10
>
> so isn't it (512 * 10) * 128KiB?
Eh, you're right, I forgot we have the mempool. We discussed removing
it internally in the past; we should do it.
Sorry
>
> > Thanks!
>
> Sebastian