2022-01-14 01:15:13

by Zhang Qiao

Subject: [Question] set_cpus_allowed_ptr() call failed at cpuset_attach()


Hello everyone

I found the following warning log on qemu. I migrated a task from one cpuset cgroup to
another while also performing a cpu hotplug operation, and got the following call trace.

This may lead to an inconsistency between the task's affinity and cpuset.cpus of the
destination cpuset, even though the task is successfully migrated to the destination cpuset cgroup.

Can we use cpus_read_lock()/cpus_read_unlock() to guarantee that set_cpus_allowed_ptr()
doesn't fail, as follows:

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index d0e163a02099..2535d23d2c51 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -2265,6 +2265,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
 	guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
 
 	cgroup_taskset_for_each(task, css, tset) {
+		cpus_read_lock();
 		if (cs != &top_cpuset)
 			guarantee_online_cpus(task, cpus_attach);
 		else
@@ -2274,6 +2275,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
 		 * fail.  TODO: have a better way to handle failure here
 		 */
 		WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
+		cpus_read_unlock();


Is there a better solution?

Thanks

log:
[ 43.853794] ------------[ cut here ]------------
[ 43.853798] WARNING: CPU: 7 PID: 463 at ../kernel/cgroup/cpuset.c:2279 cpuset_attach+0xee/0x1f0
[ 43.853806] Modules linked in:
[ 43.853807] CPU: 7 PID: 463 Comm: bash Not tainted 5.16.0-rc4+ #10
[ 43.853810] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.13.0-48-gd9c812dda519-prebuilt.qemu.org 04/01/2014
[ 43.853811] RIP: 0010:cpuset_attach+0xee/0x1f0
[ 43.853814] Code: ff ff 48 85 c0 48 89 c3 74 24 48 81 fd 40 42 54 82 75 96 80 bb 38 07 00 00 6f 48 8b 05 93 b3 55 01 48 89 05 bc 05 bb 01 75 97 <0f> 0b eb b3 48 8b 85 e8 00 00 00 48 85
[ 43.853816] RSP: 0018:ffffc90000623c30 EFLAGS: 00010246
[ 43.853818] RAX: 0000000000000000 RBX: ffff888101f39c80 RCX: 0000000000000001
[ 43.853819] RDX: 0000000000007fff RSI: ffffffff82cd5708 RDI: ffff888101f39c80
[ 43.853821] RBP: ffff8881001afe00 R08: 0000000000000000 R09: ffffc90000623d00
[ 43.853822] R10: ffffc900000a3de8 R11: 0000000000000001 R12: ffffc90000623cf0
[ 43.853823] R13: ffffffff82cd56d0 R14: ffffffff82544240 R15: 0000000000000001
[ 43.853824] FS: 00007f012414d740(0000) GS:ffff8882b5bc0000(0000) knlGS:0000000000000000
[ 43.853828] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 43.853829] CR2: 000055cfdb27de28 CR3: 00000001020cc000 CR4: 00000000000006e0
[ 43.853830] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 43.853831] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 43.853832] Call Trace:
[ 43.853846] <TASK>
[ 43.853848] cgroup_migrate_execute+0x319/0x410
[ 43.853853] cgroup_attach_task+0x159/0x200
[ 43.853857] ? __cgroup1_procs_write.constprop.21+0x10d/0x170
[ 43.853858] __cgroup1_procs_write.constprop.21+0x10d/0x170
[ 43.853860] cgroup_file_write+0x65/0x160
[ 43.853863] kernfs_fop_write_iter+0x12a/0x1a0
[ 43.853870] new_sync_write+0x11d/0x1b0
[ 43.853877] vfs_write+0x232/0x290
[ 43.853880] ksys_write+0x9c/0xd0
[ 43.853882] ? fpregs_assert_state_consistent+0x19/0x40
[ 43.853886] do_syscall_64+0x3a/0x80
[ 43.853896] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 43.853902] RIP: 0033:0x7f012381f224
[ 43.853904] Code: 89 02 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 8d 05 c1 07 2e 00 8b 00 85 c0 75 13 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 54 f3 c3 66 90 45
[ 43.853906] RSP: 002b:00007ffd3f411f28 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[ 43.853908] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007f012381f224
[ 43.853909] RDX: 0000000000000004 RSI: 000055cfdb297a70 RDI: 0000000000000001
[ 43.853910] RBP: 000055cfdb297a70 R08: 000000000000000a R09: 0000000000000003
[ 43.853911] R10: 000000000000000a R11: 0000000000000246 R12: 00007f0123afb760
[ 43.853913] R13: 0000000000000004 R14: 00007f0123af72a0 R15: 00007f0123af6760
[ 43.853914] </TASK>
[ 43.853915] ---[ end trace 8292bcee7ea90403 ]---


2022-01-14 22:48:41

by Tejun Heo

Subject: Re: [Question] set_cpus_allowed_ptr() call failed at cpuset_attach()

(cc'ing Waiman and Michal and quoting whole body)

Seems sane to me but let's hear what Waiman and Michal think.

On Fri, Jan 14, 2022 at 09:15:06AM +0800, Zhang Qiao wrote:
>
> Hello everyone
>
> I found the following warning log on qemu. I migrated a task from one cpuset cgroup to
> another while also performing a cpu hotplug operation, and got the following call trace.
>
> This may lead to an inconsistency between the task's affinity and cpuset.cpus of the
> destination cpuset, even though the task is successfully migrated to the destination cpuset cgroup.
>
> Can we use cpus_read_lock()/cpus_read_unlock() to guarantee that set_cpus_allowed_ptr()
> doesn't fail, as follows:
>
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index d0e163a02099..2535d23d2c51 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -2265,6 +2265,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
>  	guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
> 
>  	cgroup_taskset_for_each(task, css, tset) {
> +		cpus_read_lock();
>  		if (cs != &top_cpuset)
>  			guarantee_online_cpus(task, cpus_attach);
>  		else
> @@ -2274,6 +2275,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
>  		 * fail.  TODO: have a better way to handle failure here
>  		 */
>  		WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
> +		cpus_read_unlock();
>
>
> Is there a better solution?
>
> Thanks
>
> log:
> [ 43.853794] ------------[ cut here ]------------
> [ 43.853798] WARNING: CPU: 7 PID: 463 at ../kernel/cgroup/cpuset.c:2279 cpuset_attach+0xee/0x1f0
> [ 43.853806] Modules linked in:
> [ 43.853807] CPU: 7 PID: 463 Comm: bash Not tainted 5.16.0-rc4+ #10
> [ 43.853810] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.13.0-48-gd9c812dda519-prebuilt.qemu.org 04/01/2014
> [ 43.853811] RIP: 0010:cpuset_attach+0xee/0x1f0
> [ 43.853814] Code: ff ff 48 85 c0 48 89 c3 74 24 48 81 fd 40 42 54 82 75 96 80 bb 38 07 00 00 6f 48 8b 05 93 b3 55 01 48 89 05 bc 05 bb 01 75 97 <0f> 0b eb b3 48 8b 85 e8 00 00 00 48 85
> [ 43.853816] RSP: 0018:ffffc90000623c30 EFLAGS: 00010246
> [ 43.853818] RAX: 0000000000000000 RBX: ffff888101f39c80 RCX: 0000000000000001
> [ 43.853819] RDX: 0000000000007fff RSI: ffffffff82cd5708 RDI: ffff888101f39c80
> [ 43.853821] RBP: ffff8881001afe00 R08: 0000000000000000 R09: ffffc90000623d00
> [ 43.853822] R10: ffffc900000a3de8 R11: 0000000000000001 R12: ffffc90000623cf0
> [ 43.853823] R13: ffffffff82cd56d0 R14: ffffffff82544240 R15: 0000000000000001
> [ 43.853824] FS: 00007f012414d740(0000) GS:ffff8882b5bc0000(0000) knlGS:0000000000000000
> [ 43.853828] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 43.853829] CR2: 000055cfdb27de28 CR3: 00000001020cc000 CR4: 00000000000006e0
> [ 43.853830] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> [ 43.853831] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> [ 43.853832] Call Trace:
> [ 43.853846] <TASK>
> [ 43.853848] cgroup_migrate_execute+0x319/0x410
> [ 43.853853] cgroup_attach_task+0x159/0x200
> [ 43.853857] ? __cgroup1_procs_write.constprop.21+0x10d/0x170
> [ 43.853858] __cgroup1_procs_write.constprop.21+0x10d/0x170
> [ 43.853860] cgroup_file_write+0x65/0x160
> [ 43.853863] kernfs_fop_write_iter+0x12a/0x1a0
> [ 43.853870] new_sync_write+0x11d/0x1b0
> [ 43.853877] vfs_write+0x232/0x290
> [ 43.853880] ksys_write+0x9c/0xd0
> [ 43.853882] ? fpregs_assert_state_consistent+0x19/0x40
> [ 43.853886] do_syscall_64+0x3a/0x80
> [ 43.853896] entry_SYSCALL_64_after_hwframe+0x44/0xae
> [ 43.853902] RIP: 0033:0x7f012381f224
> [ 43.853904] Code: 89 02 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 8d 05 c1 07 2e 00 8b 00 85 c0 75 13 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 54 f3 c3 66 90 45
> [ 43.853906] RSP: 002b:00007ffd3f411f28 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
> [ 43.853908] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007f012381f224
> [ 43.853909] RDX: 0000000000000004 RSI: 000055cfdb297a70 RDI: 0000000000000001
> [ 43.853910] RBP: 000055cfdb297a70 R08: 000000000000000a R09: 0000000000000003
> [ 43.853911] R10: 000000000000000a R11: 0000000000000246 R12: 00007f0123afb760
> [ 43.853913] R13: 0000000000000004 R14: 00007f0123af72a0 R15: 00007f0123af6760
> [ 43.853914] </TASK>
> [ 43.853915] ---[ end trace 8292bcee7ea90403 ]---

--
tejun

2022-01-14 23:06:58

by Waiman Long

Subject: Re: [Question] set_cpus_allowed_ptr() call failed at cpuset_attach()

On 1/14/22 11:20, Tejun Heo wrote:
> (cc'ing Waiman and Michal and quoting whole body)
>
> Seems sane to me but let's hear what Waiman and Michal think.
>
> On Fri, Jan 14, 2022 at 09:15:06AM +0800, Zhang Qiao wrote:
>> Hello everyone
>>
>> I found the following warning log on qemu. I migrated a task from one cpuset cgroup to
>> another while also performing a cpu hotplug operation, and got the following call trace.
>>
>> This may lead to an inconsistency between the task's affinity and cpuset.cpus of the
>> destination cpuset, even though the task is successfully migrated to the destination cpuset cgroup.
>>
>> Can we use cpus_read_lock()/cpus_read_unlock() to guarantee that set_cpus_allowed_ptr()
>> doesn't fail, as follows:
>>
>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>> index d0e163a02099..2535d23d2c51 100644
>> --- a/kernel/cgroup/cpuset.c
>> +++ b/kernel/cgroup/cpuset.c
>> @@ -2265,6 +2265,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
>>  	guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
>> 
>>  	cgroup_taskset_for_each(task, css, tset) {
>> +		cpus_read_lock();
>>  		if (cs != &top_cpuset)
>>  			guarantee_online_cpus(task, cpus_attach);
>>  		else
>> @@ -2274,6 +2275,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
>>  		 * fail.  TODO: have a better way to handle failure here
>>  		 */
>>  		WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
>> +		cpus_read_unlock();
>>
>>
>> Is there a better solution?
>>
>> Thanks

The change looks OK to me. However, we may need to run the full set of
regression tests to make sure that lockdep won't complain about a potential
deadlock.

Cheers,
Longman

2022-01-17 11:11:02

by Zhang Qiao

Subject: Re: [Question] set_cpus_allowed_ptr() call failed at cpuset_attach()

hello

On 2022/1/15 4:33, Waiman Long wrote:
> On 1/14/22 11:20, Tejun Heo wrote:
>> (cc'ing Waiman and Michal and quoting whole body)
>>
>> Seems sane to me but let's hear what Waiman and Michal think.
>>
>> On Fri, Jan 14, 2022 at 09:15:06AM +0800, Zhang Qiao wrote:
>>> Hello everyone
>>>
>>>     I found the following warning log on qemu. I migrated a task from one cpuset cgroup to
>>> another while also performing a cpu hotplug operation, and got the following call trace.
>>>
>>>     This may lead to an inconsistency between the task's affinity and cpuset.cpus of the
>>> destination cpuset, even though the task is successfully migrated to the destination cpuset cgroup.
>>>
>>>     Can we use cpus_read_lock()/cpus_read_unlock() to guarantee that set_cpus_allowed_ptr()
>>> doesn't fail, as follows:
>>>
>>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>>> index d0e163a02099..2535d23d2c51 100644
>>> --- a/kernel/cgroup/cpuset.c
>>> +++ b/kernel/cgroup/cpuset.c
>>> @@ -2265,6 +2265,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
>>>          guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
>>>
>>>          cgroup_taskset_for_each(task, css, tset) {
>>> +               cpus_read_lock();
>>>                  if (cs != &top_cpuset)
>>>                          guarantee_online_cpus(task, cpus_attach);
>>>                  else
>>> @@ -2274,6 +2275,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
>>>                   * fail.  TODO: have a better way to handle failure here
>>>                   */
>>>                  WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
>>> +               cpus_read_unlock();
>>>
>>>
>>>     Is there a better solution?
>>>
>>>     Thanks
>
> The change looks OK to me. However, we may need to run the full set of regression tests to make sure that lockdep won't complain about a potential deadlock.
>
I ran the test with lockdep enabled and got the lockdep warning below, so we
should take cpu_hotplug_lock first, then take the cpuset_rwsem lock.
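
Concretely, something like the sketch below is what I mean. This is only a
sketch against the mainline cpuset.c names, with the setup of cs/cpus_attach
elided; the exact placement is an assumption and still needs review:

/*
 * Sketch only, not a tested patch: take cpu_hotplug_lock (via
 * cpus_read_lock()) before cpuset_rwsem and hold it across the whole
 * attach, so cpu_active_mask cannot change between
 * guarantee_online_cpus() and set_cpus_allowed_ptr().
 */
static void cpuset_attach(struct cgroup_taskset *tset)
{
	struct task_struct *task;
	struct cgroup_subsys_state *css;
	struct cpuset *cs;

	/* cs and cpus_attach are set up as in mainline; elided here */

	cpus_read_lock();			/* pin cpu_active_mask */
	percpu_down_write(&cpuset_rwsem);

	cgroup_taskset_for_each(task, css, tset) {
		if (cs != &top_cpuset)
			guarantee_online_cpus(task, cpus_attach);
		else
			cpumask_copy(cpus_attach, task_cpu_possible_mask(task));
		/* can no longer race with a concurrent sched_cpu_deactivate() */
		WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
	}

	percpu_up_write(&cpuset_rwsem);
	cpus_read_unlock();
}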

thanks,
Zhang Qiao

[ 38.420372] ======================================================
[ 38.421339] WARNING: possible circular locking dependency detected
[ 38.422312] 5.16.0-rc4+ #13 Not tainted
[ 38.422920] ------------------------------------------------------
[ 38.423883] bash/594 is trying to acquire lock:
[ 38.424595] ffffffff8286afc0 (cpu_hotplug_lock){++++}-{0:0}, at: cpuset_attach+0xc2/0x1e0
[ 38.425880]
[ 38.425880] but task is already holding lock:
[ 38.426787] ffffffff8296a5a0 (&cpuset_rwsem){++++}-{0:0}, at: cpuset_attach+0x3e/0x1e0
[ 38.428015]
[ 38.428015] which lock already depends on the new lock.
[ 38.428015]
[ 38.429279]
[ 38.429279] the existing dependency chain (in reverse order) is:
[ 38.430445]
[ 38.430445] -> #1 (&cpuset_rwsem){++++}-{0:0}:
[ 38.431371] percpu_down_write+0x42/0x130
[ 38.432085] cpuset_css_online+0x2b/0x2e0
[ 38.432808] online_css+0x24/0x80
[ 38.433411] cgroup_apply_control_enable+0x2fa/0x330
[ 38.434273] cgroup_mkdir+0x396/0x4c0
[ 38.434930] kernfs_iop_mkdir+0x56/0x80
[ 38.435614] vfs_mkdir+0xde/0x190
[ 38.436220] do_mkdirat+0x7d/0xf0
[ 38.436824] __x64_sys_mkdir+0x21/0x30
[ 38.437495] do_syscall_64+0x3a/0x80
[ 38.438145] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 38.439015]
[ 38.439015] -> #0 (cpu_hotplug_lock){++++}-{0:0}:
[ 38.439980] __lock_acquire+0x17f6/0x2260
[ 38.440691] lock_acquire+0x277/0x320
[ 38.441347] cpus_read_lock+0x37/0xc0
[ 38.442011] cpuset_attach+0xc2/0x1e0
[ 38.442671] cgroup_migrate_execute+0x3a6/0x490
[ 38.443461] cgroup_attach_task+0x22c/0x3d0
[ 38.444197] __cgroup1_procs_write.constprop.21+0x10d/0x170
[ 38.445145] cgroup_file_write+0x6f/0x230
[ 38.445860] kernfs_fop_write_iter+0x130/0x1b0
[ 38.446636] new_sync_write+0x120/0x1b0
[ 38.447319] vfs_write+0x359/0x3b0
[ 38.447937] ksys_write+0xa2/0xe0
[ 38.448540] do_syscall_64+0x3a/0x80
[ 38.449183] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 38.450057]
[ 38.450057] other info that might help us debug this:
[ 38.450057]
[ 38.451297] Possible unsafe locking scenario:
[ 38.451297]
[ 38.452218]        CPU0                    CPU1
[ 38.452935]        ----                    ----
[ 38.453650]   lock(&cpuset_rwsem);
[ 38.454188]                                lock(cpu_hotplug_lock);
[ 38.455148]                                lock(&cpuset_rwsem);
[ 38.456069]   lock(cpu_hotplug_lock);
[ 38.456645]
[ 38.456645] *** DEADLOCK ***
[ 38.456645]
[ 38.457572] 5 locks held by bash/594:
[ 38.458156] #0: ffff888100d67470 (sb_writers#8){.+.+}-{0:0}, at: ksys_write+0xa2/0xe0
[ 38.459392] #1: ffff888100d06290 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0xfe/0x1b0
[ 38.460761] #2: ffffffff82967330 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_kn_lock_live+0xcf/0x1d0
[ 38.462137] #3: ffffffff82967100 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: cgroup_procs_write_start+0x78/0x240
[ 38.463749] #4: ffffffff8296a5a0 (&cpuset_rwsem){++++}-{0:0}, at: cpuset_attach+0x3e/0x1e0
[ 38.465052]
[ 38.465052] stack backtrace:
[ 38.465747] CPU: 0 PID: 594 Comm: bash Not tainted 5.16.0-rc4+ #13
[ 38.466712] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.13.0-48-gd9c812dda519-prebuilt.qemu.org 04/01/2014
[ 38.468507] Call Trace:
[ 38.468900] <TASK>
[ 38.469241] dump_stack_lvl+0x56/0x7b
[ 38.469827] check_noncircular+0x126/0x140
[ 38.470476] ? __lock_acquire+0x17f6/0x2260
[ 38.471136] __lock_acquire+0x17f6/0x2260
[ 38.471772] lock_acquire+0x277/0x320
[ 38.472352] ? cpuset_attach+0xc2/0x1e0
[ 38.472961] cpus_read_lock+0x37/0xc0
[ 38.473550] ? cpuset_attach+0xc2/0x1e0
[ 38.474159] cpuset_attach+0xc2/0x1e0
[ 38.474742] cgroup_migrate_execute+0x3a6/0x490
[ 38.475457] cgroup_attach_task+0x22c/0x3d0
[ 38.476121] ? __cgroup1_procs_write.constprop.21+0x10d/0x170
[ 38.477021] __cgroup1_procs_write.constprop.21+0x10d/0x170
[ 38.477904] cgroup_file_write+0x6f/0x230
[ 38.478540] kernfs_fop_write_iter+0x130/0x1b0
[ 38.479241] new_sync_write+0x120/0x1b0
[ 38.479849] vfs_write+0x359/0x3b0
[ 38.480391] ksys_write+0xa2/0xe0
[ 38.480920] do_syscall_64+0x3a/0x80
[ 38.481488] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 38.482289] RIP: 0033:0x7f229f42b224
[ 38.482857] Code: 89 02 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 8d 05 c1 07 2e 00 8b 00 85 c0 75 13 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 54 f3 c3 66 90 45
[ 38.485758] RSP: 002b:00007fffaa3eadd8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[ 38.486937] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007f229f42b224
[ 38.488051] RDX: 0000000000000004 RSI: 0000562679dc5410 RDI: 0000000000000001
[ 38.489164] RBP: 0000562679dc5410 R08: 000000000000000a R09: 0000000000000003
[ 38.490282] R10: 000000000000000a R11: 0000000000000246 R12: 00007f229f707760
[ 38.491395] R13: 0000000000000004 R14: 00007f229f7032a0 R15: 00007f229f702760
[ 38.492516] </TASK>


> Cheers,
> Longman
>

2022-01-17 12:41:33

by Waiman Long

Subject: Re: [Question] set_cpus_allowed_ptr() call failed at cpuset_attach()

On 1/16/22 21:25, Zhang Qiao wrote:
> hello
>
> On 2022/1/15 4:33, Waiman Long wrote:
>> On 1/14/22 11:20, Tejun Heo wrote:
>>> (cc'ing Waiman and Michal and quoting whole body)
>>>
>>> Seems sane to me but let's hear what Waiman and Michal think.
>>>
>>> On Fri, Jan 14, 2022 at 09:15:06AM +0800, Zhang Qiao wrote:
>>>> Hello everyone
>>>>
>>>>     I found the following warning log on qemu. I migrated a task from one cpuset cgroup to
>>>> another while also performing a cpu hotplug operation, and got the following call trace.
>>>>
>>>>     This may lead to an inconsistency between the task's affinity and cpuset.cpus of the
>>>> destination cpuset, even though the task is successfully migrated to the destination cpuset cgroup.
>>>>
>>>>     Can we use cpus_read_lock()/cpus_read_unlock() to guarantee that set_cpus_allowed_ptr()
>>>> doesn't fail, as follows:
>>>>
>>>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>>>> index d0e163a02099..2535d23d2c51 100644
>>>> --- a/kernel/cgroup/cpuset.c
>>>> +++ b/kernel/cgroup/cpuset.c
>>>> @@ -2265,6 +2265,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
>>>>          guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
>>>>
>>>>          cgroup_taskset_for_each(task, css, tset) {
>>>> +               cpus_read_lock();
>>>>                  if (cs != &top_cpuset)
>>>>                          guarantee_online_cpus(task, cpus_attach);
>>>>                  else
>>>> @@ -2274,6 +2275,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
>>>>                   * fail.  TODO: have a better way to handle failure here
>>>>                   */
>>>>                  WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
>>>> +               cpus_read_unlock();
>>>>
>>>>
>>>>     Is there a better solution?
>>>>
>>>>     Thanks
>> The change looks OK to me. However, we may need to run the full set of regression tests to make sure that lockdep won't complain about a potential deadlock.
>>
> I ran the test with lockdep enabled and got the lockdep warning below, so we
> should take cpu_hotplug_lock first, then take the cpuset_rwsem lock.
>
> thanks,
> Zhang Qiao
>
> [ 38.420372] ======================================================
> [ 38.421339] WARNING: possible circular locking dependency detected
> [ 38.422312] 5.16.0-rc4+ #13 Not tainted
> [ 38.422920] ------------------------------------------------------
> [ 38.423883] bash/594 is trying to acquire lock:
> [ 38.424595] ffffffff8286afc0 (cpu_hotplug_lock){++++}-{0:0}, at: cpuset_attach+0xc2/0x1e0
> [ 38.425880]
> [ 38.425880] but task is already holding lock:
> [ 38.426787] ffffffff8296a5a0 (&cpuset_rwsem){++++}-{0:0}, at: cpuset_attach+0x3e/0x1e0
> [ 38.428015]
> [ 38.428015] which lock already depends on the new lock.
> [ 38.428015]
> [ 38.429279]
> [ 38.429279] the existing dependency chain (in reverse order) is:
> [ 38.430445]
> [ 38.430445] -> #1 (&cpuset_rwsem){++++}-{0:0}:
> [ 38.431371] percpu_down_write+0x42/0x130
> [ 38.432085] cpuset_css_online+0x2b/0x2e0
> [ 38.432808] online_css+0x24/0x80
> [ 38.433411] cgroup_apply_control_enable+0x2fa/0x330
> [ 38.434273] cgroup_mkdir+0x396/0x4c0
> [ 38.434930] kernfs_iop_mkdir+0x56/0x80
> [ 38.435614] vfs_mkdir+0xde/0x190
> [ 38.436220] do_mkdirat+0x7d/0xf0
> [ 38.436824] __x64_sys_mkdir+0x21/0x30
> [ 38.437495] do_syscall_64+0x3a/0x80
> [ 38.438145] entry_SYSCALL_64_after_hwframe+0x44/0xae
> [ 38.439015]
> [ 38.439015] -> #0 (cpu_hotplug_lock){++++}-{0:0}:
> [ 38.439980] __lock_acquire+0x17f6/0x2260
> [ 38.440691] lock_acquire+0x277/0x320
> [ 38.441347] cpus_read_lock+0x37/0xc0
> [ 38.442011] cpuset_attach+0xc2/0x1e0
> [ 38.442671] cgroup_migrate_execute+0x3a6/0x490
> [ 38.443461] cgroup_attach_task+0x22c/0x3d0
> [ 38.444197] __cgroup1_procs_write.constprop.21+0x10d/0x170
> [ 38.445145] cgroup_file_write+0x6f/0x230
> [ 38.445860] kernfs_fop_write_iter+0x130/0x1b0
> [ 38.446636] new_sync_write+0x120/0x1b0
> [ 38.447319] vfs_write+0x359/0x3b0
> [ 38.447937] ksys_write+0xa2/0xe0
> [ 38.448540] do_syscall_64+0x3a/0x80
> [ 38.449183] entry_SYSCALL_64_after_hwframe+0x44/0xae
> [ 38.450057]
> [ 38.450057] other info that might help us debug this:
> [ 38.450057]
> [ 38.451297] Possible unsafe locking scenario:
> [ 38.451297]
> [ 38.452218]        CPU0                    CPU1
> [ 38.452935]        ----                    ----
> [ 38.453650]   lock(&cpuset_rwsem);
> [ 38.454188]                                lock(cpu_hotplug_lock);
> [ 38.455148]                                lock(&cpuset_rwsem);
> [ 38.456069]   lock(cpu_hotplug_lock);

Yes, you need to play around with lock ordering to make sure that
lockdep won't complain.

Cheers,
Longman

2022-01-17 14:31:56

by Zhang Qiao

Subject: Re: [Question] set_cpus_allowed_ptr() call failed at cpuset_attach()



On 2022/1/15 0:20, Tejun Heo wrote:
> (cc'ing Waiman and Michal and quoting whole body)
>
> Seems sane to me but let's hear what Waiman and Michal think.
>

Thank you for taking a look!
If it's OK, I will send a patch.

Thanks,
Zhang Qiao.

> On Fri, Jan 14, 2022 at 09:15:06AM +0800, Zhang Qiao wrote:
>>
>> Hello everyone
>>
>> I found the following warning log on qemu. I migrated a task from one cpuset cgroup to
>> another while also performing a cpu hotplug operation, and got the following call trace.
>>
>> This may lead to an inconsistency between the task's affinity and cpuset.cpus of the
>> destination cpuset, even though the task is successfully migrated to the destination cpuset cgroup.
>>
>> Can we use cpus_read_lock()/cpus_read_unlock() to guarantee that set_cpus_allowed_ptr()
>> doesn't fail, as follows:
>>
>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>> index d0e163a02099..2535d23d2c51 100644
>> --- a/kernel/cgroup/cpuset.c
>> +++ b/kernel/cgroup/cpuset.c
>> @@ -2265,6 +2265,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
>>  	guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
>> 
>>  	cgroup_taskset_for_each(task, css, tset) {
>> +		cpus_read_lock();
>>  		if (cs != &top_cpuset)
>>  			guarantee_online_cpus(task, cpus_attach);
>>  		else
>> @@ -2274,6 +2275,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
>>  		 * fail.  TODO: have a better way to handle failure here
>>  		 */
>>  		WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
>> +		cpus_read_unlock();
>>
>>
>> Is there a better solution?
>>
>> Thanks
>>
>> log:
>> [ 43.853794] ------------[ cut here ]------------
>> [ 43.853798] WARNING: CPU: 7 PID: 463 at ../kernel/cgroup/cpuset.c:2279 cpuset_attach+0xee/0x1f0
>> [ 43.853806] Modules linked in:
>> [ 43.853807] CPU: 7 PID: 463 Comm: bash Not tainted 5.16.0-rc4+ #10
>> [ 43.853810] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.13.0-48-gd9c812dda519-prebuilt.qemu.org 04/01/2014
>> [ 43.853811] RIP: 0010:cpuset_attach+0xee/0x1f0
>> [ 43.853814] Code: ff ff 48 85 c0 48 89 c3 74 24 48 81 fd 40 42 54 82 75 96 80 bb 38 07 00 00 6f 48 8b 05 93 b3 55 01 48 89 05 bc 05 bb 01 75 97 <0f> 0b eb b3 48 8b 85 e8 00 00 00 48 85
>> [ 43.853816] RSP: 0018:ffffc90000623c30 EFLAGS: 00010246
>> [ 43.853818] RAX: 0000000000000000 RBX: ffff888101f39c80 RCX: 0000000000000001
>> [ 43.853819] RDX: 0000000000007fff RSI: ffffffff82cd5708 RDI: ffff888101f39c80
>> [ 43.853821] RBP: ffff8881001afe00 R08: 0000000000000000 R09: ffffc90000623d00
>> [ 43.853822] R10: ffffc900000a3de8 R11: 0000000000000001 R12: ffffc90000623cf0
>> [ 43.853823] R13: ffffffff82cd56d0 R14: ffffffff82544240 R15: 0000000000000001
>> [ 43.853824] FS: 00007f012414d740(0000) GS:ffff8882b5bc0000(0000) knlGS:0000000000000000
>> [ 43.853828] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>> [ 43.853829] CR2: 000055cfdb27de28 CR3: 00000001020cc000 CR4: 00000000000006e0
>> [ 43.853830] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>> [ 43.853831] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
>> [ 43.853832] Call Trace:
>> [ 43.853846] <TASK>
>> [ 43.853848] cgroup_migrate_execute+0x319/0x410
>> [ 43.853853] cgroup_attach_task+0x159/0x200
>> [ 43.853857] ? __cgroup1_procs_write.constprop.21+0x10d/0x170
>> [ 43.853858] __cgroup1_procs_write.constprop.21+0x10d/0x170
>> [ 43.853860] cgroup_file_write+0x65/0x160
>> [ 43.853863] kernfs_fop_write_iter+0x12a/0x1a0
>> [ 43.853870] new_sync_write+0x11d/0x1b0
>> [ 43.853877] vfs_write+0x232/0x290
>> [ 43.853880] ksys_write+0x9c/0xd0
>> [ 43.853882] ? fpregs_assert_state_consistent+0x19/0x40
>> [ 43.853886] do_syscall_64+0x3a/0x80
>> [ 43.853896] entry_SYSCALL_64_after_hwframe+0x44/0xae
>> [ 43.853902] RIP: 0033:0x7f012381f224
>> [ 43.853904] Code: 89 02 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 8d 05 c1 07 2e 00 8b 00 85 c0 75 13 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 54 f3 c3 66 90 45
>> [ 43.853906] RSP: 002b:00007ffd3f411f28 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
>> [ 43.853908] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007f012381f224
>> [ 43.853909] RDX: 0000000000000004 RSI: 000055cfdb297a70 RDI: 0000000000000001
>> [ 43.853910] RBP: 000055cfdb297a70 R08: 000000000000000a R09: 0000000000000003
>> [ 43.853911] R10: 000000000000000a R11: 0000000000000246 R12: 00007f0123afb760
>> [ 43.853913] R13: 0000000000000004 R14: 00007f0123af72a0 R15: 00007f0123af6760
>> [ 43.853914] </TASK>
>> [ 43.853915] ---[ end trace 8292bcee7ea90403 ]---
>

2022-01-17 14:32:01

by Zhang Qiao

Subject: Re: [Question] set_cpus_allowed_ptr() call failed at cpuset_attach()



On 2022/1/17 12:35, Waiman Long wrote:
> On 1/16/22 21:25, Zhang Qiao wrote:
>> hello
>>
>> On 2022/1/15 4:33, Waiman Long wrote:
>>> On 1/14/22 11:20, Tejun Heo wrote:
>>>> (cc'ing Waiman and Michal and quoting whole body)
>>>>
>>>> Seems sane to me but let's hear what Waiman and Michal think.
>>>>
>>>> On Fri, Jan 14, 2022 at 09:15:06AM +0800, Zhang Qiao wrote:
>>>>> Hello everyone
>>>>>
>>>>>      I found the following warning log on qemu. I migrated a task from one cpuset cgroup to
>>>>> another while also performing a cpu hotplug operation, and got the following call trace.
>>>>>
>>>>>      This may lead to an inconsistency between the task's affinity and cpuset.cpus of the
>>>>> destination cpuset, even though the task is successfully migrated to the destination cpuset cgroup.
>>>>>
>>>>>      Can we use cpus_read_lock()/cpus_read_unlock() to guarantee that set_cpus_allowed_ptr()
>>>>> doesn't fail, as follows:
>>>>>
>>>>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>>>>> index d0e163a02099..2535d23d2c51 100644
>>>>> --- a/kernel/cgroup/cpuset.c
>>>>> +++ b/kernel/cgroup/cpuset.c
>>>>> @@ -2265,6 +2265,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
>>>>>           guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
>>>>>
>>>>>           cgroup_taskset_for_each(task, css, tset) {
>>>>> +               cpus_read_lock();
>>>>>                   if (cs != &top_cpuset)
>>>>>                           guarantee_online_cpus(task, cpus_attach);
>>>>>                   else
>>>>> @@ -2274,6 +2275,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
>>>>>                    * fail.  TODO: have a better way to handle failure here
>>>>>                    */
>>>>>                   WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
>>>>> +               cpus_read_unlock();
>>>>>
>>>>>
>>>>>      Is there a better solution?
>>>>>
>>>>>      Thanks
>>> The change looks OK to me. However, we may need to run the full set of regression tests to make sure that lockdep won't complain about a potential deadlock.
>>>
>> I ran the test with lockdep enabled and got the lockdep warning below, so we
>> should take cpu_hotplug_lock first, then take the cpuset_rwsem lock.
>>
>> thanks,
>> Zhang Qiao
>>
>> [   38.420372] ======================================================
>> [   38.421339] WARNING: possible circular locking dependency detected
>> [   38.422312] 5.16.0-rc4+ #13 Not tainted
>> [   38.422920] ------------------------------------------------------
>> [   38.423883] bash/594 is trying to acquire lock:
>> [   38.424595] ffffffff8286afc0 (cpu_hotplug_lock){++++}-{0:0}, at: cpuset_attach+0xc2/0x1e0
>> [   38.425880]
>> [   38.425880] but task is already holding lock:
>> [   38.426787] ffffffff8296a5a0 (&cpuset_rwsem){++++}-{0:0}, at: cpuset_attach+0x3e/0x1e0
>> [   38.428015]
>> [   38.428015] which lock already depends on the new lock.
>> [   38.428015]
>> [   38.429279]
>> [   38.429279] the existing dependency chain (in reverse order) is:
>> [   38.430445]
>> [   38.430445] -> #1 (&cpuset_rwsem){++++}-{0:0}:
>> [   38.431371]        percpu_down_write+0x42/0x130
>> [   38.432085]        cpuset_css_online+0x2b/0x2e0
>> [   38.432808]        online_css+0x24/0x80
>> [   38.433411]        cgroup_apply_control_enable+0x2fa/0x330
>> [   38.434273]        cgroup_mkdir+0x396/0x4c0
>> [   38.434930]        kernfs_iop_mkdir+0x56/0x80
>> [   38.435614]        vfs_mkdir+0xde/0x190
>> [   38.436220]        do_mkdirat+0x7d/0xf0
>> [   38.436824]        __x64_sys_mkdir+0x21/0x30
>> [   38.437495]        do_syscall_64+0x3a/0x80
>> [   38.438145]        entry_SYSCALL_64_after_hwframe+0x44/0xae
>> [   38.439015]
>> [   38.439015] -> #0 (cpu_hotplug_lock){++++}-{0:0}:
>> [   38.439980]        __lock_acquire+0x17f6/0x2260
>> [   38.440691]        lock_acquire+0x277/0x320
>> [   38.441347]        cpus_read_lock+0x37/0xc0
>> [   38.442011]        cpuset_attach+0xc2/0x1e0
>> [   38.442671]        cgroup_migrate_execute+0x3a6/0x490
>> [   38.443461]        cgroup_attach_task+0x22c/0x3d0
>> [   38.444197]        __cgroup1_procs_write.constprop.21+0x10d/0x170
>> [   38.445145]        cgroup_file_write+0x6f/0x230
>> [   38.445860]        kernfs_fop_write_iter+0x130/0x1b0
>> [   38.446636]        new_sync_write+0x120/0x1b0
>> [   38.447319]        vfs_write+0x359/0x3b0
>> [   38.447937]        ksys_write+0xa2/0xe0
>> [   38.448540]        do_syscall_64+0x3a/0x80
>> [   38.449183]        entry_SYSCALL_64_after_hwframe+0x44/0xae
>> [   38.450057]
>> [   38.450057] other info that might help us debug this:
>> [   38.450057]
>> [   38.451297]  Possible unsafe locking scenario:
>> [   38.451297]
>> [   38.452218]        CPU0                    CPU1
>> [   38.452935]        ----                    ----
>> [   38.453650]   lock(&cpuset_rwsem);
>> [   38.454188]                                lock(cpu_hotplug_lock);
>> [   38.455148]                                lock(&cpuset_rwsem);
>> [   38.456069]   lock(cpu_hotplug_lock);
>
> Yes, you need to play around with lock ordering to make sure that lockdep won't complain.
>
Thank you for taking a look!
If it's OK, I will send a patch.

Thanks,
Zhang Qiao.

> Cheers,
> Longman
>

2022-01-21 19:15:47

by Michal Koutný

Subject: Re: [Question] set_cpus_allowed_ptr() call failed at cpuset_attach()

On Fri, Jan 14, 2022 at 09:15:06AM +0800, Zhang Qiao <[email protected]> wrote:
> I found the following warning log on qemu. I migrated a task from one cpuset cgroup to
> another while also performing a cpu hotplug operation, and got the following call trace.

Do you have more information on what hotplug event occurred and what error
(from set_cpus_allowed_ptr()) you observed? (And what are the src/dst cpusets
wrt root/non-root?)

> Can we use cpus_read_lock()/cpus_read_unlock() to guarantee that set_cpus_allowed_ptr()
> doesn't fail, as follows:

I'm wondering what can be wrong with the current actors:

cpuset_can_attach
  down_read(cpuset_rwsem)
  // check all migratees
  up_read(cpuset_rwsem)
                                [ _cpu_down / cpuhp_setup_state ]
                                  schedule_work
                                  ...
                                    cpuset_hotplug_update_tasks
                                      down_write(cpuset_rwsem)
                                      up_write(cpuset_rwsem)
                                  ... flush_work
                                [ _cpu_down / cpu_up_down_serialize_trainwrecks ]
cpuset_attach
  down_write(cpuset_rwsem)
  set_cpus_allowed_ptr(allowed_cpus_weird)
  up_write(cpuset_rwsem)

The statement in cpuset_attach() about the cpuset_can_attach() test is not
very strong, since task_can_attach() is mostly a pass for non-deadline
tasks. Still, the use of cpuset_rwsem above should synchronize (I may be
mistaken) the changes to the cpuset's cpu masks, so I'd be interested in
the details above to understand why the current approach doesn't work.

The additional cpus_read_{,un}lock (when reordered wrt cpuset_rwsem)
may work but your patch should explain why (in what situation).

My .02€,
Michal

2022-01-21 21:08:18

by Zhang Qiao

Subject: Re: [Question] set_cpus_allowed_ptr() call failed at cpuset_attach()

hello

On 2022/1/19 21:02, Michal Koutný wrote:
> On Fri, Jan 14, 2022 at 09:15:06AM +0800, Zhang Qiao <[email protected]> wrote:
>> I found the following warning log on qemu. I migrated a task from one cpuset cgroup to
>> another, while I also performed the cpu hotplug operation, and got following calltrace.
>
> Do you have more information on what hotplug event occurred and what error
> (from set_cpus_allowed_ptr()) you observed? (And what are the src/dst cpusets
> wrt root/non-root?)
I ran the LTP testcases together with a test script that does hotplug on a
random cpu at the same time. The race condition triggered quickly under that
stress, but I haven't been able to reproduce it since.
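
For reference, here is a minimal sketch of that kind of stress test (not the
actual LTP setup): it assumes a cgroup v1 cpuset hierarchy mounted at
/sys/fs/cgroup/cpuset with child groups "a" and "b" already created, and at
least 4 CPUs; the paths and group names are assumptions for illustration.

/*
 * Hedged reproducer sketch: one process toggles a random non-boot CPU
 * offline/online in a loop while the other migrates itself between two
 * cpuset cgroups.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return;		/* e.g. the CPU cannot go offline right now */
	fputs(val, f);
	fclose(f);
}

int main(void)
{
	char path[128], pid[32];

	snprintf(pid, sizeof(pid), "%d", getpid());
	if (fork() == 0) {
		for (;;) {	/* hotplug side */
			int cpu = 1 + rand() % 3;	/* assumes >= 4 CPUs */

			snprintf(path, sizeof(path),
				 "/sys/devices/system/cpu/cpu%d/online", cpu);
			write_str(path, "0");
			write_str(path, "1");
		}
	}
	for (;;) {		/* migration side: bounce between cpusets */
		write_str("/sys/fs/cgroup/cpuset/a/tasks", pid);
		write_str("/sys/fs/cgroup/cpuset/b/tasks", pid);
	}
}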
By reading the code around set_cpus_allowed_ptr(), I think
__set_cpus_allowed_ptr_locked() will fail when new_mask and cpu_active_mask
do not intersect, as follows:

__set_cpus_allowed_ptr_locked():
	....
	const struct cpumask *cpu_valid_mask = cpu_active_mask;

	dest_cpu = cpumask_any_and_distribute(cpu_valid_mask, new_mask);
	if (dest_cpu >= nr_cpu_ids) {
		ret = -EINVAL;
		goto out;
	}
	....
}


>
>> Can we use cpus_read_lock()/cpus_read_unlock() to guarantee that set_cpus_allowed_ptr()
>> doesn't fail, as follows:
>
> I'm wondering what can be wrong with the current actors:
>
> cpuset_can_attach
>   down_read(cpuset_rwsem)
>   // check all migratees
>   up_read(cpuset_rwsem)
>                                 [ _cpu_down / cpuhp_setup_state ]
>                                   schedule_work
>                                   ...
>                                     cpuset_hotplug_update_tasks
>                                       down_write(cpuset_rwsem)
>                                       up_write(cpuset_rwsem)
>                                   ... flush_work
>                                 [ _cpu_down / cpu_up_down_serialize_trainwrecks ]
> cpuset_attach
>   down_write(cpuset_rwsem)
>   set_cpus_allowed_ptr(allowed_cpus_weird)
>   up_write(cpuset_rwsem)
>

I think the troublesome scenario is as follows:

cpuset_can_attach
  down_read(cpuset_rwsem)
  // check all migratees
  up_read(cpuset_rwsem)
                                [ _cpu_down / cpuhp_setup_state ]
cpuset_attach
  down_write(cpuset_rwsem)
  guarantee_online_cpus()       // (load cpus_attach)
                                  sched_cpu_deactivate
                                    set_cpu_active(cpu, false)
                                    // will change cpu_active_mask
  set_cpus_allowed_ptr(cpus_attach)
    __set_cpus_allowed_ptr_locked()
    // (if the intersection of cpus_attach and
    //  cpu_active_mask is empty, will return -EINVAL)
  up_write(cpuset_rwsem)
                                  schedule_work
                                  ...
                                    cpuset_hotplug_update_tasks
                                      down_write(cpuset_rwsem)
                                      up_write(cpuset_rwsem)
                                  ... flush_work
                                [ _cpu_down / cpu_up_down_serialize_trainwrecks ]


Regards,
Qiao

> The statement in cpuset_attach() about the cpuset_can_attach() test is not
> very strong, since task_can_attach() is mostly a pass for non-deadline
> tasks. Still, the use of cpuset_rwsem above should synchronize (I may be
> mistaken) the changes to the cpuset's cpu masks, so I'd be interested in
> the details above to understand why the current approach doesn't work.
>
> The additional cpus_read_{,un}lock (when reordered wrt cpuset_rwsem)
> may work but your patch should explain why (in what situation).
>
> My .02€,
> Michal

2022-01-21 22:13:19

by Michal Koutný

Subject: Re: [Question] set_cpus_allowed_ptr() call failed at cpuset_attach()

On Thu, Jan 20, 2022 at 03:14:22PM +0800, Zhang Qiao <[email protected]> wrote:
> I think the troublesome scenario is as follows:
>
> cpuset_can_attach
>   down_read(cpuset_rwsem)
>   // check all migratees
>   up_read(cpuset_rwsem)
>                                 [ _cpu_down / cpuhp_setup_state ]
> cpuset_attach
>   down_write(cpuset_rwsem)
>   guarantee_online_cpus()       // (load cpus_attach)
>                                   sched_cpu_deactivate
>                                     set_cpu_active(cpu, false)
>                                     // will change cpu_active_mask
>   set_cpus_allowed_ptr(cpus_attach)
>     __set_cpus_allowed_ptr_locked()
>     // (if the intersection of cpus_attach and
>     //  cpu_active_mask is empty, will return -EINVAL)
>   up_write(cpuset_rwsem)
>                                   schedule_work
>                                   ...
>                                     cpuset_hotplug_update_tasks
>                                       down_write(cpuset_rwsem)
>                                       up_write(cpuset_rwsem)
>                                   ... flush_work
>                                 [ _cpu_down / cpu_up_down_serialize_trainwrecks ]

Thanks, a locking loophole indeed.

FTR, meanwhile I noticed: a) cpuset_fork() looks buggy when
CLONE_INTO_CGROUP is used (and dst.cpus != src.cpus), b) it'd be affected
by a similar hotplug race.

Michal

2022-01-22 00:42:15

by Zhang Qiao

Subject: Re: [Question] set_cpus_allowed_ptr() call failed at cpuset_attach()



On 2022/1/20 22:02, Michal Koutný wrote:
> On Thu, Jan 20, 2022 at 03:14:22PM +0800, Zhang Qiao <[email protected]> wrote:
>> I think the troublesome scenario is as follows:
>>
>> cpuset_can_attach
>>   down_read(cpuset_rwsem)
>>   // check all migratees
>>   up_read(cpuset_rwsem)
>>                                 [ _cpu_down / cpuhp_setup_state ]
>> cpuset_attach
>>   down_write(cpuset_rwsem)
>>   guarantee_online_cpus()       // (load cpus_attach)
>>                                   sched_cpu_deactivate
>>                                     set_cpu_active(cpu, false)
>>                                     // will change cpu_active_mask
>>   set_cpus_allowed_ptr(cpus_attach)
>>     __set_cpus_allowed_ptr_locked()
>>     // (if the intersection of cpus_attach and
>>     //  cpu_active_mask is empty, will return -EINVAL)
>>   up_write(cpuset_rwsem)
>>                                   schedule_work
>>                                   ...
>>                                     cpuset_hotplug_update_tasks
>>                                       down_write(cpuset_rwsem)
>>                                       up_write(cpuset_rwsem)
>>                                   ... flush_work
>>                                 [ _cpu_down / cpu_up_down_serialize_trainwrecks ]
>
> Thanks, a locking loophole indeed.
>
> FTR, meanwhile I noticed: a) cpuset_fork() looks buggy when
> CLONE_INTO_CGROUP is used (and dst.cpus != src.cpus), b) it'd be affected
> by a similar hotplug race.

Yes, cpuset_fork() shouldn't copy the current task's cpumask to the child
process.
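
For reference, the v5.16-era cpuset_fork() looks roughly like the sketch
below (paraphrased from memory, not a verbatim quote of
kernel/cgroup/cpuset.c; check the actual tree): it unconditionally copies
the parent's masks, which is wrong when CLONE_INTO_CGROUP placed the child
in a cpuset whose cpus/mems differ from the parent's.

/*
 * Rough sketch of the v5.16-era cpuset_fork(): the child inherits
 * current's affinity and mems even if CLONE_INTO_CGROUP put it in a
 * different cpuset.
 */
static void cpuset_fork(struct task_struct *task)
{
	if (task_css_is_root(task, cpuset_cgrp_id))
		return;

	/* copies the parent's masks, not the destination cpuset's */
	set_cpus_allowed_ptr(task, current->cpus_ptr);
	task->mems_allowed = current->mems_allowed;
}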

Regards,
Qiao
>
> Michal