As a temporary storage, array staged_config in struct rdt_domain is not
cleared after it has been used. The stale value in staged_config could
cause a MSR access error.
If resctrl is mounted with CDP enabled and then remounted with CDP
disabled, the value of staged_config in domainX via mkdir changes as
follows:
CDP enabled:
mkdir <resource control group>
domainX.staged_config[CDP_NONE].have_new_ctrl = false
domainX.staged_config[CDP_NONE].new_ctrl = 0
domainX.staged_config[CDP_CODE].have_new_ctrl = true
domainX.staged_config[CDP_CODE].new_ctrl = <default mask>
domainX.staged_config[CDP_DATA].have_new_ctrl = true
domainX.staged_config[CDP_CODE].new_ctrl = <default mask>
unmount/remount resctrl (CDP disabled):
mkdir <resource control group>
domainX.staged_config[CDP_NONE].have_new_ctrl = true
domainX.staged_config[CDP_NONE].new_ctrl = <default mask>
domainX.staged_config[CDP_CODE].have_new_ctrl = true /* stale */
domainX.staged_config[CDP_CODE].new_ctrl = <default mask> /* stale */
domainX.staged_config[CDP_DATA].have_new_ctrl = true /* stale */
domainX.staged_config[CDP_CODE].new_ctrl = <default mask> /* stale */
When creating a new resource control group, hardware will be configured by
resctrl_arch_update_domains():
rdtgroup_mkdir()->rdtgroup_mkdir_ctrl_mon()->rdtgroup_init_alloc()->resctrl_arch_update_domains()
Since resctrl_arch_update_domains() iterates and updates all
resctrl_conf_type whose have_new_ctrl is true, it will continue to update
the stale CDP_CODE and CDP_DATA configurations when CDP is disabled.
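For reference, a simplified sketch of that loop (paraphrased from
ctrlmondata.c; the MSR plumbing is elided, so this is not the exact
source):

list_for_each_entry(d, &r->domains, list) {
        hw_dom = resctrl_to_arch_dom(d);
        for (t = 0; t < CDP_NUM_TYPES; t++) {
                cfg = &hw_dom->d_resctrl.staged_config[t];
                if (!cfg->have_new_ctrl)
                        continue;       /* stale entries still pass this check */
                idx = get_config_index(closid, t);
                /* ... stage the write to the CBM MSR at index idx ... */
        }
}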
Based on the above analysis, an error can be reproduced on a system with
16 usable CLOSIDs for a 15-way L3 cache (MBA should be disabled if the
number of CLOSIDs for MB is less than 16):
mount -t resctrl resctrl -o cdp /sys/fs/resctrl
mkdir /sys/fs/resctrl/p{1..7}
umount /sys/fs/resctrl/
mount -t resctrl resctrl /sys/fs/resctrl
mkdir /sys/fs/resctrl/p{1..8}
dmesg will generate the following error:
[ 6180.939345] unchecked MSR access error: WRMSR to 0xca0 (tried to write
0x00000000000007ff) at rIP: 0xffffffff82249142 (cat_wrmsr+0x32/0x60)
[ 6180.951983] Call Trace:
[ 6180.954516] <IRQ>
[ 6180.956619] __flush_smp_call_function_queue+0x11d/0x170
[ 6180.962028] __sysvec_call_function+0x24/0xd0
[ 6180.966485] sysvec_call_function+0x89/0xc0
[ 6180.970760] </IRQ>
[ 6180.972947] <TASK>
[ 6180.975131] asm_sysvec_call_function+0x16/0x20
[ 6180.979757] RIP: 0010:cpuidle_enter_state+0xcd/0x400
[ 6180.984821] Code: 49 89 c5 0f 1f 44 00 00 31 ff e8 1e e5 77 ff 45 84
ff 74 12 9c 58 f6 c4 02 0f 85 13 03 00 00 31 ff e8 67 70 7d ff fb 45 85
f6 <0f> 88 75 01 00 00 49 63 c6 4c 2b 2c 24 48 8d 14 40 48 8d 14 90 49
[ 6181.003710] RSP: 0018:ffffffff83a03e48 EFLAGS: 00000202
[ 6181.009028] RAX: ffff943400800000 RBX: 0000000000000001 RCX: 000000000000001f
[ 6181.016261] RDX: 0000000000000000 RSI: ffffffff83795059 RDI: ffffffff837c101e
[ 6181.023490] RBP: ffff9434c9352000 R08: 0000059f1cb1a05e R09: 0000000000000008
[ 6181.030717] R10: 0000000000000001 R11: 0000000000005c66 R12: ffffffff83bbf3a0
[ 6181.037944] R13: 0000059f1cb1a05e R14: 0000000000000001 R15: 0000000000000000
[ 6181.045202] ? cpuidle_enter_state+0xb2/0x400
[ 6181.049678] cpuidle_enter+0x24/0x40
[ 6181.053370] do_idle+0x1dd/0x260
[ 6181.056713] cpu_startup_entry+0x14/0x20
[ 6181.060753] rest_init+0xbb/0xc0
[ 6181.064097] arch_call_rest_init+0x5/0xa
[ 6181.068137] start_kernel+0x668/0x691
[ 6181.071914] secondary_startup_64_no_verify+0xe0/0xeb
[ 6181.077086] </TASK>
As Reinette Chatre explained:
https://lore.kernel.org/all/[email protected]/
"
The value of interest here is the register it tries to write to ... 0xca0.
On a system with 16 CLOSIDs the range of registers available to set the
CBM would be 0xc90 to 0xc9f that corresponds to CLOSID 0 to CLOSID 15.
The error is an attempt to write to an unsupported register - there
appears to have been an attempt to configure non-existent CLOSID 16.
Above becomes an issue when the resource group being created is for a
CLOSID # that is more than half of the CLOSIDs supported. In the
reproducer the issue was encountered when creating resource group for
CLOSID 8 on a system that supports 16 CLOSIDs.
In this case get_config_index() called from resctrl_arch_update_domains()
will return 16 and 17 when processing this resource group and that
translated to an invalid register - 0xca0 in this scenario.
"
Fix this issue by clearing the staged configs when the configuration is
completed.
Fixes: 75408e43509ed ("x86/resctrl: Allow different CODE/DATA configurations to be staged")
Cc: <[email protected]>
Signed-off-by: Shawn Wang <[email protected]>
Suggested-by: Xin Hao <[email protected]>
---
Changes since v1:
- Move the clearing from schemata_list_destroy() to resctrl_arch_update_domains().
- Update the commit message as suggested by Reinette Chatre.
- Add stable tag as suggested by James Morse.
---
arch/x86/kernel/cpu/resctrl/ctrlmondata.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
index 1dafbdc5ac31..2c719da5544f 100644
--- a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
+++ b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
@@ -338,6 +338,8 @@ int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid)
msr_param.high = max(msr_param.high, idx + 1);
}
}
+ /* Clear the stale staged config */
+ memset(d->staged_config, 0, sizeof(d->staged_config));
}
if (cpumask_empty(cpu_mask))
--
2.27.0
Hi Shawn,
Thank you very much for working on getting this fixed.
On 10/9/2022 1:36 AM, Shawn Wang wrote:
> As a temporary storage, array staged_config in struct rdt_domain is not
staged_config -> staged_config[]
> cleared after it has been used. The stale value in staged_config could
staged_config -> staged_config[]
(please make the above change in the rest of the changelog, doing so
makes it easier to read)
> cause a MSR access error.
a MSR -> an MSR
>
> If resctrl is mounted with CDP enabled and then remounted with CDP
> disabled, the value of staged_config in domainX via mkdir changes as
> follows:
>
> CDP enabled:
> mkdir <resource control group>
> domainX.staged_config[CDP_NONE].have_new_ctrl = false
> domainX.staged_config[CDP_NONE].new_ctrl = 0
> domainX.staged_config[CDP_CODE].have_new_ctrl = true
> domainX.staged_config[CDP_CODE].new_ctrl = <default mask>
> domainX.staged_config[CDP_DATA].have_new_ctrl = true
> domainX.staged_config[CDP_CODE].new_ctrl = <default mask>
Apologies, you copied a copy&paste error from me ... the last one
should be CDP_DATA.
>
> unmount/remount resctrl (CDP disabled):
> mkdir <resource control group>
> domainX.staged_config[CDP_NONE].have_new_ctrl = true
> domainX.staged_config[CDP_NONE].new_ctrl = <default mask>
> domainX.staged_config[CDP_CODE].have_new_ctrl = true /* stale */
> domainX.staged_config[CDP_CODE].new_ctrl = <default mask> /* stale */
> domainX.staged_config[CDP_DATA].have_new_ctrl = true /* stale */
> domainX.staged_config[CDP_CODE].new_ctrl = <default mask> /* stale */
(same typo here)
>
> When creating a new resource control group, hardware will be configured by
> resctrl_arch_update_domains():
> rdtgroup_mkdir()->rdtgroup_mkdir_ctrl_mon()->rdtgroup_init_alloc()->resctrl_arch_update_domains()
>
> Since resctrl_arch_update_domains() iterates and updates all
> resctrl_conf_type whose have_new_ctrl is true, it will continue to update
> the stale CDP_CODE and CDP_DATA configurations when CDP is disabled.
>
> Based on the above analysis, an error can be reproduced on a system with
> 16 usable CLOSIDs for a 15-way L3 cache (MBA should be disabled if the
> number of CLOSIDs for MB is less than 16):
> mount -t resctrl resctrl -o cdp /sys/fs/resctrl
> mkdir /sys/fs/resctrl/p{1..7}
> umount /sys/fs/resctrl/
> mount -t resctrl resctrl /sys/fs/resctrl
> mkdir /sys/fs/resctrl/p{1..8}
>
> dmesg will generate the following error:
I think it would be helpful to connect this reproducer to the
explanation of what happens. Maybe just something like:
"Upon creating directory "p8", to which CLOSID 8 is automatically
assigned, the following error is generated:"
Please consider what is the most useful information found in the backtrace
and only include that. Interestingly enough, there is a very related example
of what information is useful in Documentation/process/submitting-patches.rst
(section "Backtraces in commit messages").
> [ 6180.939345] unchecked MSR access error: WRMSR to 0xca0 (tried to write
> 0x00000000000007ff) at rIP: 0xffffffff82249142 (cat_wrmsr+0x32/0x60)
> [ 6180.951983] Call Trace:
> [ 6180.954516] <IRQ>
> [ 6180.956619] __flush_smp_call_function_queue+0x11d/0x170
> [ 6180.962028] __sysvec_call_function+0x24/0xd0
> [ 6180.966485] sysvec_call_function+0x89/0xc0
> [ 6180.970760] </IRQ>
> [ 6180.972947] <TASK>
> [ 6180.975131] asm_sysvec_call_function+0x16/0x20
> [ 6180.979757] RIP: 0010:cpuidle_enter_state+0xcd/0x400
> [ 6180.984821] Code: 49 89 c5 0f 1f 44 00 00 31 ff e8 1e e5 77 ff 45 84
> ff 74 12 9c 58 f6 c4 02 0f 85 13 03 00 00 31 ff e8 67 70 7d ff fb 45 85
> f6 <0f> 88 75 01 00 00 49 63 c6 4c 2b 2c 24 48 8d 14 40 48 8d 14 90 49
> [ 6181.003710] RSP: 0018:ffffffff83a03e48 EFLAGS: 00000202
> [ 6181.009028] RAX: ffff943400800000 RBX: 0000000000000001 RCX: 000000000000001f
> [ 6181.016261] RDX: 0000000000000000 RSI: ffffffff83795059 RDI: ffffffff837c101e
> [ 6181.023490] RBP: ffff9434c9352000 R08: 0000059f1cb1a05e R09: 0000000000000008
> [ 6181.030717] R10: 0000000000000001 R11: 0000000000005c66 R12: ffffffff83bbf3a0
> [ 6181.037944] R13: 0000059f1cb1a05e R14: 0000000000000001 R15: 0000000000000000
> [ 6181.045202] ? cpuidle_enter_state+0xb2/0x400
> [ 6181.049678] cpuidle_enter+0x24/0x40
> [ 6181.053370] do_idle+0x1dd/0x260
> [ 6181.056713] cpu_startup_entry+0x14/0x20
> [ 6181.060753] rest_init+0xbb/0xc0
> [ 6181.064097] arch_call_rest_init+0x5/0xa
> [ 6181.068137] start_kernel+0x668/0x691
> [ 6181.071914] secondary_startup_64_no_verify+0xe0/0xeb
> [ 6181.077086] </TASK>
>
> As Reinette Chatre explained:
> https://lore.kernel.org/all/[email protected]/
> "
> The value of interest here is the register it tries to write to ... 0xca0.
> On a system with 16 CLOSIDs the range of registers available to set the
> CBM would be 0xc90 to 0xc9f that corresponds to CLOSID 0 to CLOSID 15.
> The error is an attempt to write to an unsupported register - there
> appears to have been an attempt to configure non-existent CLOSID 16.
>
> Above becomes an issue when the resource group being created is for a
> CLOSID # that is more than half of the CLOSIDs supported. In the
> reproducer the issue was encountered when creating resource group for
> CLOSID 8 on a system that supports 16 CLOSIDs.
>
> In this case get_config_index() called from resctrl_arch_update_domains()
> will return 16 and 17 when processing this resource group and that
> translated to an invalid register - 0xca0 in this scenario.
> "
>
I am not sure if adding this full quote is necessary to explain the issue.
Maybe you could just summarize it and move it to before the "Based on the
above analysis ..."
> Fix this issue by clearing the staged configs when the configuration is
> completed.
>
> Fixes: 75408e43509ed ("x86/resctrl: Allow different CODE/DATA configurations to be staged")
> Cc: <[email protected]>
> Signed-off-by: Shawn Wang <[email protected]>
> Suggested-by: Xin Hao <[email protected]>
> ---
> Changes since v1:
> - Move the clearing from schemata_list_destroy() to resctrl_arch_update_domains().
> - Update the commit message as suggested by Reinette Chatre.
> - Add stable tag as suggested by James Morse.
> ---
> arch/x86/kernel/cpu/resctrl/ctrlmondata.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
> index 1dafbdc5ac31..2c719da5544f 100644
> --- a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
> +++ b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
> @@ -338,6 +338,8 @@ int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid)
> msr_param.high = max(msr_param.high, idx + 1);
> }
> }
> + /* Clear the stale staged config */
> + memset(d->staged_config, 0, sizeof(d->staged_config));
> }
>
> if (cpumask_empty(cpu_mask))
Please also ensure that the temporary storage is cleared if there is an
early exit because of failure. Please do not duplicate the memset() code
but instead move it to a common exit location.
Thank you
Reinette
Hi Reinette,
Sorry for the delayed reply; I was tied up with other things.
On 10/12/2022 7:48 AM, Reinette Chatre wrote:
> ...
>
> I am not sure if adding this full quote is necessary to explain the issue.
> Maybe you could just summarize it and move it to before the "Based on the
> above analysis ..."
>
Thank you very much for your suggestion, I will revise it in the new
version.
> ...
>
>> diff --git a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
>> index 1dafbdc5ac31..2c719da5544f 100644
>> --- a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
>> +++ b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
>> @@ -338,6 +338,8 @@ int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid)
>> msr_param.high = max(msr_param.high, idx + 1);
>> }
>> }
>> + /* Clear the stale staged config */
>> + memset(d->staged_config, 0, sizeof(d->staged_config));
>> }
>>
>> if (cpumask_empty(cpu_mask))
>
> Please also ensure that the temporary storage is cleared if there is an
> early exit because of failure. Please do not duplicate the memset() code
> but instead move it to a common exit location.
>
There are two different resctrl_arch_update_domains() function call paths:
1.rdtgroup_mkdir()->rdtgroup_mkdir_ctrl_mon()->rdtgroup_init_alloc()->resctrl_arch_update_domains()
2.rdtgroup_schemata_write()->resctrl_arch_update_domains()
Perhaps there is no common exit location if we want to clear
staged_config[] after every call of resctrl_arch_update_domains().
How about doing the cleanup at the end of rdtgroup_schemata_write() and
rdtgroup_mkdir_ctrl_mon() respectively?
Thank you,
Shawn
Hi Shawn,
On 10/19/2022 10:55 PM, Shawn Wang wrote:
> Hi Reinette,
>
> Sorry for the delayed reply; I was tied up with other things.
No problem.
>
> On 10/12/2022 7:48 AM, Reinette Chatre wrote:
>> Hi Shawn,
>>
>> Thank you very much for working on getting this fixed.
>>
>> On 10/9/2022 1:36 AM, Shawn Wang wrote:
...
>>> diff --git a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
>>> index 1dafbdc5ac31..2c719da5544f 100644
>>> --- a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
>>> +++ b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
>>> @@ -338,6 +338,8 @@ int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid)
>>> msr_param.high = max(msr_param.high, idx + 1);
>>> }
>>> }
>>> + /* Clear the stale staged config */
>>> + memset(d->staged_config, 0, sizeof(d->staged_config));
>>> }
>>> if (cpumask_empty(cpu_mask))
>>
>> Please also ensure that the temporary storage is cleared if there is an
>> early exit because of failure. Please do not duplicate the memset() code
>> but instead move it to a common exit location.
>>
>
> There are two different resctrl_arch_update_domains() function call paths:
>
> 1.rdtgroup_mkdir()->rdtgroup_mkdir_ctrl_mon()->rdtgroup_init_alloc()->resctrl_arch_update_domains()
> 2.rdtgroup_schemata_write()->resctrl_arch_update_domains()
>
> Perhaps there is no common exit location if we want to clear staged_config[] after every call of resctrl_arch_update_domains().
I was referring to a common exit out of resctrl_arch_update_domains().
Look at how resctrl_arch_update_domains() behaves with this change:
resctrl_arch_update_domains()
{
        ...
        if (!zalloc_cpumask_var(&cpu_mask, GFP_KERNEL))
                return -ENOMEM;
        ...
        list_for_each_entry(d, &r->domains, list) {
                ...
                memset(d->staged_config, 0, sizeof(d->staged_config));
        }
        ...
done:
        free_cpumask_var(cpu_mask);
        return 0;
}
The goal of this fix is to ensure that staged_config[] is cleared on
return from resctrl_arch_update_domains() so that there is no stale
data in staged_config[] when resctrl_arch_update_domains() is called
again.
Considering this, I can see two scenarios in the above solution where
staged_config[] is not cleared on exit from resctrl_arch_update_domains():
1) If the zalloc_cpumask_var() fails then it returns -ENOMEM right away
without clearing staged_config[].
2) If there are no domains associated with the resource (although this
seems impossible because that would imply no CPUs are online ...
but let's make this code robust against strange scenarios).
What I referred to with a "common exit location" was something like this:
resctrl_arch_update_domains()
{
        ...
        int ret = -EINVAL;
        ...
        if (!zalloc_cpumask_var(&cpu_mask, GFP_KERNEL)) {
                ret = -ENOMEM;
                goto done;
        }
        ...
        list_for_each_entry(d, &r->domains, list) {
                ...
        }
        ...
        ret = 0;
done:
        free_cpumask_var(cpu_mask);
        /* clear every domain, not just the last loop cursor */
        list_for_each_entry(d, &r->domains, list)
                memset(d->staged_config, 0, sizeof(d->staged_config));

        return ret;
}
Something like the above will ensure that staged_config[] is
always cleared on exit from resctrl_arch_update_domains().
Reinette
Hi Reinette,
On 10/21/2022 12:35 AM, Reinette Chatre wrote:
...
> I was referring to a common exit out of resctrl_arch_update_domains().
>
> ...
>
> The goal of this fix is to ensure that staged_config[] is cleared on
> return from resctrl_arch_update_domains() so that there is no stale
> data in staged_config[] when resctrl_arch_update_domains() is called
> again.
>
> Considering this, I can see two scenarios in the above solution where
> staged_config[] is not cleared on exit from resctrl_arch_update_domains():
It may not be enough to just clear staged_config[] when
resctrl_arch_update_domains() exits. I think the fix needs to make sure
staged_config[] can be cleared where it is set.
The modification of staged_config[] comes from two paths:
Path 1:
rdtgroup_schemata_write() {
...
rdtgroup_parse_resource() // set staged_config[]
...
resctrl_arch_update_domains() // clear staged_config[]
...
}
Path 2:
rdtgroup_init_alloc() {
...
rdtgroup_init_mba()/rdtgroup_init_cat() // set staged_config[]
...
resctrl_arch_update_domains() // clear staged_config[]
...
}
If we clear staged_config[] only in resctrl_arch_update_domains(), any
error handling goto between setting staged_config[] and calling
resctrl_arch_update_domains() skips the clearing entirely. This can
still leave stale data in staged_config[].
I think maybe it is better to put the clearing work where
rdtgroup_schemata_write() and rdtgroup_init_alloc() exit.
(Sorry, I mistakenly wrote rdtgroup_mkdir_ctrl_mon() instead of
rdtgroup_init_alloc() in my last reply.)
Thank you,
Shawn
Hi Shawn,
On 10/21/2022 1:22 AM, Shawn Wang wrote:
> Hi Reinette,
>
> On 10/21/2022 12:35 AM, Reinette Chatre wrote:
>
> ...
>
> It may not be enough to just clear staged_config[] when
> resctrl_arch_update_domains() exits. I think the fix needs to make
> sure staged_config[] can be cleared where it is set.
>
> The modification of staged_config[] comes from two paths:
>
> Path 1:
> rdtgroup_schemata_write() {
> ...
> rdtgroup_parse_resource() // set staged_config[]
> ...
> resctrl_arch_update_domains() // clear staged_config[]
> ...
> }
>
> Path 2:
> rdtgroup_init_alloc() {
> ...
> rdtgroup_init_mba()/rdtgroup_init_cat() // set staged_config[]
> ...
> resctrl_arch_update_domains() // clear staged_config[]
> ...
> }
>
> If we clear staged_config[] only in resctrl_arch_update_domains(), any
> error handling goto between setting staged_config[] and calling
> resctrl_arch_update_domains() skips the clearing entirely. This can
> still leave stale data in staged_config[].
ah - indeed. Thank you for catching that.
>
> I think maybe it is better to put the clearing work where
> rdtgroup_schemata_write() and rdtgroup_init_alloc() exit.
>
It may be more robust to let rdtgroup_init_alloc() follow
how rdtgroup_schemata_write() already ensures that it is
working with a clean state by clearing staged_config[] before
placing its staged config within.
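For reference, rdtgroup_schemata_write() currently starts with this
clearing loop (in essence, from rdtgroup.c):

list_for_each_entry(s, &resctrl_schema_all, list) {
        list_for_each_entry(dom, &s->res->domains, list)
                memset(dom->staged_config, 0, sizeof(dom->staged_config));
}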
Reinette
Hi Reinette,
On 10/22/2022 2:05 AM, Reinette Chatre wrote:
...
>>
>> I think maybe it is better to put the clearing work where
>> rdtgroup_schemata_write() and rdtgroup_init_alloc() exit.
>>
>
> It may be more robust to let rdtgroup_init_alloc() follow
> how rdtgroup_schemata_write() already ensures that it is
> working with a clean state by clearing staged_config[] before
> placing its staged config within.
>
I want to make sure I understand: do you mean to just ignore the stale
value and place the clearing work before staged_config[] is used? If so,
maybe the only thing the fix should do is to add memset() to
rdtgroup_init_alloc().
Thanks,
Shawn
Hi Shawn,
On 10/23/2022 7:31 PM, Shawn Wang wrote:
> On 10/22/2022 2:05 AM, Reinette Chatre wrote:
>
> ...
>>> I think maybe it is better to put the clearing work where
>>> rdtgroup_schemata_write() and rdtgroup_init_alloc() exit.
>>>
>>
>> It may be more robust to let rdtgroup_init_alloc() follow
>> how rdtgroup_schemata_write() already ensures that it is
>> working with a clean state by clearing staged_config[] before
>> placing its staged config within.
>>
>
> I want to make sure I understand: do you mean to just ignore the stale
> value and place the clearing work before staged_config[] is used? If so,
> maybe the only thing the fix should do is to add memset() to
> rdtgroup_init_alloc().
No, let us not leave stale data lying around.
The idea is that the function calling resctrl_arch_update_domains() is
responsible for initializing staged_config[] correctly and completely.
To confirm, yes, the idea is to clear the staged_config[] in
rdtgroup_init_alloc() before resctrl_arch_update_domains() is called
to follow how it is currently done in rdtgroup_schemata_write().
But, as you indicate, by itself this would leave stale data lying around.
The solution that you suggested earlier, to put the clearing work where
rdtgroup_schemata_write() and rdtgroup_init_alloc() exit, is most logical.
That makes the code symmetrical in that staged_config[] is cleared
where it is initialized and no stale data is left lying around. What was
not clear to me is how this would look in the end. Were you planning to
keep the staged_config[] clearing within rdtgroup_schemata_write() but
not do so in rdtgroup_init_alloc()? rdtgroup_schemata_write() and
rdtgroup_init_alloc() have to follow the same pattern to reduce confusion.
So, to be more robust, how about:
/* Clear staged_config[] to make sure working from a clean slate */
resctrl_arch_update_domains()
/* Clear staged_config[] to not leave stale data lying around */
Reinette
Hi Reinette,
On 10/25/2022 12:45 AM, Reinette Chatre wrote:
> ...
>
> So, to be more robust, how about:
>
> /* Clear staged_config[] to make sure working from a clean slate */
> resctrl_arch_update_domains()
> /* Clear staged_config[] to not leave stale data lying around */
>
Thank you for your explanation; it makes sense to me. But this would
require four memset() loops, so how about putting the clearing work in
a separate function in rdtgroup.c, like rdt_last_cmd_clear():
void staged_configs_clear(void)
{
        struct resctrl_schema *s;
        struct rdt_domain *dom;

        lockdep_assert_held(&rdtgroup_mutex);

        list_for_each_entry(s, &resctrl_schema_all, list) {
                list_for_each_entry(dom, &s->res->domains, list)
                        memset(dom->staged_config, 0, sizeof(dom->staged_config));
        }
}
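With such a helper, rdtgroup_init_alloc() could then follow the same
pattern as rdtgroup_schemata_write(); a rough sketch (flow abbreviated,
not a final patch):

static int rdtgroup_init_alloc(struct rdtgroup *rdtgrp)
{
        struct resctrl_schema *s;
        int ret = 0;

        staged_configs_clear();         /* work from a clean slate */

        list_for_each_entry(s, &resctrl_schema_all, list) {
                ...
                ret = resctrl_arch_update_domains(s->res, rdtgrp->closid);
                if (ret < 0)
                        goto out;
        }
        ...
out:
        staged_configs_clear();         /* leave no stale data behind */
        return ret;
}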
Thanks,
Shawn
Hi Shawn,
On 10/25/2022 8:30 AM, Shawn Wang wrote:
> ...
>
> Thank you for your explanation; it makes sense to me. But this would
> require four memset() loops, so how about putting the clearing work in
> a separate function in rdtgroup.c, like rdt_last_cmd_clear():
Yes, thanks for avoiding duplicating code.
>
> void staged_configs_clear(void)
> {
>         struct resctrl_schema *s;
>         struct rdt_domain *dom;
>
>         lockdep_assert_held(&rdtgroup_mutex);
>
>         list_for_each_entry(s, &resctrl_schema_all, list) {
>                 list_for_each_entry(dom, &s->res->domains, list)
>                         memset(dom->staged_config, 0, sizeof(dom->staged_config));
>         }
> }
>
I understand that you are just copying what is currently done in
rdtgroup_schemata_write() but for a separate function I think something
like below would be more efficient:
for_each_alloc_capable_rdt_resource(r) {
        list_for_each_entry(dom, &r->domains, list)
                memset(dom->staged_config, 0, sizeof(dom->staged_config));
}
This would be more efficient since it would not clear the same memory area
twice when CDP is enabled (with CDP on, the code and data schemas in
resctrl_schema_all share the same underlying resource and its domain list).
Reinette
Hi Reinette,
On 10/26/2022 3:34 AM, Reinette Chatre wrote:
> ...
>
> I understand that you are just copying what is currently done in
> rdtgroup_schemata_write() but for a separate function I think something
> like below would be more efficient:
>
>
> for_each_alloc_capable_rdt_resource(r) {
>         list_for_each_entry(dom, &r->domains, list)
>                 memset(dom->staged_config, 0, sizeof(dom->staged_config));
> }
>
> This would be more efficient since it would not clear the same memory area
> twice when CDP is enabled (with CDP on, the code and data schemas in
> resctrl_schema_all share the same underlying resource and its domain list).
>
Thank you, I didn't notice this macro before. I will use it in the new version.
Shawn