2021-02-01 13:31:27

by Tianjia Zhang

Subject: [PATCH v4 2/5] x86/sgx: Reduce the locking range in sgx_sanitize_section()

The spinlock in sgx_epc_section protects only the page_list. Neither
the EREMOVE operation nor the access to init_laundry_list needs to be
inside the critical section. Reduce the locking range in
sgx_sanitize_section() so that the spinlock protects only the
manipulation of page_list.

Suggested-by: Sean Christopherson <[email protected]>
Signed-off-by: Tianjia Zhang <[email protected]>
---
arch/x86/kernel/cpu/sgx/main.c | 11 ++++-------
1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index c519fc5f6948..4465912174fd 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -41,20 +41,17 @@ static void sgx_sanitize_section(struct sgx_epc_section *section)
 		if (kthread_should_stop())
 			return;
 
-		/* needed for access to ->page_list: */
-		spin_lock(&section->lock);
-
 		page = list_first_entry(&section->init_laundry_list,
 					struct sgx_epc_page, list);
 
 		ret = __eremove(sgx_get_epc_virt_addr(page));
-		if (!ret)
+		if (!ret) {
+			spin_lock(&section->lock);
 			list_move(&page->list, &section->page_list);
-		else
+			spin_unlock(&section->lock);
+		} else
 			list_move_tail(&page->list, &dirty);
 
-		spin_unlock(&section->lock);
-
 		cond_resched();
 	}

--
2.19.1.3.ge56e4f7
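
For context, the sanitization loop with this patch applied would look roughly
as follows. This is a sketch reconstructed from the hunk above; the local
declarations and the trailing list_splice() are taken from the surrounding
code in main.c and are assumptions, not shown in this diff:

static void sgx_sanitize_section(struct sgx_epc_section *section)
{
	struct sgx_epc_page *page;
	LIST_HEAD(dirty);
	int ret;

	/* init_laundry_list is only touched by this thread, no lock needed. */
	while (!list_empty(&section->init_laundry_list)) {
		if (kthread_should_stop())
			return;

		page = list_first_entry(&section->init_laundry_list,
					struct sgx_epc_page, list);

		/* EREMOVE does not touch page_list, so it can run unlocked. */
		ret = __eremove(sgx_get_epc_virt_addr(page));
		if (!ret) {
			/* Only the page_list manipulation needs ->lock. */
			spin_lock(&section->lock);
			list_move(&page->list, &section->page_list);
			spin_unlock(&section->lock);
		} else
			list_move_tail(&page->list, &dirty);

		cond_resched();
	}

	/* Pages that failed EREMOVE go back for another pass. */
	list_splice(&dirty, &section->init_laundry_list);
}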


2021-02-03 00:53:55

by Jarkko Sakkinen

Subject: Re: [PATCH v4 2/5] x86/sgx: Reduce the locking range in sgx_sanitize_section()

On Mon, Feb 01, 2021 at 09:26:50PM +0800, Tianjia Zhang wrote:
> The spinlock in sgx_epc_section protects only the page_list. Neither
> the EREMOVE operation nor the access to init_laundry_list needs to be
> inside the critical section. Reduce the locking range in
> sgx_sanitize_section() so that the spinlock protects only the
> manipulation of page_list.
>
> Suggested-by: Sean Christopherson <[email protected]>
> Signed-off-by: Tianjia Zhang <[email protected]>

I'm not confident that this change has any practical value.

/Jarkko

> ---
> arch/x86/kernel/cpu/sgx/main.c | 11 ++++-------
> 1 file changed, 4 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
> index c519fc5f6948..4465912174fd 100644
> --- a/arch/x86/kernel/cpu/sgx/main.c
> +++ b/arch/x86/kernel/cpu/sgx/main.c
> @@ -41,20 +41,17 @@ static void sgx_sanitize_section(struct sgx_epc_section *section)
> if (kthread_should_stop())
> return;
>
> - /* needed for access to ->page_list: */
> - spin_lock(&section->lock);
> -
> page = list_first_entry(&section->init_laundry_list,
> struct sgx_epc_page, list);
>
> ret = __eremove(sgx_get_epc_virt_addr(page));
> - if (!ret)
> + if (!ret) {
> + spin_lock(&section->lock);
> list_move(&page->list, &section->page_list);
> - else
> + spin_unlock(&section->lock);
> + } else
> list_move_tail(&page->list, &dirty);
>
> - spin_unlock(&section->lock);
> -
> cond_resched();
> }
>
> --
> 2.19.1.3.ge56e4f7
>
>

2021-02-11 06:17:25

by Tianjia Zhang

Subject: Re: [PATCH v4 2/5] x86/sgx: Reduce the locking range in sgx_sanitize_section()



On 2/3/21 6:00 AM, Jarkko Sakkinen wrote:
> On Mon, Feb 01, 2021 at 09:26:50PM +0800, Tianjia Zhang wrote:
>> The spinlock in sgx_epc_section protects only the page_list. Neither
>> the EREMOVE operation nor the access to init_laundry_list needs to be
>> inside the critical section. Reduce the locking range in
>> sgx_sanitize_section() so that the spinlock protects only the
>> manipulation of page_list.
>>
>> Suggested-by: Sean Christopherson <[email protected]>
>> Signed-off-by: Tianjia Zhang <[email protected]>
>
> I'm not confident that this change has any practical value.
>
> /Jarkko
>

Since this code only runs during initialization, the effect of this
optimization may not be noticeable. If possible, the critical section
could instead be moved outside the loop body so that it covers the
entire while loop.
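
For illustration only, a rough sketch of that alternative: take ->lock once
around the whole loop. This assumes the cond_resched() call is dropped, since
rescheduling is not allowed while a spinlock is held, and it keeps the lock
held across every EREMOVE:

	spin_lock(&section->lock);

	while (!list_empty(&section->init_laundry_list)) {
		/* break instead of return, so the lock is released below. */
		if (kthread_should_stop())
			break;

		page = list_first_entry(&section->init_laundry_list,
					struct sgx_epc_page, list);

		ret = __eremove(sgx_get_epc_virt_addr(page));
		if (!ret)
			list_move(&page->list, &section->page_list);
		else
			list_move_tail(&page->list, &dirty);

		/* cond_resched() removed: cannot sleep under the spinlock. */
	}

	spin_unlock(&section->lock);

Whether holding the lock for the whole pass is acceptable would depend on
whether anything else can contend for ->lock during initialization.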

Best regards,
Tianjia
>> ---
>> arch/x86/kernel/cpu/sgx/main.c | 11 ++++-------
>> 1 file changed, 4 insertions(+), 7 deletions(-)
>>
>> diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
>> index c519fc5f6948..4465912174fd 100644
>> --- a/arch/x86/kernel/cpu/sgx/main.c
>> +++ b/arch/x86/kernel/cpu/sgx/main.c
>> @@ -41,20 +41,17 @@ static void sgx_sanitize_section(struct sgx_epc_section *section)
>> if (kthread_should_stop())
>> return;
>>
>> - /* needed for access to ->page_list: */
>> - spin_lock(&section->lock);
>> -
>> page = list_first_entry(&section->init_laundry_list,
>> struct sgx_epc_page, list);
>>
>> ret = __eremove(sgx_get_epc_virt_addr(page));
>> - if (!ret)
>> + if (!ret) {
>> + spin_lock(&section->lock);
>> list_move(&page->list, &section->page_list);
>> - else
>> + spin_unlock(&section->lock);
>> + } else
>> list_move_tail(&page->list, &dirty);
>>
>> - spin_unlock(&section->lock);
>> -
>> cond_resched();
>> }
>>
>> --
>> 2.19.1.3.ge56e4f7
>>
>>