2022-09-29 17:06:38

by Kristen Carlson Accardi

Subject: [PATCH] x86/sgx: Replace kmap/kunmap_atomic calls

It is not necessary to disable page faults or preemption when
using kmap calls, so replace kmap_atomic() and kunmap_atomic()
calls with the more appropriate kmap_local_page() and
kunmap_local() calls.

Signed-off-by: Kristen Carlson Accardi <[email protected]>
---
arch/x86/kernel/cpu/sgx/encl.c | 12 ++++++------
arch/x86/kernel/cpu/sgx/ioctl.c | 4 ++--
arch/x86/kernel/cpu/sgx/main.c | 8 ++++----
3 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index f40d64206ded..63dd92bd3288 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -160,8 +160,8 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
return ret;

pginfo.addr = encl_page->desc & PAGE_MASK;
- pginfo.contents = (unsigned long)kmap_atomic(b.contents);
- pcmd_page = kmap_atomic(b.pcmd);
+ pginfo.contents = (unsigned long)kmap_local_page(b.contents);
+ pcmd_page = kmap_local_page(b.pcmd);
pginfo.metadata = (unsigned long)pcmd_page + b.pcmd_offset;

if (secs_page)
@@ -187,8 +187,8 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
*/
pcmd_page_empty = !memchr_inv(pcmd_page, 0, PAGE_SIZE);

- kunmap_atomic(pcmd_page);
- kunmap_atomic((void *)(unsigned long)pginfo.contents);
+ kunmap_local(pcmd_page);
+ kunmap_local((void *)(unsigned long)pginfo.contents);

get_page(b.pcmd);
sgx_encl_put_backing(&b);
@@ -197,10 +197,10 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,

if (pcmd_page_empty && !reclaimer_writing_to_pcmd(encl, pcmd_first_page)) {
sgx_encl_truncate_backing_page(encl, PFN_DOWN(page_pcmd_off));
- pcmd_page = kmap_atomic(b.pcmd);
+ pcmd_page = kmap_local_page(b.pcmd);
if (memchr_inv(pcmd_page, 0, PAGE_SIZE))
pr_warn("PCMD page not empty after truncate.\n");
- kunmap_atomic(pcmd_page);
+ kunmap_local(pcmd_page);
}

put_page(b.pcmd);
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index ebe79d60619f..f2f918b8b9b1 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -221,11 +221,11 @@ static int __sgx_encl_add_page(struct sgx_encl *encl,
pginfo.secs = (unsigned long)sgx_get_epc_virt_addr(encl->secs.epc_page);
pginfo.addr = encl_page->desc & PAGE_MASK;
pginfo.metadata = (unsigned long)secinfo;
- pginfo.contents = (unsigned long)kmap_atomic(src_page);
+ pginfo.contents = (unsigned long)kmap_local_page(src_page);

ret = __eadd(&pginfo, sgx_get_epc_virt_addr(epc_page));

- kunmap_atomic((void *)pginfo.contents);
+ kunmap_local((void *)pginfo.contents);
put_page(src_page);

return ret ? -EIO : 0;
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 515e2a5f25bb..4efda5e8cadf 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -159,17 +159,17 @@ static int __sgx_encl_ewb(struct sgx_epc_page *epc_page, void *va_slot,
pginfo.addr = 0;
pginfo.secs = 0;

- pginfo.contents = (unsigned long)kmap_atomic(backing->contents);
- pginfo.metadata = (unsigned long)kmap_atomic(backing->pcmd) +
+ pginfo.contents = (unsigned long)kmap_local_page(backing->contents);
+ pginfo.metadata = (unsigned long)kmap_local_page(backing->pcmd) +
backing->pcmd_offset;

ret = __ewb(&pginfo, sgx_get_epc_virt_addr(epc_page), va_slot);
set_page_dirty(backing->pcmd);
set_page_dirty(backing->contents);

- kunmap_atomic((void *)(unsigned long)(pginfo.metadata -
+ kunmap_local((void *)(unsigned long)(pginfo.metadata -
backing->pcmd_offset));
- kunmap_atomic((void *)(unsigned long)pginfo.contents);
+ kunmap_local((void *)(unsigned long)pginfo.contents);

return ret;
}
--
2.37.3


2022-09-30 22:07:36

by Jarkko Sakkinen

Subject: Re: [PATCH] x86/sgx: Replace kmap/kunmap_atomic calls

On Thu, Sep 29, 2022 at 09:06:46AM -0700, Kristen Carlson Accardi wrote:
> It is not necessary to disable page faults or preemption when
> using kmap calls, so replace kmap_atomic() and kunmap_atomic()
> calls with the more appropriate kmap_local_page() and
> kunmap_local() calls.
>
> Signed-off-by: Kristen Carlson Accardi <[email protected]>

Missing "why". I believe you're right but since the existing code
is working, you should extend the reasoning just a bit.

BR, Jarkko

2022-10-06 21:43:08

by Fabio M. De Francesco

Subject: Re: [PATCH] x86/sgx: Replace kmap/kunmap_atomic calls

On Thursday, September 29, 2022 6:06:46 PM CEST Kristen Carlson Accardi wrote:
> It is not necessary to disable page faults or preemption when
> using kmap calls

Are you referring to the page fault disabling that kmap_atomic() provides as a
side effect? Can you please clarify a little more? kmap_atomic() always
disables page faults, but it does not necessarily disable preemption; that
depends on CONFIG_PREEMPT_RT. So why do you also mention preemption?

Are you converting code which runs in atomic context regardless of
kmap_atomic()? If so, between kmap() and kmap_atomic() the author(s) had only
one choice, and it was correctly kmap_atomic(); otherwise we would have a
perfect recipe for sleeping-in-atomic-context (SAC) bugs.

kmap() was not suited to those cases because it might sleep. If the author's
intent was simply to map a page while in atomic context, so as to avoid
sleeping-in-atomic bugs, your conversions look good.

For the reasons above, can you please say something more about why this code
needed a kmap_atomic() instead of calling kmap()?

A different case is when kmap_atomic() was chosen because of its side effects:
the code runs in non-atomic context up to the mapping, but needs page faults
disabled somewhere between the mapping and the unmapping. This is a trickier
case than the one above, because along with the conversion developers should
at least disable page faults and probably, although not necessarily, also
disable preemption.

> , so replace kmap_atomic() and kunmap_atomic()
> calls with the more appropriate kmap_local_page() and
> kunmap_local() calls.

Why is kmap_local_page() better suited in general, and why is it safe in
this specific case?

I think that we should provide the maintainer with well-supported reasons why
they should change any piece of code which is still doing what it was designed
for. A mere deprecation in favour of a newer API may not be enough to change
code that is still working properly (as in the old "if it's not broken,
don't fix it!" :)).

Thanks,

Fabio


>
> Signed-off-by: Kristen Carlson Accardi <[email protected]>
> ---
> arch/x86/kernel/cpu/sgx/encl.c | 12 ++++++------
> arch/x86/kernel/cpu/sgx/ioctl.c | 4 ++--
> arch/x86/kernel/cpu/sgx/main.c | 8 ++++----
> 3 files changed, 12 insertions(+), 12 deletions(-)
>

[snip]


2022-10-06 22:03:24

by Dave Hansen

Subject: Re: [PATCH] x86/sgx: Replace kmap/kunmap_atomic calls

On 10/6/22 13:37, Fabio M. De Francesco wrote:
> kmap() was not suited to those cases because it might sleep. If the author's
> intent was simply to map a page while in atomic context, so as to avoid
> sleeping-in-atomic bugs, your conversions look good.
>
> For the reasons above, can you please say something more about why this code
> needed a kmap_atomic() instead of calling kmap()?

This question is backwards. kmap_atomic() is the default that folks
use. You use kmap_atomic() *always* unless you _need_ to sleep or one
of the other kmap()-only things.

Folks don't and shouldn't have to explain why this was using kmap_atomic().

2022-10-06 22:06:09

by Fabio M. De Francesco

Subject: Re: [PATCH] x86/sgx: Replace kmap/kunmap_atomic calls

On Thursday, October 6, 2022 10:45:56 PM CEST Dave Hansen wrote:
> On 10/6/22 13:37, Fabio M. De Francesco wrote:
> > kmap() was not suited to those cases because it might sleep. If the
> > author's intent was simply to map a page while in atomic context, so as to
> > avoid sleeping-in-atomic bugs, your conversions look good.
> >
> > For the reasons above, can you please say something more about why this
> > code needed a kmap_atomic() instead of calling kmap()?
>
> This question is backwards. kmap_atomic() is the default that folks
> use.

Sorry, I can't understand why kmap_atomic() is the default. What advantage do
we get from disabling page faults, and probably also preemption (it depends on
!PREEMPT_RT), when we don't need the kernel to run in atomic context?

Do you mean that the more code runs in atomic context, the fewer page faults
and the less preemption we allow, and so on, the better Linux performs?

Note that what I say above applies to 64-bit systems as well as to 32-bit
ones.

I'm a newcomer so I may be wrong, but I think that on 64-bit systems we gain
from simply returning page_address() and from finer granularity (fewer atomic
sections, fewer critical sections, and instead more preemption and/or
migration).

Why shouldn't kmap() be the default choice in a preemptible kernel, if the
sections don't need page faults disabled along with preemption (which gets
disabled far more often than just migration)?

Am I still missing anything fundamental?

> You use kmap_atomic() *always* unless you _need_ to sleep or one
> of the other kmap()-only things.

What would happen if code relied on entering atomic context as a side effect
of kmap_atomic(), and you then converted it to kmap_local_page() without
explicitly disabling, for example, preemption? Whoever does the conversion may
not check whether the code is in atomic context around the call; as I said,
there may be cases where otherwise non-atomic execution must disable
preemption, for some reason, only between the mapping and the unmapping.

If I were a maintainer, I wouldn't trust changes that suggest the developer
can't tell whether something needs to be disabled while converting to
kmap_local_page().

I hope this time I have managed to convey my thoughts more clearly. I'm sorry
for my limited English.

Thanks,

Fabio

>
> Folks don't and shouldn't have to explain why this was using kmap_atomic().
>

2022-10-06 22:25:31

by Ira Weiny

Subject: Re: [PATCH] x86/sgx: Replace kmap/kunmap_atomic calls

On Thu, Oct 06, 2022 at 01:45:56PM -0700, Hansen, Dave wrote:
> On 10/6/22 13:37, Fabio M. De Francesco wrote:
> > kmap() was not suited to those cases because it might sleep. If the
> > author's intent was simply to map a page while in atomic context, so as to
> > avoid sleeping-in-atomic bugs, your conversions look good.
> >
> > For the reasons above, can you please say something more about why this
> > code needed a kmap_atomic() instead of calling kmap()?
>
> This question is backwards. kmap_atomic() is the default that folks
> use. You use kmap_atomic() *always* unless you _need_ to sleep or one
> of the other kmap()-only things.
>
> Folks don't and shouldn't have to explain why this was using kmap_atomic().

I've not looked at the code closely enough but there was some concern that
kmap_atomic() callers are depending on preemption being disabled vs the more
common case of them being used in an atomic context where kmap() can't work.

Ira

2022-10-06 22:57:09

by Dave Hansen

Subject: Re: [PATCH] x86/sgx: Replace kmap/kunmap_atomic calls

On 10/6/22 15:02, Fabio M. De Francesco wrote:
> On Thursday, October 6, 2022 10:45:56 PM CEST Dave Hansen wrote:
> Am I still missing anything fundamental?

Yes. :)

kmap() users can sleep. That means the number of them that you need to
keep around is unbounded. kmap_atomic() users fundamentally can't sleep,
so you need fewer of them. That means that when you kunmap_atomic() you
can use a simple, fast, CPU-local TLB flushing operation. kunmap()
eventually requires a big fat global TLB flush.

So, you're right. On lowmem-only systems, kmap() *can* be cheaper than
kmap_atomic(). But, on highmem systems there's no contest:
kmap_atomic() is king.

That's why kmap_atomic() is and should be the default.

>> You use kmap_atomic() *always* unless you _need_ to sleep or one
>> of the other kmap()-only things.
>
> What would happen if code relied on entering atomic context as a side effect
> of kmap_atomic(), and you then converted it to kmap_local_page() without
> explicitly disabling, for example, preemption? Whoever does the conversion
> may not check whether the code is in atomic context around the call; as I
> said, there may be cases where otherwise non-atomic execution must disable
> preemption, for some reason, only between the mapping and the unmapping.
>
> If I were a maintainer, I wouldn't trust changes that suggest the developer
> can't tell whether something needs to be disabled while converting to
> kmap_local_page().

In this case, it's just not that complicated. The SGX code isn't
relying on anything subtle that kmap_local_page() does not provide.

2022-10-06 23:25:56

by Ira Weiny

Subject: Re: [PATCH] x86/sgx: Replace kmap/kunmap_atomic calls

On Thu, Oct 06, 2022 at 03:29:35PM -0700, Hansen, Dave wrote:
> On 10/6/22 15:02, Fabio M. De Francesco wrote:
> > On Thursday, October 6, 2022 10:45:56 PM CEST Dave Hansen wrote:
> > Am I still missing anything fundamental?
>
> Yes. :)
>
> kmap() users can sleep. That means the number of them that you need to
> keep around is unbounded. kmap_atomic() users fundamentally can't sleep, so
> you need fewer of them. That means that when you kunmap_atomic() you
> can use a simple, fast, CPU-local TLB flushing operation. kunmap()
> eventually requires a big fat global TLB flush.
>
> So, you're right. On lowmem-only systems, kmap() *can* be cheaper than
> kmap_atomic(). But, on highmem systems there's no contest:
> kmap_atomic() is king.
>
> That's why kmap_atomic() is and should be the default.

But in the last few years optimizing lowmem systems has been the default. Few
care about the performance of highmem systems.

Given that we don't want to ever use kmap() or kmap_atomic() going forward,
I think this discussion is purely academic.

Ira

2022-10-07 15:59:49

by Ira Weiny

Subject: Re: [PATCH] x86/sgx: Replace kmap/kunmap_atomic calls

On Thu, Sep 29, 2022 at 09:06:46AM -0700, Kristen Carlson Accardi wrote:
> It is not necessary to disable page faults or preemption when
> using kmap calls, so replace kmap_atomic() and kunmap_atomic()
> calls with the more appropriate kmap_local_page() and
> kunmap_local() calls.
>
> Signed-off-by: Kristen Carlson Accardi <[email protected]>

Minor comment below.

> ---
> arch/x86/kernel/cpu/sgx/encl.c | 12 ++++++------
> arch/x86/kernel/cpu/sgx/ioctl.c | 4 ++--
> arch/x86/kernel/cpu/sgx/main.c | 8 ++++----
> 3 files changed, 12 insertions(+), 12 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
> index f40d64206ded..63dd92bd3288 100644
> --- a/arch/x86/kernel/cpu/sgx/encl.c
> +++ b/arch/x86/kernel/cpu/sgx/encl.c
> @@ -160,8 +160,8 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
> return ret;
>
> pginfo.addr = encl_page->desc & PAGE_MASK;
> - pginfo.contents = (unsigned long)kmap_atomic(b.contents);
> - pcmd_page = kmap_atomic(b.pcmd);
> + pginfo.contents = (unsigned long)kmap_local_page(b.contents);
> + pcmd_page = kmap_local_page(b.pcmd);
> pginfo.metadata = (unsigned long)pcmd_page + b.pcmd_offset;
>
> if (secs_page)
> @@ -187,8 +187,8 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
> */
> pcmd_page_empty = !memchr_inv(pcmd_page, 0, PAGE_SIZE);
>
> - kunmap_atomic(pcmd_page);
> - kunmap_atomic((void *)(unsigned long)pginfo.contents);
> + kunmap_local(pcmd_page);
> + kunmap_local((void *)(unsigned long)pginfo.contents);
>
> get_page(b.pcmd);
> sgx_encl_put_backing(&b);
> @@ -197,10 +197,10 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
>
> if (pcmd_page_empty && !reclaimer_writing_to_pcmd(encl, pcmd_first_page)) {
> sgx_encl_truncate_backing_page(encl, PFN_DOWN(page_pcmd_off));
> - pcmd_page = kmap_atomic(b.pcmd);
> + pcmd_page = kmap_local_page(b.pcmd);
> if (memchr_inv(pcmd_page, 0, PAGE_SIZE))
> pr_warn("PCMD page not empty after truncate.\n");
> - kunmap_atomic(pcmd_page);
> + kunmap_local(pcmd_page);
> }
>
> put_page(b.pcmd);
> diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
> index ebe79d60619f..f2f918b8b9b1 100644
> --- a/arch/x86/kernel/cpu/sgx/ioctl.c
> +++ b/arch/x86/kernel/cpu/sgx/ioctl.c
> @@ -221,11 +221,11 @@ static int __sgx_encl_add_page(struct sgx_encl *encl,
> pginfo.secs = (unsigned long)sgx_get_epc_virt_addr(encl->secs.epc_page);
> pginfo.addr = encl_page->desc & PAGE_MASK;
> pginfo.metadata = (unsigned long)secinfo;
> - pginfo.contents = (unsigned long)kmap_atomic(src_page);
> + pginfo.contents = (unsigned long)kmap_local_page(src_page);
>
> ret = __eadd(&pginfo, sgx_get_epc_virt_addr(epc_page));
>
> - kunmap_atomic((void *)pginfo.contents);
> + kunmap_local((void *)pginfo.contents);
> put_page(src_page);
>
> return ret ? -EIO : 0;
> diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
> index 515e2a5f25bb..4efda5e8cadf 100644
> --- a/arch/x86/kernel/cpu/sgx/main.c
> +++ b/arch/x86/kernel/cpu/sgx/main.c
> @@ -159,17 +159,17 @@ static int __sgx_encl_ewb(struct sgx_epc_page *epc_page, void *va_slot,
> pginfo.addr = 0;
> pginfo.secs = 0;
>
> - pginfo.contents = (unsigned long)kmap_atomic(backing->contents);
> - pginfo.metadata = (unsigned long)kmap_atomic(backing->pcmd) +
> + pginfo.contents = (unsigned long)kmap_local_page(backing->contents);
> + pginfo.metadata = (unsigned long)kmap_local_page(backing->pcmd) +
> backing->pcmd_offset;
>
> ret = __ewb(&pginfo, sgx_get_epc_virt_addr(epc_page), va_slot);
> set_page_dirty(backing->pcmd);
> set_page_dirty(backing->contents);
>
> - kunmap_atomic((void *)(unsigned long)(pginfo.metadata -
> + kunmap_local((void *)(unsigned long)(pginfo.metadata -
> backing->pcmd_offset));

For kunmap_local() one can use any address in the page. So this can be:

kunmap_local((void *)(unsigned long)(pginfo.metadata));


Regardless:

Reviewed-by: Ira Weiny <[email protected]>

> - kunmap_atomic((void *)(unsigned long)pginfo.contents);
> + kunmap_local((void *)(unsigned long)pginfo.contents);
>
> return ret;
> }
> --
> 2.37.3
>

2022-10-12 07:43:50

by Jarkko Sakkinen

Subject: Re: [PATCH] x86/sgx: Replace kmap/kunmap_atomic calls

On Wed, Oct 12, 2022 at 10:15:22AM +0300, Jarkko Sakkinen wrote:
> On Fri, Oct 07, 2022 at 08:23:03AM -0700, Ira Weiny wrote:
> > On Thu, Sep 29, 2022 at 09:06:46AM -0700, Kristen Carlson Accardi wrote:
> > > It is not necessary to disable page faults or preemption when
> > > using kmap calls, so replace kmap_atomic() and kunmap_atomic()
> > > calls with the more appropriate kmap_local_page() and
> > > kunmap_local() calls.
> > >
> > > Signed-off-by: Kristen Carlson Accardi <[email protected]>
> >
> > Minor comment below.
> >
> > > ---
> > > arch/x86/kernel/cpu/sgx/encl.c | 12 ++++++------
> > > arch/x86/kernel/cpu/sgx/ioctl.c | 4 ++--
> > > arch/x86/kernel/cpu/sgx/main.c | 8 ++++----
> > > 3 files changed, 12 insertions(+), 12 deletions(-)
> > >
> > > diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
> > > index f40d64206ded..63dd92bd3288 100644
> > > --- a/arch/x86/kernel/cpu/sgx/encl.c
> > > +++ b/arch/x86/kernel/cpu/sgx/encl.c
> > > @@ -160,8 +160,8 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
> > > return ret;
> > >
> > > pginfo.addr = encl_page->desc & PAGE_MASK;
> > > - pginfo.contents = (unsigned long)kmap_atomic(b.contents);
> > > - pcmd_page = kmap_atomic(b.pcmd);
> > > + pginfo.contents = (unsigned long)kmap_local_page(b.contents);
> > > + pcmd_page = kmap_local_page(b.pcmd);
> > > pginfo.metadata = (unsigned long)pcmd_page + b.pcmd_offset;
> > >
> > > if (secs_page)
> > > @@ -187,8 +187,8 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
> > > */
> > > pcmd_page_empty = !memchr_inv(pcmd_page, 0, PAGE_SIZE);
> > >
> > > - kunmap_atomic(pcmd_page);
> > > - kunmap_atomic((void *)(unsigned long)pginfo.contents);
> > > + kunmap_local(pcmd_page);
> > > + kunmap_local((void *)(unsigned long)pginfo.contents);
> > >
> > > get_page(b.pcmd);
> > > sgx_encl_put_backing(&b);
> > > @@ -197,10 +197,10 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
> > >
> > > if (pcmd_page_empty && !reclaimer_writing_to_pcmd(encl, pcmd_first_page)) {
> > > sgx_encl_truncate_backing_page(encl, PFN_DOWN(page_pcmd_off));
> > > - pcmd_page = kmap_atomic(b.pcmd);
> > > + pcmd_page = kmap_local_page(b.pcmd);
> > > if (memchr_inv(pcmd_page, 0, PAGE_SIZE))
> > > pr_warn("PCMD page not empty after truncate.\n");
> > > - kunmap_atomic(pcmd_page);
> > > + kunmap_local(pcmd_page);
> > > }
> > >
> > > put_page(b.pcmd);
> > > diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
> > > index ebe79d60619f..f2f918b8b9b1 100644
> > > --- a/arch/x86/kernel/cpu/sgx/ioctl.c
> > > +++ b/arch/x86/kernel/cpu/sgx/ioctl.c
> > > @@ -221,11 +221,11 @@ static int __sgx_encl_add_page(struct sgx_encl *encl,
> > > pginfo.secs = (unsigned long)sgx_get_epc_virt_addr(encl->secs.epc_page);
> > > pginfo.addr = encl_page->desc & PAGE_MASK;
> > > pginfo.metadata = (unsigned long)secinfo;
> > > - pginfo.contents = (unsigned long)kmap_atomic(src_page);
> > > + pginfo.contents = (unsigned long)kmap_local_page(src_page);
> > >
> > > ret = __eadd(&pginfo, sgx_get_epc_virt_addr(epc_page));
> > >
> > > - kunmap_atomic((void *)pginfo.contents);
> > > + kunmap_local((void *)pginfo.contents);
> > > put_page(src_page);
> > >
> > > return ret ? -EIO : 0;
> > > diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
> > > index 515e2a5f25bb..4efda5e8cadf 100644
> > > --- a/arch/x86/kernel/cpu/sgx/main.c
> > > +++ b/arch/x86/kernel/cpu/sgx/main.c
> > > @@ -159,17 +159,17 @@ static int __sgx_encl_ewb(struct sgx_epc_page *epc_page, void *va_slot,
> > > pginfo.addr = 0;
> > > pginfo.secs = 0;
> > >
> > > - pginfo.contents = (unsigned long)kmap_atomic(backing->contents);
> > > - pginfo.metadata = (unsigned long)kmap_atomic(backing->pcmd) +
> > > + pginfo.contents = (unsigned long)kmap_local_page(backing->contents);
> > > + pginfo.metadata = (unsigned long)kmap_local_page(backing->pcmd) +
> > > backing->pcmd_offset;
> > >
> > > ret = __ewb(&pginfo, sgx_get_epc_virt_addr(epc_page), va_slot);
> > > set_page_dirty(backing->pcmd);
> > > set_page_dirty(backing->contents);
> > >
> > > - kunmap_atomic((void *)(unsigned long)(pginfo.metadata -
> > > + kunmap_local((void *)(unsigned long)(pginfo.metadata -
> > > backing->pcmd_offset));
> >
> > For kunmap_local() one can use any address in the page. So this can be:
> >
> > kunmap_local((void *)(unsigned long)(pginfo.metadata));
> >
> >
> > Regardless:
> >
> > Reviewed-by: Ira Weiny <[email protected]>
>
> There's no data to show that this change would be useful to do.
>
> Thus, as said earlier, the commit message is missing "why".
>
> Definitive NAK with the current offering.

Concurrency is a complex enough topic that it is pretty hard to predict
the difference without any data. Any sort of benchmark, workload or
whatever to support the change would be essential here.

If we change primitives with weak arguments, it might backfire, with
someone using SGX complaining about degraded performance in some use
case.

*Conceptually* I do not have anything against this change, but that is
not a good enough argument here.

BR, Jarkko

2022-10-12 08:04:21

by Jarkko Sakkinen

Subject: Re: [PATCH] x86/sgx: Replace kmap/kunmap_atomic calls

On Fri, Oct 07, 2022 at 08:23:03AM -0700, Ira Weiny wrote:
> On Thu, Sep 29, 2022 at 09:06:46AM -0700, Kristen Carlson Accardi wrote:
> > It is not necessary to disable page faults or preemption when
> > using kmap calls, so replace kmap_atomic() and kunmap_atomic()
> > calls with the more appropriate kmap_local_page() and
> > kunmap_local() calls.
> >
> > Signed-off-by: Kristen Carlson Accardi <[email protected]>
>
> Minor comment below.
>
> > ---
> > arch/x86/kernel/cpu/sgx/encl.c | 12 ++++++------
> > arch/x86/kernel/cpu/sgx/ioctl.c | 4 ++--
> > arch/x86/kernel/cpu/sgx/main.c | 8 ++++----
> > 3 files changed, 12 insertions(+), 12 deletions(-)
> >
> > diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
> > index f40d64206ded..63dd92bd3288 100644
> > --- a/arch/x86/kernel/cpu/sgx/encl.c
> > +++ b/arch/x86/kernel/cpu/sgx/encl.c
> > @@ -160,8 +160,8 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
> > return ret;
> >
> > pginfo.addr = encl_page->desc & PAGE_MASK;
> > - pginfo.contents = (unsigned long)kmap_atomic(b.contents);
> > - pcmd_page = kmap_atomic(b.pcmd);
> > + pginfo.contents = (unsigned long)kmap_local_page(b.contents);
> > + pcmd_page = kmap_local_page(b.pcmd);
> > pginfo.metadata = (unsigned long)pcmd_page + b.pcmd_offset;
> >
> > if (secs_page)
> > @@ -187,8 +187,8 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
> > */
> > pcmd_page_empty = !memchr_inv(pcmd_page, 0, PAGE_SIZE);
> >
> > - kunmap_atomic(pcmd_page);
> > - kunmap_atomic((void *)(unsigned long)pginfo.contents);
> > + kunmap_local(pcmd_page);
> > + kunmap_local((void *)(unsigned long)pginfo.contents);
> >
> > get_page(b.pcmd);
> > sgx_encl_put_backing(&b);
> > @@ -197,10 +197,10 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
> >
> > if (pcmd_page_empty && !reclaimer_writing_to_pcmd(encl, pcmd_first_page)) {
> > sgx_encl_truncate_backing_page(encl, PFN_DOWN(page_pcmd_off));
> > - pcmd_page = kmap_atomic(b.pcmd);
> > + pcmd_page = kmap_local_page(b.pcmd);
> > if (memchr_inv(pcmd_page, 0, PAGE_SIZE))
> > pr_warn("PCMD page not empty after truncate.\n");
> > - kunmap_atomic(pcmd_page);
> > + kunmap_local(pcmd_page);
> > }
> >
> > put_page(b.pcmd);
> > diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
> > index ebe79d60619f..f2f918b8b9b1 100644
> > --- a/arch/x86/kernel/cpu/sgx/ioctl.c
> > +++ b/arch/x86/kernel/cpu/sgx/ioctl.c
> > @@ -221,11 +221,11 @@ static int __sgx_encl_add_page(struct sgx_encl *encl,
> > pginfo.secs = (unsigned long)sgx_get_epc_virt_addr(encl->secs.epc_page);
> > pginfo.addr = encl_page->desc & PAGE_MASK;
> > pginfo.metadata = (unsigned long)secinfo;
> > - pginfo.contents = (unsigned long)kmap_atomic(src_page);
> > + pginfo.contents = (unsigned long)kmap_local_page(src_page);
> >
> > ret = __eadd(&pginfo, sgx_get_epc_virt_addr(epc_page));
> >
> > - kunmap_atomic((void *)pginfo.contents);
> > + kunmap_local((void *)pginfo.contents);
> > put_page(src_page);
> >
> > return ret ? -EIO : 0;
> > diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
> > index 515e2a5f25bb..4efda5e8cadf 100644
> > --- a/arch/x86/kernel/cpu/sgx/main.c
> > +++ b/arch/x86/kernel/cpu/sgx/main.c
> > @@ -159,17 +159,17 @@ static int __sgx_encl_ewb(struct sgx_epc_page *epc_page, void *va_slot,
> > pginfo.addr = 0;
> > pginfo.secs = 0;
> >
> > - pginfo.contents = (unsigned long)kmap_atomic(backing->contents);
> > - pginfo.metadata = (unsigned long)kmap_atomic(backing->pcmd) +
> > + pginfo.contents = (unsigned long)kmap_local_page(backing->contents);
> > + pginfo.metadata = (unsigned long)kmap_local_page(backing->pcmd) +
> > backing->pcmd_offset;
> >
> > ret = __ewb(&pginfo, sgx_get_epc_virt_addr(epc_page), va_slot);
> > set_page_dirty(backing->pcmd);
> > set_page_dirty(backing->contents);
> >
> > - kunmap_atomic((void *)(unsigned long)(pginfo.metadata -
> > + kunmap_local((void *)(unsigned long)(pginfo.metadata -
> > backing->pcmd_offset));
>
> For kunmap_local() one can use any address in the page. So this can be:
>
> kunmap_local((void *)(unsigned long)(pginfo.metadata));
>
>
> Regardless:
>
> Reviewed-by: Ira Weiny <[email protected]>

There's no data to show that this change would be useful to do.

Thus, as said earlier, the commit message is missing "why".

Definitive NAK with the current offering.

BR, Jarkko

2022-10-12 14:24:32

by Dave Hansen

Subject: Re: [PATCH] x86/sgx: Replace kmap/kunmap_atomic calls

On 10/12/22 00:15, Jarkko Sakkinen wrote:
> There's no data to show that this change would be useful to do.

Jarkko, I think the overall transition to kmap_local_page() is a good
one. It is a superior API and having it around will pave the way for
new features. I don't think we should demand 'data' for each and every
one of these.

Please take a look around the tree and see how other maintainers are
handling these patches. They're not limited to SGX.

2022-10-12 15:09:25

by Jarkko Sakkinen

Subject: Re: [PATCH] x86/sgx: Replace kmap/kunmap_atomic calls

On Wed, Oct 12, 2022 at 07:13:26AM -0700, Dave Hansen wrote:
> On 10/12/22 00:15, Jarkko Sakkinen wrote:
> > There's no data to show that this change would be useful to do.
>
> Jarkko, I think the overall transition to kmap_local_page() is a good
> one. It is a superior API and having it around will pave the way for
> new features. I don't think we should demand 'data' for each and every
> one of these.
>
> Please take a look around the tree and see how other maintainers are
> handling these patches. They're not limited to SGX.

Sure, I'll take a look for comparison.

BR, Jarkko

2022-10-12 16:14:26

by Jarkko Sakkinen

Subject: Re: [PATCH] x86/sgx: Replace kmap/kunmap_atomic calls

On Wed, Oct 12, 2022 at 05:50:19PM +0300, Jarkko Sakkinen wrote:
> On Wed, Oct 12, 2022 at 07:13:26AM -0700, Dave Hansen wrote:
> > On 10/12/22 00:15, Jarkko Sakkinen wrote:
> > > There's no data to show that this change would be useful to do.
> >
> > Jarkko, I think the overall transition to kmap_local_page() is a good
> > one. It is a superior API and having it around will pave the way for
> > new features. I don't think we should demand 'data' for each and every
> > one of these.
> >
> > Please take a look around the tree and see how other maintainers are
> > handling these patches. They're not limited to SGX.
>
> Sure, I'll take a look for comparison.

Yeah, I think it is a pretty solid idea.

Looking at the description:

"It is not necessary to disable page faults or preemption when
using kmap calls, so replace kmap_atomic() and kunmap_atomic()
calls with the more appropriate kmap_local_page() and
kunmap_local() calls."

We did not pick kmap_atomic() because it disables preemption,
i.e. it was not a "design choice". I'd rather phrase this
along the lines of:

"Migrate to the newer kmap_local_page() interface from kmap_atomic()
in order to move away from per-CPU maps to per-task_struct maps.
This in effect removes the need to disable preemption in the
local CPU while kmap is active, and thus vastly reduces overall
system latency."

It can be improved or written completely differently; I just wrote it
the way I understood the whole deal in the first place.

BR, Jarkko

2022-10-13 16:22:19

by Kristen Carlson Accardi

Subject: Re: [PATCH] x86/sgx: Replace kmap/kunmap_atomic calls

On Wed, 2022-10-12 at 18:50 +0300, Jarkko Sakkinen wrote:
> On Wed, Oct 12, 2022 at 05:50:19PM +0300, Jarkko Sakkinen wrote:
> > On Wed, Oct 12, 2022 at 07:13:26AM -0700, Dave Hansen wrote:
> > > On 10/12/22 00:15, Jarkko Sakkinen wrote:
> > > > There's no data to show that this change would be useful to do.
> > >
> > > Jarkko, I think the overall transition to kmap_local_page() is a
> > > good
> > > one.  It is a superior API and having it around will pave the way
> > > for
> > > new features.  I don't think we should demand 'data' for each and
> > > every
> > > one of these.
> > >
> > > Please take a look around the tree and see how other maintainers
> > > are
> > > handling these patches.  They're not limited to SGX.
> >
> > Sure, I'll take a look for comparison.
>
> Yeah, I think it is a pretty solid idea.
>
> Looking at the description:
>
> "It is not necessary to disable page faults or preemption when
> using kmap calls, so replace kmap_atomic() and kunmap_atomic()
> calls with the more appropriate kmap_local_page() and
> kunmap_local() calls."
>
> We did not pick kmap_atomic() because it disables preemption,
> i.e. it was not a "design choice". I'd rather phrase this
> along the lines of:
>
> "Migrate to the newer kmap_local_page() interface from kmap_atomic()
> in order to move away from per-CPU maps to per-task_struct maps.
> This in effect removes the need to disable preemption in the
> local CPU while kmap is active, and thus vastly reduces overall
> system latency."
>
> It can be improved or written completely differently; I just wrote it
> the way I understood the whole deal in the first place.
>
> BR, Jarkko

Thanks for looking into this Jarkko - I will update the commit log for
the next version.

Kristen