2020-09-06 11:45:57

by Mateusz Nosek

Subject: [PATCH] mm/mmu_notifier.c: micro-optimization substitute kzalloc with kmalloc

From: Mateusz Nosek <[email protected]>

Most fields in the struct pointed to by 'subscriptions' are initialized
explicitly after the allocation. By changing kzalloc() to kmalloc() the
implicit memset() is avoided. As the only new code consists of two
simple stores, performance is slightly increased.

Signed-off-by: Mateusz Nosek <[email protected]>
---
mm/mmu_notifier.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 4fc918163dd3..190e198dc5be 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -625,7 +625,7 @@ int __mmu_notifier_register(struct mmu_notifier *subscription,
* know that mm->notifier_subscriptions can't change while we
* hold the write side of the mmap_lock.
*/
- subscriptions = kzalloc(
+ subscriptions = kmalloc(
sizeof(struct mmu_notifier_subscriptions), GFP_KERNEL);
if (!subscriptions)
return -ENOMEM;
@@ -636,6 +636,8 @@ int __mmu_notifier_register(struct mmu_notifier *subscription,
subscriptions->itree = RB_ROOT_CACHED;
init_waitqueue_head(&subscriptions->wq);
INIT_HLIST_HEAD(&subscriptions->deferred_list);
+ subscriptions->active_invalidate_ranges = 0;
+ subscriptions->has_itree = false;
}

ret = mm_take_all_locks(mm);
--
2.20.1


2020-09-06 14:32:33

by Mike Rapoport

Subject: Re: [PATCH] mm/mmu_notifier.c: micro-optimization substitute kzalloc with kmalloc

Hi,

On Sun, Sep 06, 2020 at 01:43:21PM +0200, [email protected] wrote:
> From: Mateusz Nosek <[email protected]>
>
> Most fields in the struct pointed to by 'subscriptions' are initialized
> explicitly after the allocation. By changing kzalloc() to kmalloc() the
> implicit memset() is avoided. As the only new code consists of two
> simple stores, performance is slightly increased.

Is there a measurable performance increase?

__mmu_notifier_register() is not called frequently enough to justify
trading the robustness of kzalloc() for a slight (if visible at all)
performance gain.
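
(For context, kzalloc() is simply kmalloc() with __GFP_ZERO added --
quoting include/linux/slab.h from memory -- so the whole patch trades
one zeroing pass at allocation time for two explicit stores:)

static inline void *kzalloc(size_t size, gfp_t flags)
{
	return kmalloc(size, flags | __GFP_ZERO);
}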

> Signed-off-by: Mateusz Nosek <[email protected]>
> ---
> mm/mmu_notifier.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
> index 4fc918163dd3..190e198dc5be 100644
> --- a/mm/mmu_notifier.c
> +++ b/mm/mmu_notifier.c
> @@ -625,7 +625,7 @@ int __mmu_notifier_register(struct mmu_notifier *subscription,
> * know that mm->notifier_subscriptions can't change while we
> * hold the write side of the mmap_lock.
> */
> - subscriptions = kzalloc(
> + subscriptions = kmalloc(
> sizeof(struct mmu_notifier_subscriptions), GFP_KERNEL);
> if (!subscriptions)
> return -ENOMEM;
> @@ -636,6 +636,8 @@ int __mmu_notifier_register(struct mmu_notifier *subscription,
> subscriptions->itree = RB_ROOT_CACHED;
> init_waitqueue_head(&subscriptions->wq);
> INIT_HLIST_HEAD(&subscriptions->deferred_list);
> + subscriptions->active_invalidate_ranges = 0;
> + subscriptions->has_itree = false;
> }
>
> ret = mm_take_all_locks(mm);
> --
> 2.20.1
>
>

--
Sincerely yours,
Mike.

2020-09-06 16:08:02

by Mateusz Nosek

Subject: Re: [PATCH] mm/mmu_notifier.c: micro-optimization substitute kzalloc with kmalloc

Hi,

I performed simple benchmarks using a custom kernel module with the
code fragment in question copy-pasted into it, in both versions. For
1k, 10k and 100k iterations the average time for the kzalloc version
was 5.1 and for the kmalloc version 3.9, at each iteration count.
The time was measured using the ktime_get() function and the results
given here are in ktime_t units.
The machine I used has a 4-core Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz.
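
For reference, here is a minimal sketch of the kind of module I used
(a reconstruction, not the exact code; the struct is mirrored locally
since 'struct mmu_notifier_subscriptions' is private to
mm/mmu_notifier.c):

#include <linux/module.h>
#include <linux/slab.h>
#include <linux/ktime.h>
#include <linux/math64.h>
#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/wait.h>
#include <linux/rbtree.h>

/* Local mirror of the private struct in mm/mmu_notifier.c */
struct subs_mirror {
	struct hlist_head list;
	bool has_itree;
	spinlock_t lock;
	unsigned long invalidate_seq;
	unsigned long active_invalidate_ranges;
	struct rb_root_cached itree;
	wait_queue_head_t wq;
	struct hlist_head deferred_list;
};

/* Time 'iters' rounds of one allocation variant, return avg ns/iter */
static s64 bench_one(bool zeroed, int iters)
{
	ktime_t start = ktime_get();
	int i;

	for (i = 0; i < iters; i++) {
		struct subs_mirror *s;

		s = zeroed ? kzalloc(sizeof(*s), GFP_KERNEL)
			   : kmalloc(sizeof(*s), GFP_KERNEL);
		if (!s)
			return -1;
		INIT_HLIST_HEAD(&s->list);
		spin_lock_init(&s->lock);
		s->invalidate_seq = 2;
		s->itree = RB_ROOT_CACHED;
		init_waitqueue_head(&s->wq);
		INIT_HLIST_HEAD(&s->deferred_list);
		if (!zeroed) {
			/* the two stores the patch adds */
			s->active_invalidate_ranges = 0;
			s->has_itree = false;
		}
		kfree(s);
	}
	return div_s64(ktime_to_ns(ktime_sub(ktime_get(), start)), iters);
}

static int __init bench_init(void)
{
	pr_info("kzalloc: %lld ns/iter, kmalloc: %lld ns/iter\n",
		bench_one(true, 100000), bench_one(false, 100000));
	return 0;
}

static void __exit bench_exit(void)
{
}

module_init(bench_init);
module_exit(bench_exit);
MODULE_LICENSE("GPL");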

There is a performance increase, but as you wrote it is probably not
really noticeable.
I have found three other places in the kernel with similar
kzalloc-related patterns, none of which seems to be in 'hot' code.
I leave the decision whether this patch, and potential others regarding
this issue, are worth applying to the community and the maintainers.

Best regards,
Mateusz Nosek

On 9/6/2020 4:26 PM, Mike Rapoport wrote:
> Hi,
>
> On Sun, Sep 06, 2020 at 01:43:21PM +0200, [email protected] wrote:
>> From: Mateusz Nosek <[email protected]>
>>
>> Most fields in the struct pointed to by 'subscriptions' are initialized
>> explicitly after the allocation. By changing kzalloc() to kmalloc() the
>> implicit memset() is avoided. As the only new code consists of two
>> simple stores, performance is slightly increased.
>
> Is there a measurable performance increase?
>
> __mmu_notifier_register() is not called frequently enough to justify
> trading the robustness of kzalloc() for a slight (if visible at all)
> performance gain.
>
>> Signed-off-by: Mateusz Nosek <[email protected]>
>> ---
>> mm/mmu_notifier.c | 4 +++-
>> 1 file changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
>> index 4fc918163dd3..190e198dc5be 100644
>> --- a/mm/mmu_notifier.c
>> +++ b/mm/mmu_notifier.c
>> @@ -625,7 +625,7 @@ int __mmu_notifier_register(struct mmu_notifier *subscription,
>> * know that mm->notifier_subscriptions can't change while we
>> * hold the write side of the mmap_lock.
>> */
>> - subscriptions = kzalloc(
>> + subscriptions = kmalloc(
>> sizeof(struct mmu_notifier_subscriptions), GFP_KERNEL);
>> if (!subscriptions)
>> return -ENOMEM;
>> @@ -636,6 +636,8 @@ int __mmu_notifier_register(struct mmu_notifier *subscription,
>> subscriptions->itree = RB_ROOT_CACHED;
>> init_waitqueue_head(&subscriptions->wq);
>> INIT_HLIST_HEAD(&subscriptions->deferred_list);
>> + subscriptions->active_invalidate_ranges = 0;
>> + subscriptions->has_itree = false;
>> }
>>
>> ret = mm_take_all_locks(mm);
>> --
>> 2.20.1
>>
>>
>

2020-09-08 06:46:34

by Mike Rapoport

Subject: Re: [PATCH] mm/mmu_notifier.c: micro-optimization substitute kzalloc with kmalloc

On Sun, Sep 06, 2020 at 06:06:39PM +0200, Mateusz Nosek wrote:
> Hi,
>
> I performed simple benchmarks using a custom kernel module with the
> code fragment in question copy-pasted into it, in both versions. For
> 1k, 10k and 100k iterations the average time for the kzalloc version
> was 5.1 and for the kmalloc version 3.9, at each iteration count.
> The time was measured using the ktime_get() function and the results
> given here are in ktime_t units.
> The machine I used has a 4-core Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz.
>
> There is a performance increase, but as you wrote it is probably not
> really noticeable.

I don't think that saving a few cycles of memset() in a function that
is called only on the initialization path, and only in very particular
cases, is worth risking uninitialized fields when somebody adds a new
member to 'struct mmu_notifier_subscriptions' and forgets to set it
explicitly.
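
To make the failure mode concrete, here is a hypothetical userspace
analogue, with malloc()/calloc() standing in for kmalloc()/kzalloc():
a field is added to the struct, the open-coded init is not updated,
and the malloc() version silently reads garbage:

#include <stdio.h>
#include <stdlib.h>

struct subs {
	unsigned long invalidate_seq;
	unsigned long new_field;	/* added later; init code forgot it */
};

static struct subs *make_subs(int zeroed)
{
	struct subs *s = zeroed ? calloc(1, sizeof(*s))
				: malloc(sizeof(*s));

	if (s)
		s->invalidate_seq = 2;	/* new_field is never set */
	return s;
}

int main(void)
{
	struct subs *a = make_subs(1);
	struct subs *b = make_subs(0);

	/* calloc/kzalloc: zero by construction; malloc/kmalloc: whatever
	 * was left in the heap -- reading it is exactly the bug. */
	printf("calloc: new_field = %lu (always 0)\n", a->new_field);
	printf("malloc: new_field = %lu (indeterminate)\n", b->new_field);
	free(a);
	free(b);
	return 0;
}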

> I have found three other places in the kernel with similar
> kzalloc-related patterns, none of which seems to be in 'hot' code.
> I leave the decision whether this patch, and potential others regarding
> this issue, are worth applying to the community and the maintainers.
>
> Best regards,
> Mateusz Nosek
>
> On 9/6/2020 4:26 PM, Mike Rapoport wrote:
> > Hi,
> >
> > On Sun, Sep 06, 2020 at 01:43:21PM +0200, [email protected] wrote:
> > > From: Mateusz Nosek <[email protected]>
> > >
> > > Most fields in the struct pointed to by 'subscriptions' are initialized
> > > explicitly after the allocation. By changing kzalloc() to kmalloc() the
> > > implicit memset() is avoided. As the only new code consists of two
> > > simple stores, performance is slightly increased.
> >
> > Is there a measurable performance increase?
> >
> > __mmu_notifier_register() is not called frequently enough to justify
> > trading the robustness of kzalloc() for a slight (if visible at all)
> > performance gain.
> >
> > > Signed-off-by: Mateusz Nosek <[email protected]>
> > > ---
> > > mm/mmu_notifier.c | 4 +++-
> > > 1 file changed, 3 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
> > > index 4fc918163dd3..190e198dc5be 100644
> > > --- a/mm/mmu_notifier.c
> > > +++ b/mm/mmu_notifier.c
> > > @@ -625,7 +625,7 @@ int __mmu_notifier_register(struct mmu_notifier *subscription,
> > > * know that mm->notifier_subscriptions can't change while we
> > > * hold the write side of the mmap_lock.
> > > */
> > > - subscriptions = kzalloc(
> > > + subscriptions = kmalloc(
> > > sizeof(struct mmu_notifier_subscriptions), GFP_KERNEL);
> > > if (!subscriptions)
> > > return -ENOMEM;
> > > @@ -636,6 +636,8 @@ int __mmu_notifier_register(struct mmu_notifier *subscription,
> > > subscriptions->itree = RB_ROOT_CACHED;
> > > init_waitqueue_head(&subscriptions->wq);
> > > INIT_HLIST_HEAD(&subscriptions->deferred_list);
> > > + subscriptions->active_invalidate_ranges = 0;
> > > + subscriptions->has_itree = false;
> > > }
> > >
> > > ret = mm_take_all_locks(mm);
> > > --
> > > 2.20.1
> > >
> > >
> >

--
Sincerely yours,
Mike.

2020-09-08 23:36:18

by Jason Gunthorpe

Subject: Re: [PATCH] mm/mmu_notifier.c: micro-optimization substitute kzalloc with kmalloc

On Tue, Sep 08, 2020 at 09:42:45AM +0300, Mike Rapoport wrote:
> On Sun, Sep 06, 2020 at 06:06:39PM +0200, Mateusz Nosek wrote:
> > Hi,
> >
> > I performed simple benchmarks using a custom kernel module with the
> > code fragment in question copy-pasted into it, in both versions. For
> > 1k, 10k and 100k iterations the average time for the kzalloc version
> > was 5.1 and for the kmalloc version 3.9, at each iteration count.
> > The time was measured using the ktime_get() function and the results
> > given here are in ktime_t units.
> > The machine I used has a 4-core Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz.
> >
> > There is a performance increase, but as you wrote it is probably not
> > really noticeable.
>
> I don't think that saving a few cycles of memset() in a function that
> is called only on the initialization path, and only in very particular
> cases, is worth risking uninitialized fields when somebody adds a new
> member to 'struct mmu_notifier_subscriptions' and forgets to set it
> explicitly.

Indeed, it is not a common path, and any code running here is already
very expensive (e.g. it calls mm_take_all_locks()).

So there is no reason at all to optimize this and risk problems down
the road.

Jason