It was observed that if an order of MAX_ORDER or higher is passed to
__alloc_pages_nodemask(), the allocation unnecessarily goes to the
slowpath and only then returns failure.
Since we know that an order >= MAX_ORDER will fail anyway, we can
avoid the slowpath by returning failure early in __alloc_pages_nodemask() itself.
Signed-off-by: Pintu Kumar <[email protected]>
---
mm/page_alloc.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 202ab58..6d38e75 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1564,6 +1564,10 @@ __setup("fail_page_alloc=", setup_fail_page_alloc);
 
 static bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
 {
+	if (order >= MAX_ORDER) {
+		WARN_ON(!(gfp_mask & __GFP_NOWARN));
+		return false;
+	}
 	if (order < fail_page_alloc.min_order)
 		return false;
 	if (gfp_mask & __GFP_NOFAIL)
--
1.7.9.5
Hi Pintu,
On Mon, Jul 22, 2013 at 05:02:42PM +0530, Pintu Kumar wrote:
> It was observed that if an order of MAX_ORDER or higher is passed to
> __alloc_pages_nodemask(), the allocation unnecessarily goes to the
> slowpath and only then returns failure.
> Since we know that an order >= MAX_ORDER will fail anyway, we can
> avoid the slowpath by returning failure early in __alloc_pages_nodemask() itself.
>
> Signed-off-by: Pintu Kumar <[email protected]>
> ---
> mm/page_alloc.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 202ab58..6d38e75 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1564,6 +1564,10 @@ __setup("fail_page_alloc=", setup_fail_page_alloc);
>
>  static bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
>  {
> +	if (order >= MAX_ORDER) {
> +		WARN_ON(!(gfp_mask & __GFP_NOWARN));
> +		return false;
> +	}
I don't see how this solves what you describe (should return true?)
It would also not be a good place to put a performance optimization,
because this function is only called as part of a debugging mechanism
that is usually disabled.
Lastly, order >= MAX_ORDER is not supported by the page allocator, and
we do not want to punish 99.999% of all legitimate page allocations in
the fast path in order to catch an unlikely situation like this.
Having the check only in the slowpath is a good thing.
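For reference, the slowpath already performs this sanity check near the
top of __alloc_pages_slowpath(); paraphrasing mm/page_alloc.c, it looks
roughly like this:

	/*
	 * In the slowpath, we sanity check order to avoid ever trying to
	 * reclaim >= MAX_ORDER areas which will never succeed. Callers may
	 * be using allocators in order of preference for an area that is
	 * too large.
	 */
	if (order >= MAX_ORDER) {
		WARN_ON_ONCE(!(gfp_mask & __GFP_NOWARN));
		return NULL;
	}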
Hi Johannes,
Thank you for your reply.
This is my first kernel patch, sorry for the small mistakes.
Please find my comments inline.
> From: Johannes Weiner <[email protected]>
>To: Pintu Kumar <[email protected]>
>Cc: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]
>Sent: Monday, 22 July 2013 10:08 PM
>Subject: Re: [PATCH 2/2] mm: page_alloc: avoid slowpath for more than MAX_ORDER allocation.
>
>
>Hi Pintu,
>
>On Mon, Jul 22, 2013 at 05:02:42PM +0530, Pintu Kumar wrote:
>> It was observed that if an order of MAX_ORDER or higher is passed to
>> __alloc_pages_nodemask(), the allocation unnecessarily goes to the
>> slowpath and only then returns failure.
>> Since we know that an order >= MAX_ORDER will fail anyway, we can
>> avoid the slowpath by returning failure early in __alloc_pages_nodemask() itself.
>>
>> Signed-off-by: Pintu Kumar <[email protected]>
>> ---
>>  mm/page_alloc.c |    4 ++++
>>  1 file changed, 4 insertions(+)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 202ab58..6d38e75 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -1564,6 +1564,10 @@ __setup("fail_page_alloc=", setup_fail_page_alloc);
>>
>>  static bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
>>  {
>> +	if (order >= MAX_ORDER) {
>> +		WARN_ON(!(gfp_mask & __GFP_NOWARN));
>> +		return false;
>> +	}
>
>I don't see how this solves what you describe (should return true?)
>
Ok, sorry, I will correct this.
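I guess the corrected hunk should return true so that the allocation
actually fails, something like:

	if (order >= MAX_ORDER) {
		WARN_ON(!(gfp_mask & __GFP_NOWARN));
		return true;	/* fail the allocation */
	}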
>It would also not be a good place to put a performance optimization,
>because this function is only called as part of a debugging mechanism
>that is usually disabled.
>
Ok, so you mean we should add this check directly at the top of __alloc_pages_nodemask()?
>Lastly, order >= MAX_ORDER is not supported by the page allocator, and
>we do not want to punish 99.999% of all legitimate page allocations in
>the fast path in order to catch an unlikely situation like this.
So, is it fine if we add an unlikely() condition like below:
if (unlikely(order >= MAX_ORDER))
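Expanded into a full hunk at the top of __alloc_pages_nodemask(), I
mean something like:

	if (unlikely(order >= MAX_ORDER)) {
		WARN_ON(!(gfp_mask & __GFP_NOWARN));
		return NULL;
	}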
>Having the check only in the slowpath is a good thing.
>
Sorry, I could not understand why it is good to have this check only in the slowpath.
We could have returned failure much earlier than that.
Without this check, we are actually allowing the "first allocation attempt" to fail and only then reporting the cause of failure in the slowpath.
I thought it would be better to catch this unlikely failure as early as possible, at least from the embedded-systems perspective.
Let me know your opinion.
On Mon, Jul 22, 2013 at 07:01:18PM -0700, PINTU KUMAR wrote:
> >Lastly, order >= MAX_ORDER is not supported by the page allocator, and
> >we do not want to punish 99.999% of all legitimate page allocations in
> >the fast path in order to catch an unlikely situation like this.
[...]
> >Having the check only in the slowpath is a good thing.
> >
> Sorry, I could not understand why it is good to have this check only in the slowpath.
> We could have returned failure much earlier than that.
> Without this check, we are actually allowing the "first allocation attempt" to fail and only then reporting the cause of failure in the slowpath.
> I thought it would be better to catch this unlikely failure as early as possible, at least from the embedded-systems perspective.
> Let me know your opinion.
This is a trade-off between two cases: we expect (almost) all
allocations to be order < MAX_ORDER, so we want that path as
lightweight as possible. On the other hand, we expect that only very
rarely an allocation will specify order >= MAX_ORDER. By doing the
check late, we make the common case faster at the expense of the rare
case. That's the whole point of having a fast path and a slow path.
What you are proposing would punish 99.999% of all cases in order to
speed up the 0.001% cases. In addition, these 0.001% of all cases
will fail the allocation, so performance is the least of their
worries. It's a bad trade-off.
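To make that concrete, the structure of __alloc_pages_nodemask() is
roughly the following (simplified, with the surrounding details elided):

struct page *
__alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
		       struct zonelist *zonelist, nodemask_t *nodemask)
{
	struct page *page;
	...
	/* First allocation attempt: the fast path, no sanity checks */
	page = get_page_from_freelist(gfp_mask|__GFP_HARDWALL, nodemask,
				      order, zonelist, high_zoneidx,
				      alloc_flags, preferred_zone, migratetype);
	if (unlikely(!page))
		/*
		 * Only here, after the fast path has already failed,
		 * do we pay for the order >= MAX_ORDER check.
		 */
		page = __alloc_pages_slowpath(gfp_mask, order, zonelist,
					      high_zoneidx, nodemask,
					      preferred_zone, migratetype);
	...
	return page;
}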