The variable current_order decreases from MAX_ORDER-1 to order, so the
condition current_order <= MAX_ORDER-1 is always true.
Signed-off-by: Hao Lee <[email protected]>
---
mm/page_alloc.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2302f25..9120c2b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2215,9 +2215,8 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
bool can_steal;
/* Find the largest possible block of pages in the other list */
- for (current_order = MAX_ORDER-1;
- current_order >= order && current_order <= MAX_ORDER-1;
- --current_order) {
+ for (current_order = MAX_ORDER-1; current_order >= order;
+ --current_order) {
area = &(zone->free_area[current_order]);
fallback_mt = find_suitable_fallback(area, current_order,
start_migratetype, false, &can_steal);
--
2.9.3
On 06/19/2017 03:54 PM, Hao Lee wrote:
> The variable current_order decreases from MAX_ORDER-1 to order, so the
> condition current_order <= MAX_ORDER-1 is always true.
>
> Signed-off-by: Hao Lee <[email protected]>
Sounds right.
Acked-by: Vlastimil Babka <[email protected]>
On Mon, Jun 19 2017, Vlastimil Babka <[email protected]> wrote:
> On 06/19/2017 03:54 PM, Hao Lee wrote:
>> The variable current_order decreases from MAX_ORDER-1 to order, so the
>> condition current_order <= MAX_ORDER-1 is always true.
>>
>> Signed-off-by: Hao Lee <[email protected]>
>
> Sounds right.
>
> Acked-by: Vlastimil Babka <[email protected]>
current_order and order are both unsigned, and if order==0,
current_order >= order is always true, and we may decrement
current_order past 0 making it UINT_MAX... A comment would be in order,
though.
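To make the wraparound concrete, here is a minimal userspace sketch (not
kernel code; MAX_ORDER hard-coded to 11 purely for illustration) of the
loop with the second check dropped:

#include <stdio.h>

#define MAX_ORDER 11	/* illustrative value only */

int main(void)
{
	unsigned int order = 0;		/* the problematic case */
	unsigned int current_order;
	unsigned int iterations = 0;

	for (current_order = MAX_ORDER - 1;
	     current_order >= order;	/* always true when order == 0 */
	     --current_order) {
		/* the kernel would index zone->free_area[current_order] here */
		if (++iterations > MAX_ORDER) {
			printf("wrapped: current_order = %u\n", current_order);
			break;	/* without this, the loop never terminates */
		}
	}
	return 0;
}

Once current_order wraps to UINT_MAX, the real code would then index
zone->free_area[] far out of bounds; the seemingly redundant
current_order <= MAX_ORDER-1 check is exactly what ends the loop after
the wrap.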
On 06/19/2017 09:05 PM, Rasmus Villemoes wrote:
> On Mon, Jun 19 2017, Vlastimil Babka <[email protected]> wrote:
>
>> On 06/19/2017 03:54 PM, Hao Lee wrote:
>>> The variable current_order decreases from MAX_ORDER-1 to order, so the
>>> condition current_order <= MAX_ORDER-1 is always true.
>>>
>>> Signed-off-by: Hao Lee <[email protected]>
>>
>> Sounds right.
>>
>> Acked-by: Vlastimil Babka <[email protected]>
>
> current_order and order are both unsigned, and if order==0,
> current_order >= order is always true, and we may decrement
> current_order past 0 making it UINT_MAX... A comment would be in order,
> though.
Doh, right. Thanks.
On Tue, Jun 20, 2017 at 3:05 AM, Rasmus Villemoes
<[email protected]> wrote:
> On Mon, Jun 19 2017, Vlastimil Babka <[email protected]> wrote:
>
>> On 06/19/2017 03:54 PM, Hao Lee wrote:
>>> The variable current_order decreases from MAX_ORDER-1 to order, so the
>>> condition current_order <= MAX_ORDER-1 is always true.
>>>
>>> Signed-off-by: Hao Lee <[email protected]>
>>
>> Sounds right.
>>
>> Acked-by: Vlastimil Babka <[email protected]>
>
> current_order and order are both unsigned, and if order==0,
> current_order >= order is always true, and we may decrement
> current_order past 0 making it UINT_MAX... A comment would be in order,
> though.
Thanks, I didn't notice the unsigned subtraction issue. Sorry about that.
On Mon 19-06-17 21:05:29, Rasmus Villemoes wrote:
> On Mon, Jun 19 2017, Vlastimil Babka <[email protected]> wrote:
>
> > On 06/19/2017 03:54 PM, Hao Lee wrote:
> >> The variable current_order decreases from MAX_ORDER-1 to order, so the
> >> condition current_order <= MAX_ORDER-1 is always true.
> >>
> >> Signed-off-by: Hao Lee <[email protected]>
> >
> > Sounds right.
> >
> > Acked-by: Vlastimil Babka <[email protected]>
>
> current_order and order are both unsigned, and if order==0,
> current_order >= order is always true, and we may decrement
> current_order past 0 making it UINT_MAX... A comment would be in order,
> though.
Yes, this is not the first time it has been brought up:
https://lkml.org/lkml/2016/6/20/493. I guess a comment is long overdue.
Or we could just get rid of the unsigned trap altogether, which would
probably be cleaner.
--
Michal Hocko
SUSE Labs
Since current_order starts as MAX_ORDER-1 and is then only
decremented, the second half of the loop condition seems
superfluous. However, if order is 0, we may decrement current_order
past 0, making it UINT_MAX. This is obviously too subtle ([1], [2]).
Since we need to add some comment anyway, change the two variables to
signed, making the counting-down for loop look more familiar, and
apparently also making gcc generate slightly smaller code.
[1] https://lkml.org/lkml/2016/6/20/493
[2] https://lkml.org/lkml/2017/6/19/345
Signed-off-by: Rasmus Villemoes <[email protected]>
---
Michal, something like this, perhaps?
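For reference, a standalone userspace sketch (not kernel code; MAX_ORDER
hard-coded to 11 purely for illustration) of the signed counting-down
loop, showing that it terminates for order == 0 without any extra bound:

#include <stdio.h>

#define MAX_ORDER 11	/* illustrative value only */

static void walk_orders(int order)
{
	int current_order;

	/* current_order reaches -1 and the condition fails, even when order == 0 */
	for (current_order = MAX_ORDER - 1; current_order >= order;
	     --current_order)
		printf("visiting order %d\n", current_order);
}

int main(void)
{
	walk_orders(0);		/* prints orders 10 down to 0, then stops */
	return 0;
}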
mm/page_alloc.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2302f250d6b1..e656f4da9772 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2204,19 +2204,23 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
* list of requested migratetype, possibly along with other pages from the same
* block, depending on fragmentation avoidance heuristics. Returns true if
* fallback was found so that __rmqueue_smallest() can grab it.
+ *
+ * The use of signed ints for order and current_order is a deliberate
+ * deviation from the rest of this file, to make the for loop
+ * condition simpler.
*/
static inline bool
-__rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
+__rmqueue_fallback(struct zone *zone, int order, int start_migratetype)
{
struct free_area *area;
- unsigned int current_order;
+ int current_order;
struct page *page;
int fallback_mt;
bool can_steal;
/* Find the largest possible block of pages in the other list */
for (current_order = MAX_ORDER-1;
- current_order >= order && current_order <= MAX_ORDER-1;
+ current_order >= order;
--current_order) {
area = &(zone->free_area[current_order]);
fallback_mt = find_suitable_fallback(area, current_order,
--
2.11.0
On Wed 21-06-17 20:55:28, Rasmus Villemoes wrote:
> Since current_order starts as MAX_ORDER-1 and is then only
> decremented, the second half of the loop condition seems
> superfluous. However, if order is 0, we may decrement current_order
> past 0, making it UINT_MAX. This is obviously too subtle ([1], [2]).
>
> Since we need to add some comment anyway, change the two variables to
> signed, making the counting-down for loop look more familiar, and
> apparently also making gcc generate slightly smaller code.
>
> [1] https://lkml.org/lkml/2016/6/20/493
> [2] https://lkml.org/lkml/2017/6/19/345
>
> Signed-off-by: Rasmus Villemoes <[email protected]>
I would hope for more consistent usage of the type, but this alone
should prevent future attempts to "clean up" the code.
Acked-by: Michal Hocko <[email protected]>
--
Michal Hocko
SUSE Labs
On 06/21/2017 08:55 PM, Rasmus Villemoes wrote:
> Since current_order starts as MAX_ORDER-1 and is then only
> decremented, the second half of the loop condition seems
> superfluous. However, if order is 0, we may decrement current_order
> past 0, making it UINT_MAX. This is obviously too subtle ([1], [2]).
>
> Since we need to add some comment anyway, change the two variables to
> signed, making the counting-down for loop look more familiar, and
> apparently also making gcc generate slightly smaller code.
>
> [1] https://lkml.org/lkml/2016/6/20/493
> [2] https://lkml.org/lkml/2017/6/19/345
>
> Signed-off-by: Rasmus Villemoes <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
On Wed, Jun 21, 2017 at 08:55:28PM +0200, Rasmus Villemoes wrote:
>Since current_order starts as MAX_ORDER-1 and is then only
>decremented, the second half of the loop condition seems
>superfluous. However, if order is 0, we may decrement current_order
>past 0, making it UINT_MAX. This is obviously too subtle ([1], [2]).
>
>Since we need to add some comment anyway, change the two variables to
>signed, making the counting-down for loop look more familiar, and
>apparently also making gcc generate slightly smaller code.
>
>[1] https://lkml.org/lkml/2016/6/20/493
>[2] https://lkml.org/lkml/2017/6/19/345
>
>Signed-off-by: Rasmus Villemoes <[email protected]>
Looks nice. I wonder why I didn't come up with this change myself.
Acked-by: Wei Yang <[email protected]>
--
Wei Yang
Help you, Help me