If zid reaches ZONE_NORMAL, the caller will always get the NORMAL zone no
matter what zone_intersects() returns. So we can save some CPU cycles by
not calling zone_intersects() for ZONE_NORMAL.
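For context, an abridged sketch of default_kernel_zone_for_pfn() with this
change applied (the unconditional ZONE_NORMAL fallback after the loop, which
the paragraph above relies on, is what makes probing ZONE_NORMAL redundant):

    static struct zone *default_kernel_zone_for_pfn(int nid,
                    unsigned long start_pfn, unsigned long nr_pages)
    {
            struct pglist_data *pgdat = NODE_DATA(nid);
            int zid;

            /* ZONE_NORMAL itself no longer needs to be probed here ... */
            for (zid = 0; zid < ZONE_NORMAL; zid++) {
                    struct zone *zone = &pgdat->node_zones[zid];

                    if (zone_intersects(zone, start_pfn, nr_pages))
                            return zone;
            }

            /* ... because the fallback is the NORMAL zone anyway. */
            return &pgdat->node_zones[ZONE_NORMAL];
    }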
Signed-off-by: Miaohe Lin <[email protected]>
---
mm/memory_hotplug.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index cbc67c27e0dd..140809e60e9a 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -826,7 +826,7 @@ static struct zone *default_kernel_zone_for_pfn(int nid, unsigned long start_pfn
 	struct pglist_data *pgdat = NODE_DATA(nid);
 	int zid;
 
-	for (zid = 0; zid <= ZONE_NORMAL; zid++) {
+	for (zid = 0; zid < ZONE_NORMAL; zid++) {
 		struct zone *zone = &pgdat->node_zones[zid];
 
 		if (zone_intersects(zone, start_pfn, nr_pages))
--
2.23.0
On 07.02.22 14:36, Miaohe Lin wrote:
> If zid reaches ZONE_NORMAL, the caller will always get the NORMAL zone no
> matter what zone_intersects() returns. So we can save some CPU cycles by
> not calling zone_intersects() for ZONE_NORMAL.
>
> Signed-off-by: Miaohe Lin <[email protected]>
> ---
> mm/memory_hotplug.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index cbc67c27e0dd..140809e60e9a 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -826,7 +826,7 @@ static struct zone *default_kernel_zone_for_pfn(int nid, unsigned long start_pfn
> struct pglist_data *pgdat = NODE_DATA(nid);
> int zid;
>
> - for (zid = 0; zid <= ZONE_NORMAL; zid++) {
> + for (zid = 0; zid < ZONE_NORMAL; zid++) {
> struct zone *zone = &pgdat->node_zones[zid];
>
> if (zone_intersects(zone, start_pfn, nr_pages))
Makes sense, although we just don't care about the CPU cycles at that point.
Reviewed-by: David Hildenbrand <[email protected]>
--
Thanks,
David / dhildenb
On Mon, Feb 07, 2022 at 09:36:41PM +0800, Miaohe Lin wrote:
> If zid reaches ZONE_NORMAL, the caller will always get the NORMAL zone no
> matter what zone_intersects() returns. So we can save some CPU cycles by
> not calling zone_intersects() for ZONE_NORMAL.
>
> Signed-off-by: Miaohe Lin <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
> ---
> mm/memory_hotplug.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index cbc67c27e0dd..140809e60e9a 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -826,7 +826,7 @@ static struct zone *default_kernel_zone_for_pfn(int nid, unsigned long start_pfn
> struct pglist_data *pgdat = NODE_DATA(nid);
> int zid;
>
> - for (zid = 0; zid <= ZONE_NORMAL; zid++) {
> + for (zid = 0; zid < ZONE_NORMAL; zid++) {
> struct zone *zone = &pgdat->node_zones[zid];
>
> if (zone_intersects(zone, start_pfn, nr_pages))
> --
> 2.23.0
>
>
--
Oscar Salvador
SUSE Labs