Subject: Re: [PATCH 06/10] mm, page_alloc: Use jump label to check if page
 grouping by mobility is enabled
From: Vlastimil Babka
To: Mel Gorman, Linux-MM
Cc: Johannes Weiner, Rik van Riel, Pintu Kumar, Xishi Qiu, Gioh Kim,
 LKML, Mel Gorman, Peter Zijlstra
Date: Tue, 28 Jul 2015 15:42:12 +0200
Message-ID: <55B786B4.8030702@suse.cz>
In-Reply-To: <1437379219-9160-7-git-send-email-mgorman@suse.com>
References: <1437379219-9160-1-git-send-email-mgorman@suse.com>
 <1437379219-9160-7-git-send-email-mgorman@suse.com>

On 07/20/2015 10:00 AM, Mel Gorman wrote:
> From: Mel Gorman
>
> The global variable page_group_by_mobility_disabled remembers if page grouping
> by mobility was disabled at boot time. It's more efficient to do this by jump
> label.
>
> Signed-off-by: Mel Gorman

[+CC Peterz]

> ---
>  include/linux/gfp.h    |  2 +-
>  include/linux/mmzone.h |  7 ++++++-
>  mm/page_alloc.c        | 15 ++++++---------
>  3 files changed, 13 insertions(+), 11 deletions(-)
>
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index 6d3a2d430715..5a27bbba63ed 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -151,7 +151,7 @@ static inline int gfpflags_to_migratetype(const gfp_t gfp_flags)
>  {
>  	WARN_ON((gfp_flags & GFP_MOVABLE_MASK) == GFP_MOVABLE_MASK);
>  
> -	if (unlikely(page_group_by_mobility_disabled))
> +	if (page_group_by_mobility_disabled())
>  		return MIGRATE_UNMOVABLE;
>  
>  	/* Group based on mobility */
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 672ac437c43c..c9497519340a 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -73,7 +73,12 @@ enum {
>  	for (order = 0; order < MAX_ORDER; order++) \
>  		for (type = 0; type < MIGRATE_TYPES; type++)
>  
> -extern int page_group_by_mobility_disabled;
> +extern struct static_key page_group_by_mobility_key;

The "disabled" part is no longer in the name, but I suspect you didn't
want it to be too long?
> +
> +static inline bool page_group_by_mobility_disabled(void)
> +{
> +	return static_key_false(&page_group_by_mobility_key);
> +}
>  
>  #define NR_MIGRATETYPE_BITS (PB_migrate_end - PB_migrate + 1)
>  #define MIGRATETYPE_MASK ((1UL << NR_MIGRATETYPE_BITS) - 1)
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 56432b59b797..403cf31f8cf9 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -228,7 +228,7 @@ EXPORT_SYMBOL(nr_node_ids);
>  EXPORT_SYMBOL(nr_online_nodes);
>  #endif
>  
> -int page_group_by_mobility_disabled __read_mostly;
> +struct static_key page_group_by_mobility_key __read_mostly = STATIC_KEY_INIT_FALSE;
>  
>  #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
>  static inline void reset_deferred_meminit(pg_data_t *pgdat)
> @@ -303,8 +303,7 @@ static inline bool update_defer_init(pg_data_t *pgdat,
>  
>  void set_pageblock_migratetype(struct page *page, int migratetype)
>  {
> -	if (unlikely(page_group_by_mobility_disabled &&
> -		     migratetype < MIGRATE_PCPTYPES))
> +	if (page_group_by_mobility_disabled() && migratetype < MIGRATE_PCPTYPES)
>  		migratetype = MIGRATE_UNMOVABLE;
>  
>  	set_pageblock_flags_group(page, (unsigned long)migratetype,
> @@ -1501,7 +1500,7 @@ static bool can_steal_fallback(unsigned int order, int start_mt)
>  	if (order >= pageblock_order / 2 ||
>  		start_mt == MIGRATE_RECLAIMABLE ||
>  		start_mt == MIGRATE_UNMOVABLE ||
> -		page_group_by_mobility_disabled)
> +		page_group_by_mobility_disabled())
>  		return true;
>  
>  	return false;
> @@ -1530,7 +1529,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>  
>  	/* Claim the whole block if over half of it is free */
>  	if (pages >= (1 << (pageblock_order-1)) ||
> -			page_group_by_mobility_disabled)
> +			page_group_by_mobility_disabled())
>  		set_pageblock_migratetype(page, start_type);
>  }
>  
> @@ -4156,15 +4155,13 @@ void __ref build_all_zonelists(pg_data_t *pgdat, struct zone *zone)
>  	 * disabled and enable it later
>  	 */
>  	if (vm_total_pages < (pageblock_nr_pages * MIGRATE_TYPES))
> -		page_group_by_mobility_disabled = 1;
> -	else
> -		page_group_by_mobility_disabled = 0;
> +		static_key_slow_inc(&page_group_by_mobility_key);

Um, so previously, booting with little memory would disable grouping by
mobility, and later hotplugging would enable it again, right? But this
is now removed, and once disabled means always disabled? That can't be
right, and I'm not sure about the effects of the recently introduced
delayed initialization here.

Looks like the API addition that Peter just posted would be useful
here :)

http://marc.info/?l=linux-kernel&m=143808996921651&w=2

>  
>  	pr_info("Built %i zonelists in %s order, mobility grouping %s. "
>  		"Total pages: %ld\n",
>  		nr_online_nodes,
>  		zonelist_order_name[current_zonelist_order],
> -		page_group_by_mobility_disabled ? "off" : "on",
> +		page_group_by_mobility_disabled() ? "off" : "on",
>  		vm_total_pages);
>  #ifdef CONFIG_NUMA
>  	pr_info("Policy zone: %s\n", zone_names[policy_zone]);